Introduction

The 1950s began amid a continuing wave of international paranoia. The Cold War intensified in the late 1940s as the West responded to the ideological loss of China to communism and the very real loss of atom bomb exclusivity to the Soviets. The first few years of the 1950s heated up again with the Korean conflict. On the home front, World War II–related science that was now declassified, together with America’s factories now turned to peace, shaped a postwar economic boom. Driven by an unprecedented baby boom, the era’s mass consumerism focused on housing, appliances, automobiles, and luxury goods. Technologies applied to civilian life included silicone products, microwave ovens, radar, plastics, nylon stockings, long-playing vinyl records, and computing devices.

New medicines abounded, driven by new research possibilities and the momentum of the previous decade’s “Antibiotic Era.” A wave of government spending was spawned by two seminal influences: a comprehensive new federal science and technology policy and the anti-“Red” sentiment that dollars spent for science were dollars spent for democracy. While improved mechanization streamlined production in drug factories, the DNA era dawned. James Watson and Francis Crick determined the structure of the genetic material in 1953. Prescription and nonprescription drugs were legally distinguished from one another for the first time in the United States as the pharmaceutical industry matured. Human cell culture and radioimmunoassays developed as key research technologies; protein sequencing and synthesis burgeoned, promising the development of protein drugs.

In part because of Cold War politics, in part because the world was becoming a smaller place, global health issues took center stage. Fast foods and food additives became commonplace in the West. “The Pill” was developed and first tested in Puerto Rico. Ultrasound was adapted to fetal monitoring. Gas chromatography (GC), mass spectrometry, and polyacrylamide gel electrophoresis began transforming drug research, as did the growth of the National Institutes of Health (NIH) and the National Science Foundation (NSF). The foundations of modern immunology were laid as the pharmaceutical industry moved ever forward in mass-marketing through radio and the still-novel format of television. But looming above all in the Western world of this time was the heroic-scientist image of Jonas Salk, savior of children through the conquest of polio via vaccines.

Antibiotics redux
Not only did the development of antibiotics that began in the 1940s lead to the control of bacterial infections, it also permitted remarkable breakthroughs in the growth of tissue culture in the 1950s. These breakthroughs enabled the growth of polio and other viruses in animal cell cultures rather than in whole animals, and permitted a host of sophisticated physiological studies that had never before been possible. Scientists had been familiar with the concepts of tissue culture since the first decades of the century, but routine application was still too difficult. After antibiotics were discovered, such research no longer required an “artist in biological technique” to maintain the requisite sterile conditions in the isolation, maintenance, and use of animal cells, according to virus researcher Kingsley F. Sanders. In 1957, he noted that the use of antibiotics in tissue culture made the process so easy that “even an amateur in his kitchen can do it.”

Funding medicine
As early as 1946, Vannevar Bush argued for creating a national science funding body. The heating up of the Cold War, as much as anything else, precipitated the 1950 implementation of his idea in the form of the National Science Foundation—a major funnel for government funding of basic research, primarily for the university sector. It was a federal version of the phenomenally successful Rockefeller Foundation. The new foundation had a Division of Biological and Medical Sciences, but its mission was limited to supporting basic research so that it wouldn’t compete with the more clinically oriented research of the NIH.

The NIH rode high throughout the 1950s, with Congress regularly adding $8 million to $15 million to the NIH budget proposed by the first Eisenhower administration. By 1956, the NIH budget had risen to almost $100 million. By the end of the decade, the NIH was supporting some 10,000 research projects at 200 universities and medical schools at a cost of $250 million.

Other areas of government also expanded basic medical research under the Bush vision. In 1950, for example, the Atomic Energy Commission received a $5 million allocation from Congress specifically to relate atomic research to cancer treatment. In this same vein, in 1956, Oak Ridge National Laboratory established a medical instruments group to help promote the development of technology for disease diagnostics and treatment that would lead, in conjunction with advances in radioisotope technology, to a new era of physiologically driven medicine. Part of this research funding was under the auspices of the Atoms for Peace program and led to the proliferation of human experiments using radioactive isotopes, often in a manner that would horrify a later generation of Americans with its cavalier disregard for participants’ rights.

Science funding, including medical research, received an additional boost with the 1957 launch of the first orbital satellite. The Soviet Sputnik capped the era with a wave of science and technology fervor in industry, government, and even the public schools. The perceived “science gap” between the United States and the Soviet Union led to the 1958 National Defense Education Act. The act continued the momentum of government-supported education that had begun with the GI Bill, providing a new, highly trained, and competent workforce that would transform industry. This focus on the importance of technology fostered increased reliance on mechanized mass-production techniques. During World War II, the pharmaceutical industry had learned its lesson—that bigger was better in manufacturing methods—as it responded to the high demand for penicillin.

Private funds were also increasingly available throughout the 1950s—and not just from philanthropic institutions. The American Cancer Society and the National Foundation for Infantile Paralysis were two of the largest public disease advocacy groups that collected money from the general public and directed the significant funds to scientific research. This link between the public and scientific research created, in some small fashion, a sense of investment in curing disease, just as investing in savings bonds and stamps in the previous decade had created a sense of helping to win World War II.

The war on polio
The Salk vaccine relied on the new technology of growing viruses in cell cultures, specifically in monkey kidney cells (first available in 1949). Later, the human HeLa cell line was used as well. The techniques were developed by John F. Enders (Harvard Medical School), Thomas H. Weller (Children’s Medical Center, Boston), and Frederick C. Robbins (Case Western Reserve University, Cleveland), who received the 1954 Nobel Prize in Physiology or Medicine for their achievement. Salk began preliminary testing of his polio vaccine in 1952, with a massive field trial in the United States in 1954. According to Richard Carter, a May 1954 Gallup Poll found that “more Americans were aware of the polio field trial than knew the full name of the President of the United States.” Salk’s vaccine was a killed-virus vaccine that was capable of causing the disease only when mismanaged in production. This unfortunately happened within two weeks of the vaccine’s release. The CDC Poliomyelitis Surveillance Unit was immediately established; its popularly known “disease detectives” tracked down the problem almost at once, and the “guilty” vaccine lot was withdrawn. It turned out that Cutter Laboratories had released a batch of vaccine with live contaminants that tragically resulted in at least 260 cases of vaccine-induced polio.
The debate that raged in the 1950s over Salk versus Sabin (fueled at the time by a history of scientific disputes between the two men) continues today: Some countries primarily use the injected vaccine, others use the oral vaccine, and still others use one or the other, depending on the individual patient.

Instruments and assays
Of particular value to medical microbiology and ultimately to the development of biotechnology was the production of an automated bacterial colony counter. This type of research was first commissioned by the U.S. Army Chemical Corps; the Office of Naval Research and the NIH then provided a significant grant for the development of the Coulter counter, commercially introduced as the Model A. A.J.P. Martin in Britain developed gas–liquid partition chromatography in 1952. The first commercial devices became available three years later, providing a powerful new technology for chemical analysis. In 1954, Texas Instruments introduced silicon transistors—a technology encompassing everything from transistorized analytical instruments to improved computers and, for the mass market, miniaturized radios. The principle for electromagnetic microbalances was developed near the middle of the decade, and a prototype CT scanner was unveiled. In 1958, amniocentesis was developed, and Scottish physician Ian McDonald pioneered the use of ultrasound for diagnostics and therapeutics. Radiometer micro pH electrodes were developed by Danish chemists for bedside blood analysis. In a further improvement in computing technology, Jack Kilby at Texas Instruments developed the integrated circuit in 1958. In 1959, the critical technique of polyacrylamide gel electrophoresis (PAGE) was in place, making much of the coming biotechnological analysis of nucleic acids and proteins feasible.

Strides in the use of atomic energy continued apace with heavy government funding. In 1951, Brookhaven National Laboratory opened its first hospital devoted to nuclear medicine, followed seven years later by a Medical Research Center dedicated to the quest for new technologies and instruments. By 1959, the Brookhaven Medical Research Reactor was inaugurated, making medical isotopes significantly cheaper and more available for a variety of research and therapeutic purposes. In one of the most significant breakthroughs in using isotopes for research purposes, in 1952, Rosalyn Sussman Yalow, working at the Veterans Hospital in the Bronx in association with Solomon A. Berson, developed the radioimmunoassay (RIA) for detecting and following antibodies and other proteins and hormones in the body.

Physiology explodes
New compounds and structures were identified in the human body throughout the decade. In 1950, GABA (gamma-aminobutyric acid) was identified in the brain. Soon after that, Italian biologist Rita Levi-Montalcini demonstrated the existence of nerve growth factor. In Germany, F.F.K. Lynen isolated the critical enzyme cofactor acetyl-CoA in 1955. Human growth hormone was isolated for the first time in 1956. That same year, William C. Boyd of the Boston University Medical School identified 13 “races” of humans based on blood groups.

Breakthroughs were made that ultimately found their way into the development of biotechnology. By 1952, Robert Briggs and Thomas King, developmental biologists at the Institute for Cancer Research in Philadelphia, successfully transplanted frog nuclei from one egg to another—the ultimate forerunner of modern cloning techniques. Of tremendous significance to the concepts of gene therapy and specific drug targeting, sickle cell anemia was shown to be caused by a single amino acid difference between normal and sickle hemoglobin (1956–1958). Although from today’s perspective it seems to have occurred surprisingly late, in 1956 the human chromosome number was finally revised from the 1898 estimate of 24 pairs to the correct 23 pairs. By 1959, examination of chromosome abnormalities in shape and number had become an important diagnostic technique. That year, it was determined that Down’s syndrome patients had 47 chromosomes instead of 46. As a forerunner of the rapid development of immunological sciences, in 1959 Australian virologist Frank Macfarlane Burnet proposed his clonal selection theory of antibody production, which held that antibodies are selected and amplified from preexisting templates rather than designed anew under instruction from the antigen.

A rash of new drugs
In 1951, monoamine oxidase (MAO) inhibitors were introduced to treat psychosis. In 1952, reserpine was isolated from rauwolfia and eventually was used for treating essential hypertension. In 1953, the rauwolfia alkaloid became the first of the tranquilizer drugs; the source plant came from India, where it had long been used as a folk medicine. The thiazide drugs were also developed in this period as diuretics for treating high blood pressure. In 1956, halothane was introduced as a general anesthetic. In 1954, the highly touted chlorpromazine (Thorazine) was approved as an antipsychotic in the United States. It had started as an allergy drug developed by the French chemical firm Rhône-Poulenc and was noticed to “slow down” bodily processes.

Also in 1954, the FDA approved BHA (butylated hydroxyanisole) as a food preservative; coincidentally, McDonald’s was franchised that same year and soon became the largest “fast food” chain. Although not really a new “drug” (despite numerous fast food “addicts”), the arrival and popularity of national fast food chains (and ready-made meals such as the new TV dinners in the supermarket) were the beginning of a massive change in public nutrition and thus, public health.

Perhaps the most dramatic change in the popularization of drugs came with the 1955 marketing of meprobamate (first developed by Czech scientist Frank A. Berger) as Miltown (by Wallace Laboratories) and Equanil (by Wyeth). This was the first of the widely used tranquilizers, or anti-anxiety compounds, that set the stage for the 1960s “drug era.” The drug was so popular that it became iconic in American life. (Milton Berle, the most popular TV comedian of the time, once referred to himself as “Miltown” Berle.) Unfortunately, meprobamate also proved addictive.

In 1957, British researcher Alick Isaacs and Jean Lindenmann of the National Institute for Medical Research, Mill Hill, London, discovered interferon—a naturally occurring antiviral protein, although not until the 1970s (with the advent of gene-cloning technology) would it become routinely available for drug use. In 1958, a saccharin-based artificial sweetener was introduced to the American public. That year also marked the beginning of the thalidomide tragedy (in which the use of a new tranquilizer in pregnant women caused severe birth defects), although its scope would not become apparent until the 1960s. In 1959, Haldol (haloperidol) was first synthesized for treating psychotic disorders.

Blood products also became important therapeutics in this decade, in large part because of the 1950 development of methods for fractionating blood plasma by Edwin J. Cohn and colleagues. This allowed the production of numerous blood-based drugs, including factor X (1956), a protein common to both the intrinsic and extrinsic pathways of blood clotting, and factor VIII (1957), a blood-clotting protein used for treating hemophilia.

The birth of birth control
Margaret Sanger was a trained nurse and a supporter of radical, left-wing causes. Katharine McCormick was the daughter-in-law of Cyrus McCormick, founder of International Harvester, whose fortune she inherited when her husband died. Both were determined advocates of birth control as the means of solving the world’s overpopulation problem. Gregory Pincus (who cofounded the Worcester Foundation for Experimental Biology) was a physiologist whose research interests focused on the sexual physiology of rabbits. He managed to fertilize rabbit eggs in a test tube and got the resulting embryos to grow for a short time. The feat earned him considerable notoriety, and he continued to gain a reputation for his work in mammalian reproductive biology.

Sanger and McCormick approached Pincus and asked him to produce a physiological contraceptive. He agreed to the challenge, and McCormick agreed to fund the project. Pincus was certain that the key was the use of a female sex hormone such as progesterone. It was known that progesterone prevented ovulation and thus was a pregnancy-preventing hormone. The problem was finding suitable, inexpensive sources of the scarce compound to do the necessary research.

Enter American chemist Russell Marker. Marker’s research centered on converting sapogenin steroids found in plants into progesterone. His source for the sapogenins was a yam grown in Mexico. Marker and colleagues formed a company (Syntex) to produce progesterone. In 1949, he left the company over financial disputes and destroyed his notes and records. However, a young scientist hired that same year by Syntex ultimately figured prominently in further development of “the Pill.”
The new hire, Carl Djerassi, first worked on the synthesis of cortisone from diosgenin. He later turned his attention to synthesizing an “improved” progesterone, one that could be taken orally. In 1951, his group developed a progesterone-like compound called norethindrone.

Pincus had been experimenting with the use of progesterone in rabbits to prevent fertility. In 1952, he ran into an old acquaintance, John Rock, a gynecologist who had been using progesterone to enhance fertility in patients who were unable to conceive. Rock theorized that if ovulation were turned off for a short time, the reproductive system would rebound. Rock had essentially proved in humans that progesterone did prevent ovulation. Once Pincus and Rock learned of norethindrone, the stage was set for wider clinical trials that eventually led to FDA approval of an oral contraceptive in 1960. Many groups opposed this approval on moral, ethical, legal, and religious grounds. Despite such opposition, the Pill was widely used and came to have a profound impact on society.

DNA et al.
Two years after Watson and Crick’s 1953 determination of DNA’s structure, Severo Ochoa at New York University School of Medicine discovered polynucleotide phosphorylase, an RNA-degrading enzyme. In 1956, electron microscopy was used to determine that the cellular structures called microsomes contained RNA (they were thus renamed ribosomes). That same year, Arthur Kornberg at Washington University Medical School (St. Louis, MO) discovered DNA polymerase. Soon after that, the semiconservative nature of DNA replication was demonstrated, first by autoradiography in 1957 and then by density-gradient centrifugation in 1958. With the discovery of transfer RNA (tRNA) in 1957 by Mahlon Bush Hoagland at Harvard Medical School, all of the pieces were in place for Francis Crick to postulate in 1958 the “central dogma” of DNA—that genetic information is maintained and transferred in a one-way process, moving from nucleic acids to proteins. The path was set for the elucidation of the genetic code in the following decade. On a related note, in 1958, bacterial transduction was discovered by Joshua Lederberg at the University of Wisconsin—a critical step toward future genetic engineering.

Probing proteins
In the field of nutrition, in 1950 the protein-building role of the essential amino acids was demonstrated. In 1951, Linus Pauling, at the California Institute of Technology, proposed that protein structures are based on a primary alpha-helix (a structure that served as inspiration for helical models of DNA). Frederick Sanger at the Medical Research Council (MRC) Unit for Molecular Biology at Cambridge and Pehr Victor Edman developed methods for identifying N-terminal peptide residues, an important step toward improved protein sequencing. In 1952, Sanger used paper chromatography to sequence the amino acids in insulin. In 1953, Max Perutz and John Kendrew, cofounders of the MRC Unit for Molecular Biology, showed that X-ray diffraction of heavy-atom derivatives could be used to solve the structures of proteins such as hemoglobin.
In 1954, Vincent du Vigneaud at Cornell University synthesized the hormone oxytocin—the first naturally occurring polypeptide hormone to be made with the exact makeup it has in the body. The same year, ribosomes were identified as the site of protein synthesis. In 1956, the three-dimensional structure of a protein was linked to the sequence of its amino acids, so that by 1957, John Kendrew was able to solve the first three-dimensional structure of a protein (myoglobin); this was followed in 1959 with Max Perutz’s determination of the three-dimensional structure of hemoglobin. Ultimately, linking protein sequences with the resulting structures permitted the development of structure–activity models, which allowed scientists to determine the nature of ligand binding sites. These developments proved critical to functional analysis in basic physiological research and to drug discovery through specific targeting.

On to the Sixties
© 2000 American Chemical Society