1950s
Prescriptions & polio

Introduction
The 1950s began amid a continuing wave of international paranoia. The Cold War intensified in the late 1940s as the West responded to the ideological loss of China to communism and the very real loss of atom bomb exclusivity to the Soviets. The first few years of the 1950s heated up again with the Korean conflict.

On the home front, World War II–related science that was now declassified, together with America’s factories now turned to peace, shaped a postwar economic boom. Driven by an unprecedented baby boom, the era’s mass consumerism focused on housing, appliances, automobiles, and luxury goods. Technologies applied to civilian life included silicone products, microwave ovens, radar, plastics, nylon stockings, long-playing vinyl records, and computing devices. New medicines abounded, driven by new research possibilities and the momentum of the previous decade’s “Antibiotic Era.” A wave of government spending was spawned by two seminal influences: a comprehensive new federal science and technology policy and the anti-“Red” sentiment that dollars spent for science were dollars spent for democracy.

While improved mechanization streamlined production in drug factories, the DNA era dawned. James Watson and Francis Crick determined the structure of the genetic material in 1953. Prescription and nonprescription drugs were legally distinguished from one another for the first time in the United States as the pharmaceutical industry matured. Human cell culture and radioimmunoassays developed as key research technologies; protein sequencing and synthesis burgeoned, promising the development of protein drugs.

In part because of Cold War politics, in part because the world was becoming a smaller place, global health issues took center stage. Fast foods and food additives became commonplace in the West. “The Pill” was developed and first tested in Puerto Rico. Ultrasound was adapted to fetal monitoring. Gas chromatography (GC), mass spectrometry, and polyacrylamide gel electrophoresis began transforming drug research, as did the growth of the National Institutes of Health (NIH) and the National Science Foundation (NSF). The foundations of modern immunology were laid as the pharmaceutical industry moved ever forward in mass-marketing through radio and the still-novel format of television.

But above all, seen through the lens of the time in the Western world, stood the heroic-scientist image of Jonas Salk, savior of children through the conquest of polio via vaccines.

Antibiotics redux
The phenomenal success of antibiotics in the 1940s spurred the continuing pursuit of more and better antibacterial compounds from a variety of sources, especially from natural products in soils and synthetic modifications of compounds discovered earlier. In 1950, the antibiotic nystatin was isolated from Streptomyces noursei obtained from soil in Virginia. In 1952, erythromycin was first isolated from S. erythreus from soil in the Philippines. Other antibiotics included novobiocin (1955), isolated from S. spheroides from Vermont; vancomycin (1956) from S. orientalis from soils in Borneo and Indiana; and kanamycin (1957) from S. kanamyceticus from Japan. The search for antibiotic compounds also led researchers in Great Britain to discover, in 1957, an animal glycoprotein (interferon) with antiviral activity.

Not only did the development of antibiotics that began in the 1940s lead to the control of bacterial infections, it also permitted remarkable breakthroughs in the growth of tissue culture in the 1950s. These breakthroughs enabled the growth of polio and other viruses in animal cell cultures rather than in whole animals, and permitted a host of sophisticated physiological studies that had never before been possible. Scientists had been familiar with the concepts of tissue culture since the first decades of the century, but routine application was still too difficult. After antibiotics were discovered, such research no longer required an “artist in biological technique” to maintain the requisite sterile conditions in the isolation, maintenance, and use of animal cells, according to virus researcher Kingsley F. Sanders. In 1957, he noted that the use of antibiotics in tissue culture made the process so easy that “even an amateur in his kitchen can do it.”

Funding medicine
In the 1950s, general science research, particularly biological research, expanded in the United States to a great extent because of the influence of Vannevar Bush, presidential science advisor during and immediately after World War II. His model, presented in the 1945 report to the President, Science: The Endless Frontier, set the stage for the next 50 years of science funding. Bush articulated a linear model of science in which basic research leads to applied uses. He insisted that science and government should continue the partnership forged in the 1940s with the Manhattan Project (in which he was a key participant) and other war-related research. 
BIOGRAPHY: A computer pioneer
Dorothy Crowfoot Hodgkin (1910–1994) is the only woman Nobel laureate from the United Kingdom. She earned the 1964 Nobel Prize in Chemistry for her solution to the molecular structure of penicillin and vitamin B12. She pointed to a book by Sir William Bragg, Old Trades and New Knowledge (1925), as the beginning of her interest in the use of X-rays to explore many of the questions left unanswered by chemistry—such as the structure of biological substances. After studying at Somerville College (Oxford, U.K.), Hodgkin was introduced to J. D. Bernal, a pioneer X-ray crystallographer, in 1932, and later became his assistant.

Hodgkin’s work was unique not only for its technical brilliance and medical importance, but also for its use, at every step, of computing machines of various degrees of sophistication. Computing machines became essential because crystallographic analysis required summing a series of numbers derived from the positions and intensities of reflected X-rays combined with some idea of their phases. With small molecules, the numbers were not too difficult to manage, but with large molecules, the sums became unmanageable. The structure of vitamin B12 was incredibly complex, and by 1948, chemists had deciphered only half of it. Hodgkin and her group collected data on B12 for six years and took 2500 pictures of its crystals. Using an early computer, she and collaborators at UCLA announced the complete structure in 1956.
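
The summation Hodgkin’s group faced is, in essence, a Fourier synthesis: each measured reflection contributes a cosine wave whose amplitude comes from the spot’s intensity and whose phase must be estimated, and the electron density is the sum of all those waves. The short Python sketch below is a simplified, hypothetical illustration of that arithmetic in one dimension, with invented numbers; it is not Hodgkin’s data or procedure, but it shows why thousands of reflections made early computers indispensable.

import math

# Invented example data: (index h, amplitude |F_h|, phase in radians).
# In a real analysis, these come from measured spot intensities and
# estimated phases for thousands of reflections in three dimensions.
reflections = [
    (0, 10.0, 0.0),
    (1, 6.0, math.pi / 3),
    (2, 3.5, -math.pi / 4),
    (3, 1.2, math.pi),
]

def electron_density(x, refl):
    """One-dimensional Fourier synthesis: rho(x) = sum over h of |F_h| * cos(2*pi*h*x + phi_h)."""
    return sum(amp * math.cos(2 * math.pi * h * x + phase) for h, amp, phase in refl)

# Evaluate the density at ten points along the (hypothetical) unit cell.
for i in range(10):
    x = i / 10
    print(f"x = {x:.1f}   rho = {electron_density(x, reflections):7.2f}")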

As early as 1946, Bush argued for creating a national science funding body. The heating up of the Cold War, as much as anything else, precipitated the 1950 implementation of his idea in the form of the National Science Foundation—a major funnel for government funding of basic research, primarily for the university sector. It was a federal version of the phenomenally successful Rockefeller Foundation. The new foundation had a Division of Biological and Medical Sciences, but its mission was limited to supporting basic research so that it wouldn’t compete with the more clinically oriented research of the NIH.

The NIH rode high throughout the 1950s, with Congress regularly adding $8 million to $15 million to the NIH budget proposed by the first Eisenhower administration. By 1956, the NIH budget had risen to almost $100 million. By the end of the decade, the NIH was supporting some 10,000 research projects at 200 universities and medical schools at a cost of $250 million.

Other areas of government also expanded basic medical research under the Bush vision. In 1950, for example, the Atomic Energy Commission received a $5 million allocation from Congress specifically to relate atomic research to cancer treatment. In this same vein, in 1956, Oak Ridge National Laboratory established a medical instruments group to help promote the development of technology for disease diagnostics and treatment that would lead, in conjunction with advances in radioisotope technology, to a new era of physiologically driven medicine.

Part of this research funding was under the auspices of the Atoms for Peace program and led to the proliferation of human experiments using radioactive isotopes, often in a manner that would horrify a later generation of Americans with its cavalier disregard for participants’ rights.

Science funding, including medical research, received an additional boost with the 1957 launch of the first orbital satellite. The Soviet Sputnik capped the era with a wave of science and technology fervor in industry, government, and even the public schools. The perceived “science gap” between the United States and the Soviet Union led to the 1958 National Defense Education Act. The act continued the momentum of government-led education that started with the GI Bill to provide a new, highly trained, and competent workforce that would transform industry. This focus on the importance of technology fostered increased reliance on mechanized mass-production techniques. During World War II, the pharmaceutical industry had learned its lesson—that bigger was better in manufacturing methods—as it responded to the high demand for penicillin.

Private funds were also increasingly available throughout the 1950s—and not just from philanthropic institutions. The American Cancer Society and the National Foundation for Infantile Paralysis were two of the largest public disease advocacy groups that collected money from the general public and directed significant funds to scientific research. This link between the public and scientific research created, in some small fashion, a sense of investment in curing disease, just as investing in savings bonds and stamps in the previous decade had created a sense of helping to win World War II.

The war on polio
Perhaps the most meaningful medical story to people of the time was that of Jonas Salk and his “conquest of polio.” Salk biographer Richard Carter described the response to the April 12, 1955, vaccine announcement: “More than a scientific achievement, the vaccine was a folk victory, an occasion for pride and jubilation…. People observed moments of silence, rang bells, honked horns, … closed their schools or convoked fervid assemblies therein, drank toasts, hugged children, attended church.” The public felt a strong tie to Salk’s research in part because he was funded by the National Foundation for Infantile Paralysis. Since 1938, the organization’s March of Dimes had collected small change from the general public to fund polio research. The group managed to raise more money than was collected for heart disease or even cancer.

The Salk vaccine relied on the new technology of growing viruses in cell cultures, specifically in monkey kidney cells (first available in 1949). Later, the human HeLa cell line was used as well. The techniques were developed by John F. Enders (Harvard Medical School), Thomas H. Weller (Children’s Medical Center, Boston), and Frederick C. Robbins (Case Western Reserve University, Cleveland), who received the 1954 Nobel Prize in Physiology or Medicine for their achievement.

Salk began preliminary testing of his polio vaccine in 1952, with a massive field trial in the United States in 1954. According to Richard Carter, a May 1954 Gallup Poll found that “more Americans were aware of the polio field trial than knew the full name of the President of the United States.” Salk’s vaccine was a killed-virus vaccine that was capable of causing the disease only when mismanaged in production. This unfortunately happened within two weeks of the vaccine’s release. The CDC Poliomyelitis Surveillance Unit was immediately established; the popularly known “disease detectives” tracked down the problem almost immediately, and the “guilty” vaccine lot was withdrawn. It turned out that Cutter Laboratories had released a batch of vaccine with live contaminants that tragically resulted in at least 260 cases of vaccine-induced polio.
SOCIETY: Rx for prescriptions
The 1951 Durham–Humphrey Amendment to the U.S. Federal Food, Drug, and Cosmetic Act of 1938 fully differentiated two categories of drugs: prescription and nonprescription, referred to as “legend” and “OTC” (over-the-counter), respectively. The great expansion of the prescription-only category shifted the doctor–patient relationship more toward the writing and filling of prescriptions. Not unexpectedly, this also dramatically changed the relationship between doctors and drug companies in the United States throughout the 1950s. Prescription drug advertising had to be directed at doctors rather than patients, which spurred pharmaceutical companies to aggressively lobby doctors to prescribe their drugs.

In 1955, the AMA discontinued its drug-testing program, claiming in part that it had become an onerous burden. Critics claimed it was because the organization wanted to increase advertising revenues by removing conflict-of-interest restrictions preventing advertising in AMA journals. In 1959, the AMA demonstrated its support of the drug industry by opposing the Kefauver–Harris Bill, which was intended to regulate drug development and marketing. The bill languished until 1962, when the thalidomide tragedy and powerful public pressures led to its passage.

Ironically, these safety problems helped promote an alternative vaccine that used live rather than killed virus. In 1957, the attenuated oral polio vaccine was finally developed by Albert Sabin and became the basis for mass vaccinations in the 1960s. The live vaccine can cause active polio in a small percentage of those inoculated, primarily people with compromised immune systems, and it also poses a danger to immunocompromised individuals who have early contact with the feces of vaccinated individuals. In its favor, it provides longer lasting immunity and protection against gastrointestinal reinfection, eliminating the reservoir of polio in the population.

The debate that raged in the 1950s over Salk versus Sabin (fueled at the time by a history of scientific disputes between the two men) continues today: Some countries primarily use the injected vaccine, others use the oral, and still others use one or the other, depending on the individual patient.

Instruments and assays
The 1950s saw a wave of new instrumentation, some of it not originally intended for medical purposes but eventually applied in the medical field. In 1951, image-analyzing microscopy was introduced. By 1952, thin sectioning and fixation methods were being perfected for electron microscopy of intracellular structures, especially mitochondria. In 1953, the first successful open-heart surgery was performed using the heart–lung machine developed by John H. Gibbon Jr. in Philadelphia.

Of particular value to medical microbiology, and ultimately to the development of biotechnology, was the production of an automated bacterial colony counter. Research of this type was first commissioned by the U.S. Army Chemical Corps; the Office of Naval Research and the NIH then provided a significant grant for the development of the Coulter counter, commercially introduced as the Model A.

A.J.P. Martin in Britain developed gas–liquid partition chromatography in 1952. The first commercial devices became available three years later, providing a powerful new technology for chemical analysis. In 1954, Texas Instruments introduced silicon transistors—a technology encompassing everything from transistorized analytical instruments to improved computers and, for the mass market, miniaturized radios.

The principle for electromagnetic microbalances was developed near the middle of the decade, and a prototype CT scanner was unveiled. In 1958, amniocentesis was developed, and Scottish physician Ian Donald pioneered the use of ultrasound for diagnostics and therapeutics. Radiometer micro pH electrodes were developed by Danish chemists for bedside blood analysis.

In a further improvement in computing technology, Jack Kilby at Texas Instruments developed the integrated circuit in 1958.

In 1959, the critical technique of polyacrylamide gel electrophoresis (PAGE) was in place, making much of the coming biotechnological analysis of nucleic acids and proteins feasible.

Strides in the use of atomic energy continued apace with heavy government funding. In 1951, Brookhaven National Laboratory opened its first hospital devoted to nuclear medicine, followed seven years later by a Medical Research Center dedicated to the quest for new technologies and instruments. By 1959, the Brookhaven Medical Research Reactor was inaugurated, making medical isotopes significantly cheaper and more available for a variety of research and therapeutic purposes.

In one of the most significant breakthroughs in using isotopes for research purposes, in 1952, Rosalyn Sussman Yalow, working at the Veterans Hospital in the Bronx in association with Solomon A. Berson, developed the radioimmunoassay (RIA) for detecting and following antibodies and other proteins and hormones in the body.

Physiology explodes
The development of new instruments, radioisotopes, and assay techniques found rapid application in the realm of medicine as research into general physiology and therapeutics prospered. In 1950, for example, Konrad Emil Bloch at Harvard University used carbon-13 and carbon-14 as tracers in cholesterol buildup in the body. Also in 1950, Albert Claude of the Université Catholique de Louvain in Belgium discovered the endoplasmic reticulum using electron microscopy. That same year, influenza type C was discovered.

New compounds and structures were identified in the human body throughout the decade. In 1950, GABA (gamma-aminobutyric acid) was identified in the brain. Soon after that, Italian biologist Rita Levi-Montalcini demonstrated the existence of nerve growth factor. In Germany, F.F.K. Lynen isolated the critical enzyme cofactor acetyl-CoA in 1955. Human growth hormone was isolated for the first time in 1956. That same year, William C. Boyd of the Boston University Medical School identified 13 “races” of humans based on blood groups.

Breakthroughs were made that ultimately found their way into the development of biotechnology. By 1952, Robert Briggs and Thomas King, developmental biologists at the Institute for Cancer Research in Philadelphia, successfully transplanted frog nuclei from one egg to another—the ultimate forerunner of modern cloning techniques. Of tremendous significance to the concepts of gene therapy and specific drug targeting, sickle cell anemia was shown to be caused by one amino acid difference between normal and sickle hemoglobin (1956–1958). Although from today’s perspective it seems to have occurred surprisingly late, in 1956 the human chromosome number was finally revised from the 1898 estimate of 24 pairs to the correct 23 pairs. By 1959, examination of chromosome abnormalities in shape and number had become an important diagnostic technique. That year, it was determined that Down’s syndrome patients had 47 chromosomes instead of 46.

As a forerunner of the rapid development of immunological sciences, in 1959 Australian virologist Frank Macfarlane Burnet proposed his clonal selection theory of antibody production, which stated that antibodies were selected and amplified from preexisting rather than instructionally designed templates.

A rash of new drugs
New knowledge (and, as always, occasional serendipity) led to new drugs. The decade began with the Mayo Clinic’s introduction of cortisone in 1950—a tremendous boon to the treatment of arthritis. But more importantly, it saw the first effective remedy for tuberculosis. In 1950, British physician Austin Bradford Hill demonstrated that a combination of streptomycin and para-aminosalicylic acid (PAS) could cure the disease, although the toxicity of streptomycin was still a problem. By 1951, an even more potent antituberculosis drug was developed simultaneously and independently by the Squibb Co. and Hoffmann-La Roche. Purportedly after the death of more than 50,000 mice (part of a new rapid screening method developed by Squibb to replace the proverbial guinea pigs as test animals) and the examination of more than 5000 compounds, isonicotinic acid hydrazide proved able to protect against a lethal dose of tubercle bacteria. It was ultimately marketed as isoniazid and proved especially effective in mixed dosage with streptomycin or PAS.

In 1951, monoamine oxidase (MAO) inhibitors were introduced to treat psychosis. In 1952, reserpine was isolated from rauwolfia and eventually was used for treating essential hypertension; in 1953, the rauwolfia alkaloid became the first of the tranquilizer drugs. The source plant came from India, where it had long been used as a folk medicine. The thiazide drugs were also developed in this period as diuretics for treating high blood pressure. In 1954, the highly touted chlorpromazine (Thorazine) was approved as an antipsychotic in the United States. It had started as an allergy drug developed by the French chemical firm Rhône-Poulenc and was noticed to “slow down” bodily processes. In 1956, halothane was introduced as a general anesthetic.

Also in 1954, the FDA approved BHA (butylated hydroxyanisole) as a food preservative; coincidentally, McDonald’s was franchised that same year. It soon became the largest “fast food” chain. Although not really a new “drug” (despite numerous fast food “addicts”), the arrival and popularity of national fast food chains (and ready-made meals such as the new TV dinners in the supermarket) were the beginning of a massive change in public nutrition and thus, public health.

Perhaps the most dramatic change in the popularization of drugs came with the 1955 marketing of meprobamate (first developed by Czech-born scientist Frank A. Berger) as Miltown (by Wallace Laboratories) and Equanil (by Wyeth). This was the first of the mass-market “minor tranquilizers,” or anti-anxiety compounds, that set the stage for the 1960s “drug era.” The drug was so popular that it became iconic in American life. (The most popular TV comedian of the time once referred to himself as “Miltown” Berle.) Unfortunately, meprobamate also proved addictive.

In 1957, British researcher Alick Isaacs and Jean Lindenmann of the National Institute for Medical Research, Mill Hill, London, discovered interferon—a naturally occurring antiviral protein—although not until the 1970s (with the advent of gene-cloning technology) would it become routinely available for drug use. In 1958, a saccharin-based artificial sweetener was introduced to the American public. That year also marked the beginning of the thalidomide tragedy (in which the use of a new tranquilizer in pregnant women caused severe birth defects), although it would not become apparent until the 1960s. In 1959, Haldol (haloperidol) was first synthesized for treating psychotic disorders.

Blood products also became important therapeutics in this decade, in large part because of the 1950 development of methods for fractionating blood plasma by Edwin J. Cohn and colleagues. This allowed the production of numerous blood-based drugs, including factor X (1956), a protein common to both the intrinsic and extrinsic pathways of blood clotting, and factor VIII (1957), a blood-clotting protein used for treating hemophilia.

The birth of birth control
Perhaps no contribution of chemistry in the second half of the 20th century had a greater impact on social customs than the development of oral contraceptives. Several people were important in its development—among them Margaret Sanger, Katharine McCormick, Russell Marker, Gregory Pincus, and Carl Djerassi.

Sanger was a trained nurse who was a supporter of radical, left-wing causes. McCormick was the daughter-in-law of Cyrus McCormick, founder of International Harvester, whose fortune she inherited when her husband died. Both were determined advocates of birth control as a means of addressing the world’s overpopulation. Pincus (who founded the Worcester Foundation for Experimental Biology) was a physiologist whose research interests focused on the sexual physiology of rabbits. He managed to fertilize rabbit eggs in a test tube and got the resulting embryos to grow for a short time. The feat earned him considerable notoriety, and he continued to gain a reputation for his work in mammalian reproductive biology.

Sanger and McCormick approached Pincus and asked him to produce a physiological contraceptive. He agreed to the challenge, and McCormick agreed to fund the project. Pincus was certain that the key was the use of a female sex hormone such as progesterone. It was known that progesterone prevented ovulation and thus was a pregnancy-preventing hormone. The problem was finding suitable, inexpensive sources of the scarce compound to do the necessary research. Enter American chemist Russell Marker. Marker’s research centered on converting sapogenin steroids found in plants into progesterone. His source for the sapogenins was a yam grown in Mexico. Marker and colleagues formed a company (Syntex) to produce progesterone, but he left the company in 1945 over financial disputes and destroyed his notes and records. However, a young scientist hired by Syntex in 1949 ultimately figured prominently in further development of “the Pill.”
TECHNOLOGY: HeLa cells
Henrietta Lacks moved from Virginia to Baltimore in 1943. She died eight years later, but her cells continued to live, sparking a controversy that is still alive and well today. In 1951, Lacks was diagnosed with a malignant cervical tumor at Johns Hopkins School of Medicine. Before radium treatment was applied, a young resident took a sample of the tumor and passed it along to George Gey, head of tissue culture research at Hopkins. His laboratory had been searching for the tool needed to study cancer: a line of human cells that could live outside the human body. Growing these cells would allow possible cures to be tested before using them in humans. They found this tool with Lacks’ cells, which grew at an amazing pace. Gey called them HeLa cells. These cells were instrumental in growing the poliovirus and helped create the polio vaccine. Gey shared these cells with his colleagues worldwide, and they became a staple of in vitro human physiological research.

However, Lacks’ family was completely unaware of this use of her cells until 1974, when one of her daughters-in-law learned by chance that a family friend had been working with cells from a Henrietta Lacks. She contacted Johns Hopkins to see what was in Henrietta’s files. However, Hopkins never got back in touch with the family.

The case raised two issues: consent, and whether a person’s family is due anything, morally or legally, when what comes from the cells has commercial value. Although consent is required now, what happened at Johns Hopkins in the 1950s was not at all uncommon. The other issue is still unresolved. One could say that the field of biomedical ethics was born the day Henrietta Lacks walked through the doors of Johns Hopkins.

The new hire, Djerassi, first worked on the synthesis of cortisone from diosgenin. He later turned his attention to synthesizing an “improved” progesterone, one that could be taken orally. In 1951, his group developed a progesterone-like compound called norethindrone.

Pincus had been experimenting with the use of progesterone in rabbits to prevent fertility. In 1952, he ran into an old acquaintance, John Rock, a gynecologist who had been using progesterone to enhance fertility in patients who were unable to conceive. Rock theorized that if ovulation were turned off for a short time, the reproductive system would rebound. Rock had essentially proved in humans that progesterone did prevent ovulation. Once Pincus and Rock learned of norethindrone, the stage was set for wider clinical trials that eventually led to its FDA approval as an oral contraceptive in 1960. However, many groups opposed this approval on moral, ethical, legal, and religious grounds. Despite such opposition, the Pill was widely used and came to have a profound impact on society.

DNA et al.
Nucleic acid chemistry and biology were especially fruitful in the 1950s as not only the structure of DNA but also the steps in its replication, transcription, and translation were revealed. In 1952, Rosalind E. Franklin at King’s College in England began producing the X-ray diffraction images that were ultimately used by James Watson and Francis Crick in their elucidation of the structure of DNA published in Nature in 1953.

Two years later, Severo Ochoa at New York University School of Medicine discovered polynucleotide phosphorylase, an RNA-degrading enzyme that could also be used to synthesize RNA in the test tube. In 1956, electron microscopy was used to determine that the cellular structures called microsomes contained RNA (they were thus renamed ribosomes). That same year, Arthur Kornberg at Washington University Medical School (St. Louis, MO) discovered DNA polymerase. Soon after that, DNA replication as a semiconservative process was worked out separately by autoradiography in 1957 and then by using density centrifugation in 1958.

With the discovery of transfer RNA (tRNA) in 1957 by Mahlon Bush Hoagland at Harvard Medical School, all of the pieces were in place for Francis Crick to postulate in 1958 the “central dogma” of molecular biology—that genetic information is maintained and transferred in a one-way process, moving from nucleic acids to proteins. The path was set for the elucidation of the genetic code the following decade. On a related note, bacterial transduction—discovered by Norton Zinder and Joshua Lederberg at the University of Wisconsin in 1952—proved a critical step toward future genetic engineering.

Probing proteins
Behind the hoopla surrounding the discovery of the structure of DNA, the blossoming of protein chemistry in the 1950s is often ignored. Fundamental breakthroughs occurred in the analysis of protein structure and the elucidation of protein functions.

In the field of nutrition, in 1950 the protein-building role of the essential amino acids was demonstrated. Linus Pauling, at the California Institute of Technology, proposed that protein structures are based on a primary alpha-helix (a structure that served as inspiration for helical models of DNA). Frederick Sanger at the Medical Research Council (MRC) Unit for Molecular Biology at Cambridge and Pehr Victor Edman developed methods for identifying N-terminal peptide residues, an important breakthrough in improved protein sequencing. In 1952, Sanger used paper chromatography to sequence the amino acids in insulin. In 1953, Max Perutz and John Kendrew, cofounders of the MRC Unit for Molecular Biology, showed that X-ray diffraction with heavy-atom (isomorphous replacement) methods could be used to solve the structures of proteins such as hemoglobin.
Suggested reading
  • Breakthrough: The Saga of Jonas Salk, Carter, R. (Trident Press: New York, 1966) 
  • Dates in Medicine: A Chronological Record of Medical Progress Over Three Millennia, Sebastian, A., Ed. (Parthenon: New York, 2000) 
  • The Greatest Benefit to Mankind: A Medical History of Humanity, Porter, R. (W. W. Norton: New York, 1997) 
  • Readings in American Health Care: Current Issues in Socio-Historical Perspective, Rothstein, W. G., Ed. (University of Wisconsin Press: Madison, 1995) 
  • The Social Transformation of American Medicine, Starr, P. (Basic Books: New York, 1982) 
  • Two Centuries of American Medicine: 1776–1976, Bordley, J. III, and Harvey, A.M. (W. B. Saunders: Philadelphia, 1976) 

In 1954, Vincent du Vigneaud at Cornell University synthesized the hormone oxytocin—the first naturally occurring protein made with the exact makeup it has in the body. The same year, ribosomes were identified as the site of protein synthesis. In 1956, a protein’s three-dimensional structure was linked to the sequence of its amino acids, so that by 1957, John Kendrew was able to solve the first three-dimensional structure of a protein (myoglobin); this was followed in 1959 with Max Perutz’s determination of the three-dimensional structure of hemoglobin. Ultimately, linking protein sequences with subsequent structures permitted the development of structure–activity models, which allowed scientists to determine the nature of ligand binding sites. These developments proved critical to functional analysis in basic physiological research and to drug discovery through specific targeting.

On to the Sixties
By the end of the 1950s, all of pharmaceutical science had been transformed by a concatenation of new instruments and new technologies—from GCs to X-ray diffraction, from computers to tissue culture—coupled, perhaps most importantly, to a new understanding of the way things (meaning cells, meaning bodies) worked. The understanding of DNA’s structure and function—how proteins are designed and how they can cause disease—provided windows of opportunity for drug development that had never before been possible. It was a paradigm shift toward physiology-based medicine, born with the hormone and vitamin work in the 1920s and 1930s, catalyzed by the excitement of the antibiotic era of the 1940s, that continued throughout the rest of the century with the full-blown development of biotechnology-based medicine. The decade that began by randomly searching for antibiotics in dirt ended with all the tools in place to begin searching for drugs with a knowledge of where they should fit in the chemical world of cells, proteins, and DNA.


© 2000 American Chemical Society