The Ongoing Biomedical Revolution Created by Rethinking How to Learn
Future generations will remember our present era for its revolution in biomedical discovery and practice. A near doubling in life expectancy and cures for previously untreatable diseases reflect this seismic shift. Embracing the scientific method in research and practice fueled this change. Well known but often overlooked, this method underpins modern bioscience by providing a ratcheting, directional engine that advances knowledge and clinical care. Medicine has always been scientific, but it is also a practical art. Historically, its teaching relied on experts passing knowledge to students in the master-apprentice model. The modern emergence of evidence-based teaching, rooted in the scientific method and realized through clinical research, has led to countless new discoveries and improved outcomes. Central to this revolution is a willingness to test and challenge doctrine, including appraising how we collect, analyze, and validate scientific evidence itself.
When our descendants look back at our current era, they will reflect that it represented a seismic shift in biomedical practice. Since the beginning of the twentieth century, life expectancy in the United States has nearly doubled, rising from fifty to eighty years.1 Indeed, many of us have friends and family who thrive well into their nineties. So myriad and diverse are medical advances that it would be impossible to designate one as exceptional. Yet this seismic shift did not result from singular events or discoveries. Instead, it arose from adopting a new approach to acquiring biomedical knowledge, one that is rooted in evidence instead of legacy, and elaborated by following the scientific method instead of doctrine. Each success derived from a continually evolving chain that began with observations, which led to questions, explanations, mechanistic understandings, refinements, and ultimately robust applications. The scientific method provides an intellectual ratcheting system that backstops each new informational link and ensures that scientific knowledge moves ineluctably in a forward direction, at least when it is practiced correctly, which does not always happen (more on that later). All of the modern biomedical advances we enjoy have relied on its guiding principles. This change in approach was not immediately apparent to the historical practitioners of medicine, but by the late twentieth century, the father of evidence-based medicine, David Sackett, captured the need when he told medical students, “Half of what you’ll learn in medical school will be shown to be either dead wrong or out of date within five years of your graduation; the trouble is that nobody can tell you which half.”2 The key to the future lay in constantly challenging the status quo.
We can consider a few examples. The impact of antibiotics on human health cannot be overstated. Nearly one-third of all deaths in the early part of the twentieth century resulted from infection. Alexander Fleming’s observation in 1928 that staphylococcus bacteria did not grow around Penicillium, a contaminating mold on his petri dishes, led him to conclude that the mold produced a substance, which he dubbed penicillin, that prevented bacterial growth.3 The story paused there for ten years until a team led by Howard Florey and including Ernst Chain and Norman Heatley developed methods to cultivate the mold, extract and purify the penicillin, and evaluate its benefit in both animals and humans in clinical trials and field tests.4 Further development of production methods and clinical delivery for the antibiotic revolutionized the treatment of infections during World War II, saving thousands of lives. Fleming, Florey, and Chain shared the 1945 Nobel Prize in Physiology or Medicine for this discovery, though it is fair to say that many others contributed to the ultimate success.
More than this individual drug, this work led to a paradigm of scientific discovery that focused on finding bioactive compounds in nature that can be extracted, purified, and eventually delivered to patients. Other antibiotics like tetracyclines, erythromycin, azithromycin, streptomycin, and gentamicin, to name only a handful, got their start by following that paradigm. And this model was not limited to antibiotics. For example, drugs like digoxin, which was originally found in the foxglove plant and is still used to treat arrhythmias, and aspirin, identified thousands of years ago as a pain reliever in the bark of willow trees and later characterized and purified as acetylsalicylic acid, also trace their origins to natural sources. In my own field of cancer care, many chemotherapy agents used today, including vincristine, paclitaxel (Taxol), etoposide, camptothecins, doxorubicin, and bleomycin, were all found this same way. A single world-changing observation can enable a powerful advance. But when it leads to a forward-driving discovery engine and a development paradigm, the results reverberate planetwide.
As discoveries go, new tools provide the most leverage. They install doors in walls that had previously blocked access to new knowledge, enabling new exploration and discovery. And when new tools lead to the discovery of still newer tools, the resulting virtuous cycle catalyzes a chain reaction. Our era hosted the development of molecular biology—a discipline that not only spawned other new disciplines of science but also enabled and amplified the use of antibiotics, vaccination, biological therapies, and other drugs in ways that forever altered our ability to study diseases, their diagnosis, and their treatment.
Throughout the middle of the twentieth century, biologists and biochemists deepened our knowledge of life’s cardinal processes (including heredity, cell division, growth, the production, transfer, and use of energy, and the maintenance of homeostasis). Life’s workhorse for these processes is a class of molecules called proteins. Evidence revealed that proteins (which, in my chauvinistic view as a protein scientist, are life’s most interesting molecules) provide the verbs of biology. They digest, stabilize, replicate, synthesize, modify, cleave, ligate, transfer, regulate, translocate, pull, carry, receive, bind, transport, recognize, transcribe, and polymerize, among countless other actions. By combining genetic principles with biochemistry, cell biology, and an understanding of the structure of DNA, it emerged that most genes represent instructions for how to produce the many proteins needed for life. In oversimplified terms, a gene dictates the amino acids that, when linked in the prescribed order, result in a protein with its unique features and abilities. Researchers found that one class of proteins in particular, called enzymes, acts as molecular machines that carry out specific actions in cells, often with extraordinary precision and efficiency. Seeing their potential, pioneers sought to exploit these exquisite machines for new applications.
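For readers who want to see the gene-to-protein logic concretely, here is a minimal sketch in Python. It is my illustration, not drawn from the work described here: it translates a short, invented DNA sequence into a chain of amino acids using a small excerpt of the standard genetic code.

```python
# Minimal illustration of how a gene's DNA sequence dictates a protein's
# amino-acid sequence. The codon table below is a small excerpt of the
# standard genetic code; the input sequence is invented for illustration.

CODON_TABLE = {
    "ATG": "Met",   # methionine, the usual start signal
    "GGC": "Gly",   # glycine
    "TTT": "Phe",   # phenylalanine
    "AAA": "Lys",   # lysine
    "GAA": "Glu",   # glutamate
    "TGA": "STOP",  # stop signal: end of the protein
}

def translate(dna: str) -> list[str]:
    """Read the DNA three letters (one codon) at a time and look up
    the amino acid each codon specifies, stopping at a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE[dna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

if __name__ == "__main__":
    gene = "ATGGGCTTTAAAGAATGA"   # hypothetical 18-letter gene
    print(translate(gene))        # ['Met', 'Gly', 'Phe', 'Lys', 'Glu']
```

The real cellular machinery involves transcribing DNA into messenger RNA and a ribosome reading that message, but the essential mapping, from the order of codons to the order of amino acids, is what the lookup captures.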
In 1972, biochemist Paul Berg and his postdoctoral fellow David Jackson published the first human-designed linkage of two pieces of DNA.5 Berg was interested in viruses, which are biological entities that reside somewhere between molecular structures and life. Viruses are packages of nucleic acids, proteins, and lipids that invade the cells of their hosts and issue instructions to these cells to start making copies of the viruses, at the cost of the cells’ own metabolic obligations. Viruses are ubiquitous throughout biology and have evolved to infect all forms of life, from bacteria to plants and animals. Berg wondered if he could link fragments of DNA from two viruses that were unrelated, from distant parts of the evolutionary spectrum. One was the SV40 virus, which lives in the primate world, deep among the vertebrate animal species. The other was from a virus that infects bacteria, called Lambda bacteriophage (or λ). Linking the two today would require only a few days, spent mostly waiting for reactions to occur, and need only a few hours of actual hands-on activity. But like all firsts, in its day, this process required a tour de force of biochemical work; everything is easier once someone shows you how to do it.
The DNA from both SV40 and λ had to be obtained and purified. However, in their isolated forms, both viral genomes were closed circular loops of DNA, so in order to link them together, they first had to be cut open, or linearized. Jackson and Berg accomplished this using a defense enzyme that bacteria use to recognize and shred foreign viral DNA. Fortunately, each virus had only one recognition sequence for the defense enzyme (called EcoR1 restriction endonuclease), so each circle opened into a single linear piece of DNA (much like opening a bicycle chain at a single link). They used a different enzyme (called E. coli DNA ligase), also isolated from bacteria, to relink the DNA at the ends and restore the circle. But before linking them, they had to ensure that SV40 would attach to λ rather than the viruses linking back to themselves. To achieve this, they exploited the base-pairing property of DNA: one strand of the DNA double helix matches the other strand by pairing complementary bases. Adenine pairs with thymine, and guanine pairs with cytosine. Jackson and Berg thus added a string of adenines to one strand of one virus and a string of thymines to one strand of the other virus. These two unpaired strings would naturally seek each other chemically, driving the two viruses toward each other instead of toward themselves. The complete procedure was complex, with several additional steps required, but in the end it succeeded in producing one large circle of DNA that included one mammalian virus linked to one bacterial virus.
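To make the logic of the experiment concrete, here is a toy sketch in Python, my own illustration with invented sequences rather than anything from the original paper: it opens a “circular” DNA at its single EcoR1 recognition site (GAATTC), appends a poly-A tail to one molecule and a poly-T tail to the other, and checks that the tails are complementary and can therefore find each other.

```python
# Toy model of the Jackson-Berg logic: cut two circular DNAs at their single
# EcoR1 site, give one a poly-A tail and the other a poly-T tail, and check
# that the tails can base-pair. Sequences are invented for illustration.

ECOR1_SITE = "GAATTC"
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def linearize(circular_dna: str) -> str:
    """Open a circular sequence at its (single) EcoR1 site,
    like opening a bicycle chain at one link."""
    cut = circular_dna.index(ECOR1_SITE)            # the one recognition site
    return circular_dna[cut:] + circular_dna[:cut]  # rotate so the cut sits at the ends

def add_tail(linear_dna: str, base: str, length: int = 6) -> str:
    """Append a homopolymer tail (all A's or all T's) to one end."""
    return linear_dna + base * length

def tails_complementary(tail_a: str, tail_b: str) -> bool:
    """True if the two tails can base-pair (A with T, G with C);
    the second tail is reversed because paired strands run antiparallel."""
    return all(PAIRS[x] == y for x, y in zip(tail_a, reversed(tail_b)))

# Invented stand-ins for the two viral genomes.
sv40_like = "TTCCGGAATTCAGGCC"
lambda_like = "AACCTTGAATTCGGTT"

sv40_tailed = add_tail(linearize(sv40_like), "A")
lambda_tailed = add_tail(linearize(lambda_like), "T")

print(tails_complementary(sv40_tailed[-6:], lambda_tailed[-6:]))  # True
```

The real procedure relied on additional enzymes (including one that adds the tails) and careful purification at every step; the sketch captures only the base-pairing idea that drove the two molecules together.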
More than the physical accomplishment of isolating, purifying, cutting, modifying, binding, and finally linking two unrelated DNA molecules was the profound intellectual leap: scientists had the power to recombine DNA from different sources (indeed, Berg referred to the resulting DNA as recombinant DNA). DNA not only carries the instructions to produce the components necessary for life, but in many cases it can carry instructions that lead to disease (either aberrant instructions, as in genetically linked disease, or instructions used by pathogens). The ability to mix and match genetic material meant a faster path to understanding disease, and the ability to affect both health and disease in new ways. In 1980, Berg was awarded the Nobel Prize in Chemistry for his groundbreaking work in this field.
The next crucial step toward molecular biology started in late 1972, when Herb Boyer, a young biochemistry professor at the University of California, San Francisco, traveled to a microbiology conference in Hawaii. Boyer studied the EcoR1 enzyme, which Escherichia coli bacteria use to protect themselves from viruses by cutting DNA at a specific sequence, GAATTC. In fact, Boyer had provided the enzyme that Berg used in his recombinant DNA experiments. At the Hawaii meeting, Boyer met Stanley Cohen, who, like Berg, was a Stanford professor.6 Cohen studied plasmids: circular pieces of DNA that propagate themselves in bacteria like miniature chromosomes. Cohen knew how to extract plasmids from bacteria, and he also knew how to put them back into living cells. By this time, antibiotics had been in use for decades, and it was already apparent that bacteria had evolved genes that imparted resistance to antibiotics such as tetracycline or penicillin, which they could laterally transfer to other bacteria using plasmids.
Over late-night pastrami sandwiches, Cohen and Boyer recognized that they could combine their complementary expertise, together with the discoveries made in the Berg lab, to shuttle a piece of DNA bearing antibiotic resistance into a plasmid that lacked that capability. They could then transfer this resistance gene–bearing recombinant plasmid into bacteria that were otherwise sensitive to the antibiotic, giving them resistance. As added proof, they could show that only “recombinant” plasmids conferred antibiotic resistance to the bacteria, not “empty” ones. It was already known that bacteria transferred antibiotic resistance via plasmids, so the experimentalists would not be creating something that did not already exist, but this time it would be human-made. Fortunately, the DNA and genes derived entirely from the microbial kingdom and were thus unlikely to cause trouble in people. Over the course of several months, they transferred pieces of DNA back and forth along the San Francisco Bay by car, exemplifying the modern maxim that scientists often travel thousands of miles to distant places, like Hawaii, in order to establish a collaboration with a next-door neighbor. Soon Cohen and Boyer provided the first demonstration that pieces of DNA could be recombined and moved in and out of living cells.7 And more importantly, once inside the cells, the recombinant DNA functioned to give the cells new properties, making them resistant to an antibiotic.
It would be nearly impossible to overstate the significance of these early discoveries in molecular biology. They opened the floodgates to tools and experiments that exploited the abilities of bacteria and other model biological systems. The field of molecular biology has found application both in studying biology and in intervening in it, affecting all the known kingdoms and countless genera and species.
By providing tools that measure and manipulate the genes and proteins of cells, molecular biology has profoundly deepened our understanding of general biology and how it works. This is true both in broad fundamental strokes, for all species on Earth, and in detail for many species that have been studied intensively, including humans. To understand how biology operates under healthy circumstances, scientists routinely study what happens when things go wrong (for example, genes are mutated, new infections occur, or genes are lost or acquired).
It was the study of genetic traits found in mutants that led to our understanding of heredity. This work began with Gregor Mendel, the monk who tracked heredity in pea plants, and Thomas Morgan, who hunted for heritable mutant traits in the fruit fly.8 We did not have to hunt for mutants in people because, as one of my genetics professors, David Cox, used to teach us, they walk in the doors of our doctors’ offices, announcing themselves and describing their own symptoms. Molecular biology revolutionized this paradigm by providing methods to find the mutated genes, as well as tools to test how these genetic changes cause altered physiological function. These approaches have illuminated our understanding of numerous diseases, identifying new disease causes (for instance, mutations in known genetic disorders like Duchenne muscular dystrophy and cystic fibrosis; mutations in numerous cancer-causing genes like TP53 and RB1), and distinguishing previously unrecognized disease forms (such as different molecular subtypes of breast cancer).9 Moreover, the elucidation of the molecular underpinnings of disease has identified points of intervention with increasing specificity, while at the same time providing a ready-made set of tools for testing whether those interventions were successful. The field of targeted molecular therapies, a separate discipline in its own right, emerged from and because of molecular biology. These drugs act at specific molecular junctures in disease, providing impressive improvements with far fewer side effects than historic medications.
Weaving through all of these successes is the common thread of the scientific method itself: starting with a problem, leading to a hypothesis, a test, an analysis, a conclusion, and a new problem. Most histories of science, like this one, leap from success to success, with little attention to the many intervening failed experiments and approaches. Writers understand that experiments do not always work, but page space and time limit how much can be said. Still, this gives the impression of an inexorable march toward knowledge, rather than the fits, starts, and reversals that characterize the actual process. Scientists spend most of their time failing. We try an experiment and it does not work. We try again, with some slight modifications, and it fails again, though slightly differently. We take note of these different failures and try again with yet another set of tweaks, hoping to determine whether the third round of failure is closer to or farther from success. Success at science is all about failing with intent and grace.
Many experimental failures are obvious. These failures occur in apparent and observable ways, such as when cells fail to grow, or they fail to produce an intended protein, or the yield of a product is far lower than expected. Often, with such failures, the application of systematic troubleshooting leads to a solution. At some point in their journey, all graduate students spend months doing such troubleshooting. But a more sinister problem arises when experiments give the appearance of success, although the experimentalists fail to recognize that the results are misleading or even incorrect. These experiments seem conclusive, and overly enthusiastic experimentalists imaginatively fill in gaps to reach an exciting but erroneous conclusion. Poor execution or designs that ask the wrong question lead us astray.
A historic and notorious example of this was Duncan MacDougall’s experiment, begun in 1901, to measure the weight of the human soul. MacDougall, a reputable Massachusetts physician, studied six terminally ill patients. When they were close to death, they were moved, along with their beds, onto an industrial scale so that a change in weight (the leaving of the soul) could be recorded at the time of death. MacDougall’s article describes the measurements taken on each patient.10 In one case, the patient died within minutes of transfer to the scale, and “the experiment was so hurried, jarring of the scales had not wholly ceased and the apparent weight loss, one and one-half ounces, might have been due to accidental shifting of the sliding weight on that beam,” limiting the usefulness of the result. In four of the cases, there were weight losses near the time of death, but these either continued on to further losses later (that is, losses not tied to the moment of death), returned to the starting weight (one case), or occurred when the moment of death could not be known because there was “a good deal of interference by people opposed to our work”; MacDougall regarded these experiments as of limited value, or at least did not focus on their results when reporting them. That left one patient, in whom “suddenly coincident with death the beam end dropped with an audible stroke hitting against the lower limiting bar,” and he measured a loss of three-fourths of an ounce (about twenty-one grams). This last patient led MacDougall to conclude, in his paper in the April 1907 issue of American Medicine, that the unexplained weight loss would seem to be “soul substance.”
To his credit, MacDougall noted that more experiments would be needed to prove the results. Still, this did not stop him from discussing the results with The New York Times in a much more conclusive tone.11 Those results were then widely disseminated as evidence that the soul weighs about twenty-one grams, eventually leading, almost one hundred years later, to the star-studded Hollywood film 21 Grams.12 In hindsight, we can see countless problems with this study, including its very small sample size, its selective reporting of results that agreed with the hypothesis, and the imprecision in measuring such small changes in weight. Scales at the time required manually sliding weights back and forth across a marked bar to make a measurement, and the differences reported in the study were less than 0.05 percent of the total load on the scale (patient plus bed and, at times, the experimentalist). But a deeper look would ask: was MacDougall even measuring what he thought he was measuring? For example, how did he know the precise moment of death, an issue he himself raised for some of the patients? And how did he know the manner and time in which the soul would leave the body? MacDougall’s entire experiment relied on the notion that the soul leaves exactly and instantly when the patient dies. MacDougall managed to find what he expected to find.
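A quick back-of-the-envelope check shows how demanding the measurement was; the combined patient-and-bed load below is my assumption for illustration, since it is not given in the summary above.

```latex
% Reported loss: three-fourths of an ounce, converted to grams
0.75\ \text{oz} \times 28.35\ \tfrac{\text{g}}{\text{oz}} \approx 21\ \text{g}

% Assuming (for illustration) a combined patient-and-bed load of about 90 kg:
\frac{21\ \text{g}}{90\,000\ \text{g}} \approx 0.023\% \;<\; 0.05\%
```

Detecting a change of a few hundredths of a percent on a manually balanced beam scale, at an uncertain moment of death, left enormous room for error.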
Looking back through twenty-first-century glasses, we readily find fault with MacDougall’s study of the soul substance and its mass; the mistakes and the tenuousness of the premise leap into view. Still, we should remember that many modern experimental guidelines, including our recognition of experimental biases, had not yet been established. And even with those guidelines, we continue to make errors in modern experimentation, though our mistakes are often more subtle and difficult to detect, and no less impactful.
A particular problem arises at the junction of science with medicine. Medicine cannot wait for science to resolve all the issues. Patients need immediate care, so doctors are obliged to make their best judgments with the evidence at hand. Inevitably, erroneous conclusions will sometimes be reached.
Medicine was always scientific; even dating back to the time of Hippocrates, physicians practiced observation, diagnosis, categorization, the gathering of empirical evidence, and rational thinking. Yet medicine has also always been a practical art, influenced by scientific discovery but not wholly based on the scientific method. Well into the twentieth century, the practice of medicine was dominated by the old master-apprentice style of teaching. Senior doctors taught junior doctors, “This is how we do that.” Assumptions were made based on what appeared to be logical thinking. Accepted methods quickly became dogma. Medicine has not abandoned this approach, sometimes called “expert-based medicine,” but fortunately it has supplemented it with the critical new approach of evidence-based medicine.
One of the best examples of this transition from expert-based to evidence-based medicine relates to a previously common surgery: the tonsillectomy. The tonsils are two oval-shaped pads of tissue, one on each side of the back of the mouth and the top of the throat, made up of lymphoid tissue, which contains immune cells, such as lymphocytes, that play crucial roles in detecting and suppressing infections. Since the nineteenth century, surgeons often removed the tonsils when they became inflamed, typically due to infections accompanying colds and fevers. By the beginning of the twentieth century, tonsillectomies were quite common. In Great Britain, where numbers are available, around eighty thousand tonsillectomies were performed each year, a rate that remained stable for decades.13 Notably, tonsillectomies were almost exclusively performed on children. Thus, for more than the first third of the twentieth century, tonsillectomies were the predominant reason to find children in the hospital.14 By the latter third of that century, tonsillectomies would become rare.
What caused such a dramatic change? The answer is complex. Doubts emerged about tonsillectomy early on. Pediatricians in the United States voiced concerns that tonsillectomies were hazardous and were epidemiologically linked to poliomyelitis. But these concerns were quickly met with powerful opposition, largely from the surgeons, who had been taught by other surgeons that this procedure was beneficial and necessary. They argued that when tonsils became infected, patients would swallow the infectious agents and spread them further through the body. Some surgeons argued for social benefit, even advocating prophylactic surgery. So entrenched was the belief in the need for tonsillectomy that in 1936, when a three-year-old boy with “cold and temperature” died within minutes of the beginning of the surgery, the cause of death was listed as “anesthetic misadventure.” No one, not the surgeon, the anesthetist, the coroner, nor the father, even thought to question whether the boy should have had the surgery to begin with. Tonsillectomies had become a medical ritual, common to childhood.15
In the 1940s, heightened awareness of poliomyelitis prompted greater public health attention. Polio is caused by the poliovirus, which is commonly spread by the fecal-oral route, but can also be spread in respiratory droplets. In the years prior to the vaccine, there were frequent summer epidemics. Typically, more than two-thirds of those infected experienced no symptoms. About a quarter might get flu-like symptoms, and some might also have gastrointestinal symptoms. Less than 1 percent of those infected would develop severe neurological symptoms, but when they did, those symptoms could be devastating, including meningitis, permanent paralysis, and even death. The growing emphasis on public health meant more epidemiological studies, which asked questions about long-term outcomes. Growing quantitative evidence suggested that tonsillectomies were precipitating, or at least predisposing to, poliomyelitis, leading pediatricians to warn that the procedure was dangerous.16 Laryngologists countered, claiming it was necessary and safe, and that the problem was related to procedural changes in how the surgery was performed, not the surgery itself. There were repeated efforts to reform how the procedure was done to reduce the risks, but a growing number of quantitative epidemiological studies pointed to a lack of benefit of the surgery.
By the mid-twentieth century, antibiotics were increasingly available, providing an alternative to surgery for treating bacterial tonsillitis (antibiotics do not work for viral infections). And by 1955, Jonas Salk had released a vaccine for polio, which could arguably have allayed concerns about tonsillectomies and poliomyelitis. But by then there were even more questions about the long-term consequences of a surgery whose benefits were not clear. Several randomized clinical trials were designed to do a head-to-head comparison of outcomes with and without tonsillectomy, but there were so many arguments about how to properly execute such trials that none were completed. By the end of the 1970s, researchers widely considered routine tonsillectomy unnecessary. The procedure is still done today, but rarely, and only in very specific circumstances, such as chronic and refractory reinfection of the tonsils.
Concurrent with these events, other physicians began to question common management of different diseases and pushed for clinical data to provide definitive evidence for the best approach. This coalesced into a strategy referred to as evidence-based medicine. David Sackett credited Tom Chalmers’s 1955 paper on a randomized factorial trial of bed rest and diet for hepatitis for changing his way of thinking about medicine.17 By blindly randomizing patients with acute infectious hepatitis into treatment arms and monitoring outcomes, Chalmers dispelled a long-held notion that bed rest was required to avoid jaundice and liver damage. The paper led Sackett to rethink medical “conventional wisdom,” as well as his own management of his patients, and inspired Sackett’s push toward evidence-based medicine. It also helped motivate the use of meta-analysis as a way to evaluate a therapy. Brian Haynes, a clinical epidemiologist and biostatistician, started his journey toward evidence-based medicine in 1969 when he asked a lecturer in medical school what evidence there was for the Freudian theories the lecturer was recounting. The lecturer admitted that he didn’t know of any. Haynes remembered: “I had an intense tingle in my body as I wondered how much of my medical education was based on unproved theories.”18
Physicians and scientists in the evidence-based medicine group advocated for a different approach to medical education and medical decision-making. Historically, medical school had relied on students memorizing “facts,” which, as it turned out, were commonly historically accepted axioms, many of them unproven. The evidence-based medicine movement pushed medical schools to teach future doctors how to critically evaluate medical research data and understand their implications. As reflected in Sackett’s quote at the beginning of this essay, this acknowledged that our understanding of medicine and best practices for patient management would always be in flux, and that a critical skill for any doctor is the ability to evaluate and adapt to new knowledge. Similarly, clinical decisions historically were made using expert-dictated algorithms of the form “if the patient has X, then we do Y.” Doctors made their recommendations to patients as proclamations, without consideration of the patient’s values and without offering alternatives. The new approach, articulated in a series of published articles on “critical appraisal,” argued that physicians should understand the clinical studies well enough to assess the quantitative risks and benefits of a recommendation.19 Furthermore, they needed to adapt this to the patient in front of them, including accounting for other illnesses and health factors that might change the balance. Importantly, this should be presented to the patient, along with the risks and benefits of alternatives, so that the patient could participate in the decision.
These efforts led to a surge in randomized clinical trials that questioned historical dogma and, in many cases, overturned it. Cardiology trials belied the long-held belief that a class of drugs called beta blockers would be dangerous after a myocardial infarction. In fact, patients treated with beta blockers within twenty-four hours of such an event had better outcomes than those who were not.20 By the late 1990s, the long-held tenet of treating low back pain with bed rest also got turned on its head. Patients who returned to full activity early did better than those who remained in bed.21 And even the evidence-based approaches themselves needed scrutiny. A science developed around the design of clinical trials to ensure that they test the questions they are intended to test and that the conclusions they reach are sound. This means avoiding all manner of biases and pitfalls: small sample sizes, improper cohort selection, skewed control selection, inappropriate statistics, surrogate rather than patient-oriented outcomes, uncontrolled variables and artifacts, multiple hypothesis testing, publication bias, limits to clinical equipoise, misinterpretation, and conflicts of interest, among others.
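To see how easily two of these pitfalls, small samples and multiple hypothesis testing, generate false “positive” findings, consider this small Python simulation, which is my illustration and not a reanalysis of any trial mentioned above: twenty toy trials compare two arms drawn from the same distribution, so any “significant” difference is pure noise.

```python
# Simulate 20 small "trials" in which the treatment has NO real effect:
# both arms are drawn from the same distribution. With 10 patients per arm
# and a p < 0.05 threshold, roughly 1 trial in 20 will still look
# "significant" by chance alone, feeding selective reporting and
# publication bias if only the "positive" trial gets written up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n_per_arm = 20, 10

false_positives = 0
for trial in range(n_trials):
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_arm)
    treated = rng.normal(loc=0.0, scale=1.0, size=n_per_arm)  # same distribution
    _, p_value = stats.ttest_ind(control, treated)
    if p_value < 0.05:
        false_positives += 1

print(f"'Significant' results with no true effect: {false_positives} of {n_trials}")
```

With a 0.05 threshold, roughly one in twenty such null comparisons will look significant by chance, which is precisely why trial design demands pre-specified endpoints, adequate sample sizes, and correction for multiple comparisons.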
The evidence-based approach is now pervasive in medicine and forms the core of medical education. Its adoption reflects nothing more than codifying the scientific method as the means to test and advance medicine. Its application enabled the major medical advances of our time, particularly those described here. Each case began with a problem, followed by a hypothesis, a test, an analysis, a conclusion, and a new problem. The cycle is repeated, each step bringing us better understanding and outcomes.
In fact, the scientific method has become so successful, it now spreads far beyond traditional science. Aficionados of social media platforms will recognize that “scientific testing” appears with varying levels of fidelity in all manner of programming. Influencers in cooking, physical fitness, shopping, gardening, home improvement, photography, cocktails, barbecue, mountaineering, and countless others routinely perform experiments to determine or demonstrate the best approach. Even non–science geeks find themselves captivated by “life hacks” and such programming.
Looking back from the future, our era will be recognized as a key turning point in how we approach medical knowledge. Instead of handing down rote doctrine from master to apprentice, we use the scientific method to test our assumptions and determine best practices. We live at an amazing time, with growing life spans, reductions in world hunger, expanding literacy, and cures for previously incurable diseases. Still, if we accept Sackett’s maxim, much of what we believe today will turn out to be outdated or wrong, and it is impossible to know which. So we must rely on this critical methodological engine, and the innovations it has already fostered, to provide the tools with which we continue to test our thinking. Our health relies on it.