The Purpose and Conduct of Science
Rede Lecture (As Delivered)
Senate House, University of Cambridge
March 9, 2011
I approach this lectern with both gratitude and trepidation: the Rede Lecture was the first named lecture I ever heard mentioned with reverence.
The time was 1960, and I was an undergraduate studying English literature at Amherst College in western Massachusetts. One evening the College organized a panel discussion about what is arguably the most famous Rede lecture: The Two Cultures, given by C.P. Snow the year before. The lecture had been published on both sides of the Atlantic and discussed widely, especially by those (like my mentors in the Amherst English department) who were understandably upset that their culture was judged by Lord Snow to be inferior, of less benefit to the world, and soon to be eclipsed by the other, the sciences---a culture with which literary people could allegedly not even converse.
Snow's description of scholars who can't easily talk to each other has come to seem both self-evident and exaggerated over the years. But the notion of the Two Cultures has shown a remarkable staying power. His lecture continues to be reissued in print and widely discussed, even in other Rede lectures, now (of course) including this one. His theme has retained some resonance for me because, in the years following the Amherst panel discussion, I migrated from graduate studies in literature to medicine and then to basic biological sciences. So I dine out on a Two Cultures reputation.
This afternoon, I want to look at what is happening within one of Snow's Two Cultures, the sciences---especially the branch of the sciences I know best, the life sciences.
As Snow himself noted, science is thought to have at least two cultures of its own:
--what he called "pure" or "basic" science, the fundamental, non-commercial enterprise that deals largely in ideas, mechanisms, observations, and discoveries, mainly in academic settings;
--and "applied" science, the practical and often profitable component, largely situated in the private sector, that invents and tests things that are useful, based on scientific principles.
In good times, these two faces of science are in a synergistic equilibrium, balanced by opportunity. That equilibrium forms the basis for the large-scale investments that the United States has made in science since World War II. In 1945, Vannevar Bush, the patron saint of our Federal science agencies and advisor to Franklin Roosevelt, wrote in his seminal work, Science, the Endless Frontier, that basic research is "the pacemaker of technological progress. … New products and new processes do not appear full-grown. They are founded on new principles and new conceptions, which in turn are painstakingly developed by research in the purest realms of science."
In other words, basic scientists create opportunity when they make discoveries, often serendipitous, about the natural world; these discoveries fuel the development and testing of inventions by applied scientists. At the same time, applied science often provides economic and social justification for investments in basic science. Salvador Luria, the geneticist and Nobel Laureate, had a neat metaphor for this: "Under the pressure of utilitarian society, the cathedral of science has come to look like one of those monasteries one sees in the French countryside, in which a modest church is almost hidden by a prosperous distillery. The sale of products becomes the justification for being allowed to pray to the Lord."
Regardless of their place in this broad range of activities, scientists are driven by questions: How does the natural world work? How do we harness what we know to improve the way we live?
All of science also intersects with technology---the methods by which we measure natural phenomena, fabricate new things, and then test them. Technology is inherently different from science: it doesn't answer questions; it generates the tools with which answers are obtained. But technology depends on scientific principles, and the progress of science depends deeply on the development of new and more powerful tools for measurement. Microscopes, telescopes, and machines for rapid sequencing of DNA are only a few of the triumphant technologies that make sophisticated science possible. In this era of extraordinarily large sets of data--about stars or about genes--virtually all of science has become fully dependent on another rapidly growing technology: information technology.
A broad range of sciences--supported by technology--forms a healthy ecosystem. But in difficult times, they are likely to compete for resources and attention. Society and its governments--the major source of support for science, especially basic science, in our countries at this time--want science to solve life's most dire and immediate problems, conferring economic and social benefits. So these goals are likely to be emphasized at periods of fiscal stress, such as the times in which we now live. When the emphasis occurs at the expense of the early, basic phases of science, the imbalance jeopardizes our long-term future in two ways: by risking a shortfall of the unexpected new ideas that will foster invention far in the future; and by diminishing some of the pleasures of science that have attracted so many remarkable minds to it.
I aim to describe this situation in my own field of medical science. I intend to illustrate how scientists do their work, how our sub-cultures intersect, and how science delivers on its promise by turning discoveries into societal benefits.
The world looks forbidding and less enlightened these days to the publicly funded scientist--especially at this very moment in the United States. For the first time in my memory, Congress--at least the newly elected Republican majority in the House of Representatives--threatens to reduce the budgets of our most revered science agencies, including the National Institutes of Health. Vannevar Bush's "purest realms of science" are not much talked about in defense of our agencies. Few would venture to ask for money only to advance knowledge.
Two words dominate the arguments I hear to promote science funding. One is "innovation," a term usually taken to mean harnessing science and technology to solve practical problems and stimulate the economy. The other is "translation," used widely in medical sciences to denote the conversion of new knowledge about disease into a form that can directly benefit human health. Of course, neither of these concepts is bad. On the contrary. We should celebrate the capacity of science and technology to advance the public good--both the past achievements and the future promise.
One of the things that drew me and many of my colleagues to Barack Obama was his enthusiastic embrace of the importance of science and technology for our society. I recall participating in a panel discussion with him at Carnegie Mellon University in Pittsburgh, in May, 2008, before he had secured the Democratic nomination. He led an animated conversation, called the "competitiveness roundtable", with leaders in manufacturing, energy, medicine, education, labor, and the environment, aimed at convincing the audience that all the spheres we represented, and the American economy itself, are dependent on success in science and technology. His enthusiasm for this idea has not waned, as shown in his recent State of the Union speech (in which he said that if your airplane is losing altitude, the one thing you shouldn't jettison is the motor); in his budget proposal for 2012 that would expand the resources at many Federal science agencies; and even in the attention he has given to student science fairs.
I do not expect political treatment of science to get better than this, especially in America and in hard economic times. But I fear that even some scientists are abandoning the appropriate balance between the basic and applied sciences and forgetting how progress is actually made. This is not just an understandable pandering in tough times to the public's hopes for what science can produce; it is also a lost opportunity to teach how science actually works.
There are at least three grounds for concern about building our defense of science solely on its applications: the danger of promising more to society than science can deliver in the near future; the risk of justifying science solely for its capacity to meet defined and immediate needs, when it also needs to be defended for its unforeseen future utility; and the failure to appreciate scientific inquiry for its beauty and its human values.
In this enlarged consideration of science, we should also recall its limits. Science and technology may help to provide the material things that diminish discomfort and allow contented living, but those things are not the actual grounds for satisfaction in life. Pleasures--at least in my life--are as likely to be delivered by the arts and sports, by friendships and love, and by knowledge and discovery.
It also helps to specify what I mean by science. At one level, science represents an evidence-based way of thinking, distinct from that based on intuition or belief. In this sense, "natural philosophy," as science was once known, is not fundamentally different from many other disciplines that depend on rational, evidence-based thought. I learned this early in my college life from the writings of the literary critic (and Cambridge graduate and teacher) I.A. Richards, especially from his seminal book, Practical Criticism. Richards insisted that readers of poetry should try to understand a poem by beginning with the evidence, with the words on the page.
But science is not just rational thinking; there are many evidence-based disciplines. We generally intend the word "science" to refer to the study of the natural world--the attempt to describe and understand the physical, chemical and biological properties of both the living and inanimate objects we encounter in this universe. These disciplines are also more likely than the humanities and arts to provide opportunities for experimentation, for quantitative analysis, and for development of practical benefits.
Still, such a pragmatic assessment can undermine the true beauty and underlying purpose of science when it confronts the origins and content of the universe, the composition and laws of matter, and the evolution and function of life forms.
A deeper dive into basic versus applied science
In his Rede Lecture, Snow made some acerbic comments about the cultural divide that he perceived within science itself. "Pure scientists have by and large been dim-witted about engineers and applied science. They couldn't get interested." He offered an explanation: a willful elitism he called inherently British and built on class distinctions. "They (the pure scientists) wouldn't recognize that many of the problems (of applied science) were as intellectually exacting as pure problems, and that many of the solutions were as satisfying and beautiful. Their instinct…was to take it for granted that applied science was an occupation for second-rate minds."
One problem with this assessment is its simplified distinctions. It is often difficult to draw a firm line between basic and applied research, especially in fields like biology and medicine. There are many kinds of science between the two extremes, and they may not be easily categorized. For instance, most of my own work is centered on a specific disease, cancer, and hence might not be considered fully "basic"; on the other hand, it is not directly focused on prevention or treatment and so is not really "applied" either.
To describe how medical research works, it is more illuminating to reconstruct the events that led to improved health than to classify the activities rigidly. These stories usually show how a confluence of discoveries, observations, technical developments, and clinical trials eventually provides benefit. Such stories advance understanding in a way that abstractions fail to do and enhance an appreciation of all phases of research activity.
Still, it does seem to me that movements among these phases of research have recently gotten easier. Forty years ago, when my own scientific career was getting underway, the worlds of basic and applied research were more clearly separated and defined than they are today. Nearly all basic biological science was the purview of government and academic labs, whereas applied work occurred in the private sector, especially the pharmaceutical industry. Basic scientists valued their intellectual freedom and could claim the moral superiority of working without the motivation of profit as a trade-off for lower salaries. Those working to make products in industry saw virtue in utility and enjoyed team efforts.
But these worlds began to merge--and, in retrospect, very quickly--with the advent of recombinant DNA technology. I distinctly recall a small faculty gathering at the University of California-San Francisco in the mid-1970's when our colleague Herb Boyer--one of the inventors of the new technology--announced that he and Bob Swanson, a venture capitalist (a term new to most of us) were going to start a company to make products--and profits--with recombinant DNA. The others in the room were incredulous and joked about it nervously. Herb offered shares in the new company for a nickel a share, but I don't think anyone took up his offer. There were regrets when the company, Genentech, offered its shares publicly a few years later, and they rose quickly to nearly ninety dollars each.
Soon after this, the biotech industry took off, thanks to strong investor interest and to the swift recognition by many scientists of all stripes that this powerful new technology could be used to make beneficial biological products, like human insulin or growth hormone, or new vaccines, such as those against hepatitis B and papillomaviruses.
The biotech revolution swept away most of the reservations that my academic colleagues (and I) might have had about interactions with industry. The promise of medical benefit was exhilarating; the entrepreneurial atmosphere, especially in the fledgling start-ups, was fun; and the financial rewards were welcome. There were few who did not participate if invited. An important thing was happening that extended well beyond the simple validation of the promise of recombinant DNA methods. Molecular biology was coming of age.
When this story of scientific innovation--the birth of a new industry--is told, it usually begins with a fateful discussion between Herb Boyer and his Stanford collaborator, Stanley Cohen, at a Honolulu delicatessen, just a few years before the founding of Genentech. The remembered image is the corned beef sandwich over which they outlined the critical experiments--the tests that later convinced investors that something practical and economically valuable could be done with recombinant DNA. But coming into the story at this stage is like picking up Middlemarch in the later chapters to read about Dorothea Brooke's happy days with the light-hearted Will Ladislaw, and never learning about her earlier marriage to the somber scholar, Edward Casaubon.
The early chapters of the recombinant DNA story are not easy to read. They are replete with esoteric observations that were difficult to understand, could not possibly have been interesting to any investor, and would have been supportable only by a donor with a lenient outlook or by a government willing to gamble on abstruse basic science.
Bacterial viruses failed to grow in certain bacteria. Yes, but why? What was the mechanism of the restriction? Years of careful studies, without any prospect of economic return, were required to show that those bacteria were equipped with enzymes--we now call them restriction enzymes--that cut certain common sites in DNA. As a consequence, the resulting DNA fragments could be reassembled to make combinations (called "recombinant DNA") and propagated in bacteria. If the recombinant DNA encoded something useful, like human insulin, a product--patented, practical, and pricey--could then be made.
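The logic of cutting and reassembly is simple enough to sketch in a few lines of code. The sketch below is purely illustrative: the recognition site shown, GAATTC, is that of the real enzyme EcoRI, but the sequences are invented and the joining step is schematic (actual ligation involves "sticky ends" and a ligase enzyme):

```python
# Illustrative sketch of restriction cutting and recombination.
# GAATTC is the recognition site of the real enzyme EcoRI; the
# joining step is schematic (real ligation uses sticky ends and a ligase).

SITE = "GAATTC"

def digest(dna: str) -> list[str]:
    """Cut a DNA string at every occurrence of the recognition site."""
    fragments = []
    start = 0
    pos = dna.find(SITE, start)
    while pos != -1:
        cut = pos + 1  # EcoRI cuts between the G and the AATTC on this strand
        fragments.append(dna[start:cut])
        start = cut
        pos = dna.find(SITE, start)
    fragments.append(dna[start:])
    return fragments

# Two invented DNA molecules from different sources, each with one site.
vector = "TTTTGAATTCAAAA"      # standing in for a bacterial plasmid
insert = "CCCCGAATTCGGGG"      # standing in for a piece of human DNA

v_frags = digest(vector)
i_frags = digest(insert)

# Recombine: splice a fragment of the second molecule into the first.
recombinant = v_frags[0] + i_frags[1]
print(v_frags)       # ['TTTTG', 'AATTCAAAA']
print(recombinant)   # TTTTGAATTCGGGG
```

Note that the junction of the recombinant molecule regenerates the GAATTC site, just as real EcoRI ligation does.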
All the steps in the process that fueled this new industry from beginning to end were good science. But you can't help loving the people who, in the early days, stuck with their innate curiosity to solve an abstract biological problem, without seeing--or being able to see--the great industry that was lying in wait around the mysterious corner of bacterial restriction.
All science is full of such tales. But medical science is what I know best. So consider another example: the development of vaccines to protect against the most talked about and feared infectious disease of my own childhood: polio. The public--and I--best remember the heady days in the late 1950's when Jonas Salk and Albert Sabin first tested their vaccines, injected or oral, to demonstrate protection against infection by the three strains of polio virus. But, as my enlightened parents kept telling me, there were other, earlier heroes. The vaccine would not have been possible without a much less heralded history--the identification of polio viruses and the experiments that allowed them to grow in primate cells in a laboratory dish.
Or consider the serendipitous discovery of the class of drugs--the platinum agents--that later cured Lance Armstrong of his testicular cancer. Nearly fifty years ago, a bacterial physiologist, Barnett Rosenberg, wondered whether an electric current would affect the behavior of bacteria growing in solution. When the bacteria stopped dividing, he pressed on: the response was not due to the current, but to platinum compounds released from the electrodes. When those compounds also halted the division of mammalian cells in a culture dish, he and others began to test them against cancer cells. Some cancers, such as testicular cancer, responded dramatically.
A final example is my own story, so forgive me for a greater depth of detail. I have now spent the past forty years--most of my career as a scientist--in the midst of a dramatic transition in cancer research. When I entered the field in 1970, it was stymied by a long-standing mystery in cell biology: What causes the profound changes in human cells that allow them to grow and spread until they eventually kill an entire organism? Ideas abounded: infection by an outside agent, like a virus; inherited genetic instructions; metabolic effects of diet; sporadic mutations affecting normal genes.
Now, forty years later, we know a lot about how cancerous change happens, although far from everything we need to know. We know that changes in the form and function of a subset of genes are central to the dramatic transformation in cell behavior that produces a tumor. We know that blocking the effects of those genetic changes can cause a cancer cell to stop growing or even die. And we know that new therapies--antibodies or small chemicals that inhibit bad proteins--can be used to benefit cancer patients and sold at handsome prices.
My preparation for a role in this story was unusual and perhaps instructive. In the early 1960's, I cut short my graduate training in English literature in favor of medical school and clinical training. I was then thrust into a basic science laboratory at the National Institutes of Health at the advanced age of twenty-eight when I sought a way to avoid military service in the Vietnam War.
It was my first exposure to experimental work and revealing in at least three ways.

First, I learned how thrilling science can be when asking unanswered questions--peering over the edge of what is known--rather than performing cookbook laboratory exercises as part of a formal course.

Second, I saw the virtues of model systems. I was studying what might have been considered an abstruse problem in the control of genes in bacteria. But we were controlling genes with a chemical, called cyclic AMP, that was already known to be a mediator of hormone action in mammalian cells. The simple and powerful system we used revealed a mechanism of gene regulation that would have then been difficult, if not impossible, to discern with mammalian cells.

Third, I found that the path to scientific success depends as much on technology as on ideas. If you have a good "assay"--a way to measure something important precisely and efficiently--the future is bright. In my case, the technology was a powerful new test to detect the activity of specific genes. The assay depends on the pairing of DNA bases--the apposition of A's with T's and C's with G's--that holds together the strands of the "double helix." When strands of DNA (or its cousin, RNA) are closely related, they hybridize to form a double helix, allowing gene detection. The method, called molecular hybridization, was later critical for detection of specific genes implicated in cancer.
After two years at the NIH working on bacterial genes, I sought a chance to work on a problem closer to what I knew best--the diseases I had studied as a medical student and treated as a doctor. In retrospect, it would be easy to say that I chose cancer research because my mother had been diagnosed with breast cancer two years before and was very likely to die of it, as she did a year later. But the major motivations lay elsewhere; I had no illusions that I was going to save her life by studying her disease.
Of course, I knew that cancer was an important problem and a common one. Then, as now, nearly a third of women and half of men received a diagnosis in their lifetimes, and cancer's death toll was high, exceeded only by cardiovascular disease. The great opportunity at the time was to address the underlying problem in cancer research--how does a normal cell become cancerous?
Scientists trying to understand the origins of cancer and those seeking better ways to treat cancer were nearly completely disconnected, working in non-overlapping spheres, believing they had little to say to each other. But a propitious moment, one that would eventually bring these two worlds together, had been created by advances in basic research. Molecular biology permitted studies of individual genes in complex cells, thanks to new methods like molecular hybridization. Even more importantly, we learned that simple viruses caused cancers in animals.
The viruses were crucial, especially in the era before recombinant DNA technology was available, as it is now, for routinely isolating and propagating individual genes from animals in bacteria. Most cancer viruses contain only a small number of genes, fewer than five or ten, and yet are able to convert an animal cell into a malignant cell. This seemed a lot more tractable than working directly with mammalian cells, with their still unknown number of genes, likely tens of thousands. Surely cancer viruses would teach us something important about how cancer arises.
I chose to pursue these new leads in San Francisco with a new mentor, and soon-to-be long-term colleague, Michael Bishop. Our object of study might have seemed an improbable source of useful knowledge about human cancer: a virus that had been isolated from a tumor in chickens about sixty years earlier. But a few important discoveries encouraged our attention. A graduate of this University, Steven Martin, then working at UC Berkeley, had isolated a highly instructive mutant of the chicken virus. His work convinced nearly everyone that at least one viral gene (called src) was essential for turning normal cells into cancer cells after infection. What sort of gene was this src gene? It made cells grow excessively, but didn't help the virus grow. So why did a virus carry it? Where did it come from?
Using molecular hybridization to look for the origins of the viral src gene, Mike and I and our colleagues soon discovered that all normal animal cells--from chickens, other birds, all mammals, and even insects--contained a gene very closely related to the viral src gene in their chromosomes. That gene had not been placed in chromosomes by an earlier viral infection: the gene had all of the attributes of a genuine cellular gene, not a viral gene.
Thus the cancerous ingredient in the chicken cancer virus was derived from a normal cellular gene--one that had been jealously conserved during evolution. But what did this cellular src gene do normally? And what role might it play in causing cancers? If a slightly altered version carried by a chicken virus could drive a cell to behave like a cancer cell, perhaps the cellular precursor could do the same in its natural setting, the animal cell, if similar changes occurred there.
Our discovery only raised these conjectures; it did not tell us how cancers usually arise--in birds or humans. Over the next several years, many other cancer viruses were found, and many carried cancer-causing genes derived from normal cellular genes.
Then a very important bridge was crossed. Some of these cellular genes were shown to be altered, by mutations, in cancers from animals and, especially important, in human beings, even without being captured by viruses. At the same time, some functions of these genes were deciphered. Most encode proteins that control the growth and division of cells or other critical events that are altered in cancer cells. Further, some of those proteins were shown to be enzymes and thus, in principle, subject to inhibition by small chemicals. Thus a chemical inhibitor might work as a precisely targeted cancer drug.
Still, to this point, ten to fifteen years after our discovery of the cellular src gene, molecular studies of cancer had had no direct bearing on the care of cancer patients. The new knowledge provoked imaginative experiments and provocative ideas, but provided no direct means to prevent, diagnose, or cure cancers.
Over the past fifteen years, however, things have changed. Academic scientists, biotech companies, and pharmaceutical firms--independently and together--have classified cancers based on mutations in cancer genes; assessed an individual's risk of cancer based upon inherited mutations; and made drugs and antibodies that slow the growth of cancer cells, or even kill them, by attacking mutant proteins. Several of these new drugs and antibodies are effective treatments and approved for use in cancer patients. More are in clinical trials or in later stages of development.
Let's consider an example of this new phase of cancer research that began just down the road, at the Wellcome Trust's Sanger Institute on the Hinxton campus. Over the past twenty years, cancer viruses have been eclipsed as tools for the discovery of cancer genes by the methods that were used to characterize cell genomes. Using these methods, the Sanger Institute reported in 2002 that about 60% of melanomas, an often lethal cancer that usually begins on the skin, contain mutations in a gene called BRAF--a gene closely related to one of those cellular genes from which viral cancer genes are derived. The BRAF gene, like several other cancer genes, directs the cell to make an enzyme. Drug companies rapidly developed small chemicals that inhibit the mutant BRAF enzyme. Only eight years after the discovery of the BRAF mutations, one of these chemicals produced dramatic regressions in most cases of advanced, metastatic melanomas with BRAF mutations.
The interval between finding a cancer-causing mutation and showing that an inhibitor of it is clinically effective seems wonderfully short in the annals of drug development. But we also need to recognize the many years of more basic science required to set the stage properly to look for and to appreciate the BRAF mutation.
Inhibition of BRAF is but one of several examples of how the new conception of cancer that emerged in the 1970's is producing improved cancer treatments over thirty years later. Other examples include the antibody Herceptin in breast cancer; the drug Gleevec in an adult leukemia; another drug, Tarceva, in certain forms of lung cancer. None of these is fully curative and most are only transiently beneficial, for a few months or years, since drug resistance is common. But a barrier has been breached. Rational therapies, based on an understanding of cancer's origins, have been developed and shown to work, even if they are not fully successful.
So this is an exciting time in cancer therapy. There is an established path to making better diagnostic tests and drugs that will help patients soon---a compelling argument to support more funding of cancer research. The need for funds is especially compelling because most of the necessary knowledge is not yet in hand: the genetic damage in the many different forms of cancer; the functions of damaged genes and their proteins; the best targets for drugs; then the drugs themselves; and how cells become resistant to these drugs.
As we proceed down this path--and we must--other deeper issues should also be considered. At least 20 to 30 years of fundamental science, most not apparently relevant to human cancer, was needed to get us to the point of applying what we've learned about cancer to the problems of better diagnosis and treatment. Yet there remain vast areas of ignorance about the causes of cancer and the means by which cancer cells thrive and spread throughout the body.
Without more fundamental work, the prospects will be limited for preventing more cancers, for finding them at their earliest phase, and for destroying them at metastatic sites. For example, we know that obesity promotes cancer, but don't know why. We know that some commonly used drugs, like aspirin, can prevent certain cancers but we don't know how. We know that some conventional chemotherapies, like the platinum drugs I've mentioned, can cure, not just suppress, some cancers, like testicular cancers; but we don't understand what is different about those cancers. We know that the incidence of certain cancers varies dramatically from region to region, but don't know the causes of variation.
If we are going to control the tidal wave of cancerous diseases that is likely to accompany the aging of populations in all parts of the world, as much attention needs to be given to these and other open questions as to the beckoning field of targeted therapeutics.
New standards for evaluating science
I have reviewed the recent history of cancer research in some detail to provide an example of how medical science makes progress. But this has diverted me from commenting more broadly on the conduct of science. So before closing, I'd like to touch briefly on two related issues: how we evaluate science and the role of science in poorer parts of the world.
I argued earlier that, in stressful economic times, we place more emphasis on the pragmatic outcomes of science, on translation and innovation, and tend to neglect the fundamentals. This trend undermines the appreciation of discovery in troubling ways. When we tout the potential to "translate" what we have learned into commercial products, we risk undermining the way we evaluate science at all stages, especially at its earliest phase.
We begin to look at the palpable rather than the conceptual, using questionable metrics rather than professional judgment. Instead of hoping to discover what no one has seen before, our graduate students dream of authoring a paper in a famous journal that has a high "impact factor," because that is said to be the way to secure a good job. Colleagues judge each other for appointments and promotions based on aggregated impact factors, rather than assessments of the design and revelatory power of their experimental work. Universities themselves, especially in Europe, are evaluated by metrics that are delusionally quantitative, in exercises like the UK's Research Assessment Exercise. In this way, we risk turning the ideas, practices, and discoveries of scientific work into commodities and deliverables, such as papers, citations, patents, and products.
The new requirements to measure tangible products of research threaten the joy of discovery and the fun of talking about it. They also affect the atmosphere in which science is done and diminish traditional impulses to share findings and materials. The high value placed on publishing in what are deemed the most prestigious journals has also slowed the adoption of new, more equitable, and more effective publishing practices. For instance, the careerist demands of publishing in those journals have been among the most persistent barriers to success of the movement to so-called "open access" journals--journals that make their content fully available to anyone with an Internet connection and promote better use of new knowledge by other scientists.
Science and the developing world
The second issue concerns the role that science will play in the developing world. In his 1959 Rede Lecture, C.P. Snow argued that science would surely deliver the world from poverty within 50 years---that is, by 2009. He cannot be said to have been entirely right. At the same time, he was not entirely wrong. Science and technology, improved economies, and better governance have helped to deliver food, water, jobs, telephones, computers, and better health to many. The results have been longer lives and higher living standards in many places, although far from all. Snow would have been pleased to see science growing in selected fields in some countries--witness computer science and pharmacology in India, genomics and energy research in China, molecular biology in Singapore, and electronics in Korea. These countries may still be building on basic discoveries made in the US and Europe, but they are inciting concerns in more advanced economies about growing competition in the applied sciences.
There are some ironies here. Fears of competition in science, technology, and education from some of what were the world's poorest countries half a century ago are prompting rich countries to repair their own weaknesses in these domains. At the same time, China and India in particular are beginning not only to solve their own problems but also to help their neighbors elsewhere in Asia and in Africa. C.P. Snow's belief that science and technology would reverse the conditions of poverty in a half-century may, in large part, be realized in the next half-century by those poor countries themselves.
I would like to close by returning briefly to Amherst College, the place where this lecture began.
Last summer I was invited to give the opening lecture to the first-year students. My ambition was to stimulate their interest in science by describing its intellectual satisfactions rather than its material benefits. When I was in their place as an Amherst freshman, the Soviet Union's launch of Sputnik had just spurred international competition in science and technology. I was hoping to provide other kinds of inspiration.
I was helped by the requirement to assign a text. My wife, who self-identifies as science-deprived in her youth, recommended The Age of Wonder by yet another Cambridge graduate and fellow of Churchill College, Richard Holmes. Holmes's book describes some remarkable individuals in the late 18th and early 19th centuries--Joseph Banks, William Herschel, Humphry Davy, and others--people who explored the earth and its remote populations; the skies, with telescopes and balloons; and the chemical composition of matter. Their discoveries inspired poets and the public, even when they offered nothing more practical--at least in the short run--than to reveal something fascinating about the universe that no one ever knew before.
This is the spirit that needs reinvigoration in the natural sciences. We can do this--even while we are struggling to find funds to support them, and even while we are advertising their potential to improve lives and enrich societies. If we fail, the best minds will look elsewhere for intellectual satisfaction, and fundamental features of nature--seeds of future invention--will remain undiscovered.