For science this is both the best and the worst of times. The best because its research institutions have never been so impressive, its funding never more lavish. This is the era of Big Science, the financing of whose mega projects is now routinely measured in tens or hundreds of millions of dollars. While the total science research budget for the US just prior to the second world war ran to only $230m, by 1998 that figure had leapt several orders of magnitude. Biomedical research alone received $62bn and over the last ten years that figure has almost doubled again, soaring past the hundred billion dollar mark and dwarfing the GDP of a dozen countries. During this period, capital investment for new research facilities tripled to $15bn.
This endeavour is immensely productive, generating a tidal wave of research papers in scientific journals, whose thick shiny volumes occupy a greater acreage of library space every year. In 1980 a year's worth of the Journal of Biological Chemistry (to take one example) already ran to a daunting 12,000 pages. By last year its size had grown eightfold to 97,000 pages or 25m-odd words, filling an entire library shelf. And this is just one of hundreds of scientific and medical journals. Put them all together, and it is possible to glimpse the scale of the explosion in new knowledge in the recent past.
So the best of times, but also the worst. Pose the question "What does it all add up to?" and the answer, on reflection, seems surprisingly little, certainly compared with a century ago, when funding was an infinitesimal fraction of what it has become. In the first decade of the 20th century, Max Planck's quantum theory and Einstein's special theory of relativity would together rewrite the laws of physics; Ernest Rutherford described the structure of the atom and discovered gamma radiation; William Bateson rediscovered Mendel's laws of genetic inheritance; and the neurophysiologist Charles Sherrington described the "integrative action" of the brain and nervous system. The revolutionary significance of these and other discoveries was recognised at the time, and they also opened the door to many scientific advances over succeeding decades.
By contrast, the comparable landmarks of the recent past have been rather disappointing. The cloning of a sheep generated much excitement but Dolly is now a stuffed exhibit in a Scottish museum and we are none the wiser for the subsequent cloning of dogs, cats and cows. It will no doubt be a similar story with Craig Venter's recent creation of "artificial life." Fabricating a basic toolkit of genes and inserting them into a bacterium—at a cost of $40m and ten years' work—was technologically ingenious, but the result does less than what the simplest forms of life have been doing for free and in a matter of seconds for the past three billion years.
The practical applications of the massive commitment to genetic research, too, are scarcely detectable. The biotechnology business promised to transform both medicine and agriculture, but in the words of Arthur Levinson, chief executive of the pioneering biotechnology company Genentech, it has turned out to be "one of the biggest money-losing industries in the history of mankind." There are promises that, given 30, 40 or even 100 years, all will become clear, that stem cell therapy will permit the blind to see and the lame to walk, and that we will have a theory of everything, or, as Stephen Hawking puts it, "know the mind of God." But they remain promises.
More than a decade ago, John Horgan, a staff writer for Scientific American, proposed an explanation for the apparent inverse relationship between the current scale of research funding and scientific progress. The very success of science in the past, he argued in his book The End of Science (1996), radically constrains its prospects for the future. We live "in an era of diminishing returns." Put simply, the last 60 years have witnessed a series of scientific discoveries that, taken together, rank among the greatest of all intellectual achievements, permitting us for the first time to hold in our mind's eye the entire history of the universe from its inception to yesterday. So, within living memory, we have learned how the universe came into being at the moment of the big bang 15bn years ago. We know how the first stars were formed and how within their fiery interiors the chemical elements were created by the process of nuclear fusion. We have learned how 4bn years ago a vast cloud of intergalactic gas and particles coalesced to form our solar system; how our earth acquired its life-sustaining atmosphere; and how the movement of massive plates of rock beneath its surface created the continents and oceans. We have identified the very first forms of life that emerged 3bn years ago and that "universal code" strung out along the double helix by which all living things replicate their kind. And we now know the details of the physical characteristics of our earliest ancestors and of their transformation into modern man. It is difficult, even impossible, to imagine how so comprehensive an achievement can be surpassed. Once it is possible to say "this is how the universe came into being," and so on, anything that comes after is likely to be something of an anticlimax.
Almost to his surprise, Horgan found that many of the prominent scientists he interviewed concurred. "We have been so impressed by the acceleration and the rate of magnificent achievements," observed H Bentley Glass, a former president of the American Association for the Advancement of Science, "we have been deluded into thinking it can be maintained indefinitely." The physicist Richard Feynman once expressed a similar view: "We live in an age of the discovery of the fundamental laws of nature. It is very exciting, but that day will never come again. Like the discovery of America, you only discover it once."
But others predictably disagreed, leading to Horgan being "denounced," he proudly admitted, by no fewer than a dozen Nobel laureates and the editors of both Nature and Science. The contention that "science has reached its limits," his critics argued, had been expressed many times in the past only to be consistently disproved. Famously, Lord Kelvin at the close of the 19th century predicted that the future of the physical sciences was to be looked for "in the sixth place of decimals": that is, in futile refinements of the present state of knowledge. Within a few years Einstein had proposed his theory of relativity and the certainties of Lord Kelvin's classical physics were overthrown. But, Horgan responded in a robust defence of his views, the current situation is different, for once science has encompassed the two extremes of matter, the minuscule structure of the atom and the vastness of the cosmos, the opportunities for further progress are clearly limited.
Countering his critics' charge that there remain many unanswered questions (Why is there something rather than nothing? What prompted the big bang? Why is the cosmos intelligible?), Horgan retorted that such issues are not resoluble by the methodology of science. The proposed explanations, such as superstring theory, which would have the simplest elements of matter vibrating in ten dimensions, or the multiverse hypothesis that there are billions of parallel universes besides our own, are "unconfirmable speculation."
For all its plausibility, Horgan's "end of science" scenario is inconsistent with the exponential surge in research funding of the recent past, and with the sagging library shelves' worth of knowledge it generates. His thesis also fails to take into account the significance of major technical developments originating in the 1980s that promised to resolve the two final obstacles to a truly comprehensive account of our place in the universe: how it is that the genetic instructions strung out along the double helix give rise to the near-infinite diversity of form and attributes that so readily distinguishes one form of life from another; and how the electrical firing of the brain "translates" into our subjective experiences, memories and sense of self.
Those technical developments are, first, the ability to spell out the full sequence of genes, or genomes, of diverse species (worm, fly, mouse, man and many others) and, second, the sophisticated scanning techniques that permit neuroscientists to observe the brain "in action," thinking, memorising and looking out on the world. Both sets of developments signalled a radical departure from conventional laboratory-based science, instead generating petabytes (thousands of trillions of bytes) of raw data that require supercomputers to analyse and interpret. This explosion of data has indeed transformed our understanding of both genetics and neuroscience, but in ways quite contrary to those anticipated. (See Philip Ball's article, Prospect June 2010).
The genome projects were predicated on the reasonable assumption that spelling out the full sequence of genes would reveal the distinctive genetic instructions that determine the diverse forms of life. Biologists were thus understandably disconcerted to discover that precisely the reverse is the case. Contrary to all expectations, there is a near equivalence of 20,000 genes across the vast spectrum of organismic complexity, from a millimetre-long worm to ourselves. It was no less disconcerting to learn that the human genome is virtually interchangeable with that of both the mouse and our primate cousins, while the same regulatory genes that cause, for example, a fly to be a fly, cause humans to be human. There is, in short, nothing in the genomes of fly and man to explain why the fly has six legs, a pair of wings and a dot-sized brain, while we have two arms, two legs and a mind capable of comprehending the history of our universe.
The genetic instructions must be there—for otherwise the diverse forms of life would not replicate their kind with such fidelity. But we have moved in the very recent past from supposing we might know the principles of genetic inheritance to recognising we have no conception of what they might be.
It has been a similar story for neuroscientists with their sophisticated scans of the brain "in action." Right from the beginning, it was clear that the brain must work in ways radically different from those supposed. Thus the simplest of tasks, such as associating the noun "chair" with the verb "sit," causes vast tracts of the brain to "light up," prompting a sense of bafflement at what the most mundane conversation must entail. Again, the sights and sounds of every transient moment, it emerged, are fragmented into a myriad of separate components, without the slightest hint of the integrating mechanism that would create the personal experience of living at the centre of a coherent, unified, ever-changing world. Reflecting on this problem, the Nobel prize-winner David Hubel of Harvard University observes: "This abiding tendency for attributes such as form, colour and movement to be handled by separate structures in the brain immediately raises the question how all the information is finally assembled, say, for perceiving a bouncing red ball. These obviously must be assembled—but where and how, we have no idea."
Meanwhile the great conundrum remains unresolved: how the electrical activity of billions of neurons in the brain translates into the experiences of our everyday lives, where each fleeting moment has its own distinct, intangible feel; where the cadences of a Bach cantata are so utterly different from the taste of bourbon or the lingering memory of that first kiss.
The implications are obvious enough. While it might be possible to know everything about the physical materiality of the brain down to the last atom, its "product," the non-material mind, remains unaccounted for, together with its five cardinal mysteries: subjective awareness; free will; how memories are stored and retrieved; the "higher" faculties of reason and imagination; and that unique sense of personal identity that changes and matures over time but remains the same.
The usual response is to acknowledge that perhaps things have turned out to be more complex than originally presumed, but to insist that these are still "early days," too soon to predict what might yet emerge. Certainly both genetics and neuroscience could generate further petabytes of basic biological and neuroscientific data almost indefinitely, but it is possible, in broad outline, to anticipate what they will reveal. Biologists could, if they so wished, spell out the genomes of each of the millions of species with which we share the planet, but that would only confirm that they are composed of several thousand similar genes that "code" for the cells from which all living things are made. Meanwhile, the really interesting question of how those genes determine the unique form and attributes of such diverse creatures would remain unresolved. And so too for observing the brain "in action": a million scans of subjects watching a bouncing red ball would take us no further in understanding how those neuronal circuits experience the ball as being round and red and bouncing.
The contrast with the supreme intellectual achievements of the postwar years is striking. At a time when cosmologists can reliably infer what happened in the first few minutes of the birth of the universe, and geologists can measure the movements of continents to the nearest centimetre, it seems extraordinary that geneticists can't tell us why humans are so different from flies, and neuroscientists are unable to clarify how we recall a telephone number.
Has science perhaps been looking in the wrong place for solutions to questions that somehow lie outside its domain: what it might be that could conjure the diversity of form of the living world from the monotonous sequence of genes, or the richness of the mind from the electrochemistry of the brain? There are two possible reasons why this might be so. The first, obvious on reflection, is that "life" is immeasurably more complex than matter: its fundamental unit, the cell, has the capacity to create everything that has ever lived and is billions of times smaller than the smallest piece of machinery ever constructed by man. A fly is billions upon billions upon billions of times more complex than a pebble of comparable size, and possesses properties that have no parallel in the inanimate world: the capacity to transform the nutrients on which it feeds into its own tissues, and to repair and reproduce itself.
And so too the laws of biology: the principles by which the genetic instructions strung out along the double helix determine the living world must be commensurately more complex, by billions upon billions of times, than the laws of physics and chemistry that determine the properties of matter. So while it is extraordinary that cosmologists can infer the physical events in the wake of the big bang, this is trivial compared with explaining the phenomena of life. To understand the former is no indication of being able to explain the latter.
The further reason why the recent findings of genetics and neuroscience should have proved so perplexing is the assumption that the phenomena of life and the mind are ultimately explicable in the materialist terms of, respectively, the workings of the genes and the brain that give rise to them. This is a reasonable supposition, for the whole scientific enterprise of the past 150 years has itself been predicated on there being nothing in principle that cannot ultimately be explained in materialist terms. But it remains an assumption, and the distinctive feature of both the form and "organisation" of life (as opposed to its materiality) and the thoughts, beliefs and ideas of the mind is that they are unequivocally non-material: they cannot be quantified, weighed or measured. And thus, strictly speaking, they fall outside the domain that the methods of science can investigate and explain.
This then is the paradox of the best and worst of times. Science, the dominant way of knowing of our age, now finds itself caught between the rock of the supreme intellectual achievement of delineating the history of the universe and the (very) hard place of the apparent inscrutability of the phenomena of life and the mind to its investigations.
Still, the generous funding of science research will continue so long as the view prevails that the accumulation of yet more petabytes of data will, like a bulldozer, drive a causeway through current perplexities. But that view undoubtedly has its hazards, for, as the saying goes, "under the banyan tree nothing grows." And the banyan tree of Big Science threatens to extinguish the true spirit of intellectual inquiry. Its mega projects, organised on quasi-industrial lines, may be guaranteed to produce results, but they are inimical to fostering those traits that characterise the truly creative scientist: independence of judgement, stubbornness and discontent with prevailing theory. Big Science is intrinsically conservative in its outlook, committed to "more of the same," the results of which are then interpreted to fit in with the prevailing understanding of how things are. Its leading players, who dominate the grant-giving bodies, will hardly allocate funds to those who might challenge the certainties on which their reputations rest. And when the geeks have taken over and the free thinkers have been vanquished, that really will be the end of science.