Science is too elitist and needs to be more democratic, or so it is often said. A House of Lords committee on science and technology argued in 2000 that it is condescending to talk about a lack of public understanding of science, when public ignorance is mainly the fault of scientists failing to communicate. What is more, since science is not value-free, scientists will more readily gain public confidence if they declare the values which underpin their work and engage with the values and attitudes of the public. The public should be more involved in the direction of scientific research and risk assessment, since it "understands uncertainty and risk well." Only if science and technology are made more accountable and democratic can we restore public trust. Above all, we must be cautious. In order to avoid the mistakes made over BSE, we must pay more attention to public apprehensions and apply the precautionary principle.
The deep flaws in this approach are exposed in an important book by Adam Burgess, Cellular Phones, Public Fears and a Culture of Precaution. The focus of the book is the 1999-2000 Stewart inquiry into the safety of mobile phones. Most of the book is a meticulous analysis of the origins of fears about microwaves, the reasons behind the creation of the inquiry, the evidence, or lack of it, about the harm microwaves cause, and the very different reactions of different countries. The inquiry was a bizarre episode in the history of public inquiries, both because there was no compelling reason for setting it up and because some of its recommendations seemed to contradict its own findings.
Before the safety of mobile phones was questioned, there had been a number of alarms about microwave ovens and VDUs. But much more important was a protracted dispute in the US, from the 1970s until the mid-1990s, over whether electromagnetic fields (EMF) from overhead power lines were a cause of leukaemia in children. Despite exhaustive epidemiological studies, no evidence of such a link was ever found. (The White House science office estimated the cost of the scare, mainly from re-siting power lines, at $25bn.) Similar fears surfaced in Europe about possible harm from mobile phones. These fears spread when analogue phones were replaced by digital ones requiring more phone masts, which attracted greater attention. Articles appeared in the British press claiming that mobile phones "cooked your brains" (Sunday Times), caused hypertension, loss of memory, miscarriages and so on. It was alleged that people living near mobile phone masts developed cancer, and the Daily Express even claimed that "sickly pupils" recovered after leaving a school sited near a mobile phone mast. Of the hundreds of media reports about mobile phones or masts reviewed by the Stewart inquiry, 79 per cent alleged adverse health effects; only 9 per cent referred to the absence of scientific evidence. Concern was expressed about mobile phone masts sited near schools, though most complaints from the public were about the ugliness of masts and their effect on property prices.
The Stewart inquiry was not forced on the government by widespread public concern. Despite scare stories in the press, people liked their mobile phones and kept buying them. Nor was there any scientific evidence of actual or potential harm; on the contrary, a mountain of evidence was available from the US suggesting that claims about the dangers of EMF from power lines were unsubstantiated. An inquiry in the Netherlands had reached the same conclusion about mobile phones. The government acted because it wanted to show it was being ultra-cautious and, in the words of Tessa Jowell, the health minister at the time, "to keep ahead of public anxiety."
In due course, the Stewart inquiry confirmed the American findings that there was no evidence of any harmful effects. Nevertheless, it advocated caution and suggested that children should use mobile phones as little as possible and hold them away from their bodies. The committee also recommended further research, and the body which has been set up to carry it out has been authorised to spend a further £7m. The report pleased campaigners, who could argue that although it found no evidence of danger there was clearly something to worry about. Not surprisingly, it baffled the press. Were mobile phones a risk or not? The Mirror described it as "shambolic," and the Evening Standard talked of "safety chaos." Campaigns against mobile phone masts carry on; sales of mobile phones continue to rise.
The Stewart inquiry is an example of the government's policy of trying to pacify critics of a controversial technology by meeting them halfway. It illustrates the defensive posture that scientists and experts have been forced into since the BSE disaster. The Royal Society adopted a similarly defensive attitude in 2002 when it published an update on its report on genetically modified crops and human health. The update confirmed the society's previous findings that there was no evidence of danger to human health from GM crops. It went on to say, quite reasonably, that special care should be taken to monitor new food products for allergenic effects, whether they are derived from conventional or GM crops. However, its accompanying press release was phrased to convey the impression that the Royal Society was not making a case in favour of GM technology and did not regard it as free from risk. The press reported that the Royal Society had abandoned support for GM crops. Clearly the society felt it would command more public trust if it was seen to take public fears seriously. Yet the effect was to increase mistrust.
The government has now decided to take the first step towards licensing GM crops on a case by case basis, by allowing the cultivation of a herbicide-tolerant maize, which has not only passed the EU's safety standards but has also been shown, after a prolonged and thorough field trial, to have beneficial effects on biodiversity. This is a brave decision, and contrasts with the government's previous policy of appeasement. The leading opponents of GM crops are no longer influenced by scientific evidence, but have launched a semi-religious crusade. Greenpeace and Friends of the Earth do not object to genetically modified drugs, but they reject genetically modified plants as Frankenstein inventions. Yet the technique used to modify a crop is precisely the same as that used to transfer a gene from one species to another to make human growth hormone, or human insulin for people with diabetes, or a blood-clotting agent for haemophiliacs. It is, of course, true that planting GM crops has a wider impact on the environment. But no rational principle can judge it right to make better drugs to protect us from disease but wrong to modify plants to make them resistant to insect pests. What the opposition to GM crops lacks in logic it makes up for in passion. When I recently wrote an article for the Guardian in favour of GM crops, I was deluged with abusive emails within hours.
It is argued that democratic governments should allow public opinion to decide what crops should be licensed. But this argument tends to conflate democracy with the activities of green campaigning groups. There is clearly an important role for public consultation over controversial scientific developments, but it needs to be structured carefully to prevent domination by special interests. The public discussion that took place in a largely non-adversarial atmosphere before the parliamentary votes on the use of human embryos for stem cell research was an example of effective consultation. On the other hand, the botched public debate on GM crops was not. Anti-GM lobby groups were allowed to dominate the exercise, while the public in general showed little interest. Imagine what the result of the stem cell debate would have been had the Roman Catholic church been selected to represent the public. The broadcast media tend to make the same mistake about who represents public opinion. They assume that green lobbies, such as Greenpeace, Friends of the Earth and the Soil Association, are disinterested representatives of the public instead of interested parties with their own agendas.
Demands to increase the public's involvement in the future direction of science have some merit. Scientific developments that raise moral issues, such as human cloning, cannot be left to a scientific elite and forced on the public, whatever its misgivings. And if being "democratic" simply means that scientists should be open about the work they do, the demand is welcome: work done openly tends to allay suspicion about possible dangerous consequences. That was the main lesson of the BSE debacle. Unfortunately, because of the threat from animal rights terrorists or the needs of commercial secrecy, openness is not always possible.
Science policy is primarily a matter for the elected government, with appropriate public consultation. But direct political control is fatal to good science, just as state control is the death of art. The birth of modern science at the time of the Enlightenment was made possible by the end of the domination of the church. Galileo famously challenged the right of the church to interfere with the scientist's quest for truth: "This would be as if an absolute despot, being neither a physician nor an architect, but knowing himself free to command, should undertake to administer medicines and erect buildings according to his whim - at grave peril of his poor patients' lives and speedy collapse of his edifices." His statement still applies when demos is substituted for despot. In the US there have been several attempts to impose politically correct criteria on research to prevent it straying into areas such as possible genetic explanations for crime or sexual proclivities. Democratic control means more oversight by publicly accountable committees. The result is a bias against unorthodoxy, originality and excellence.
Perhaps the most common reason for demanding more public involvement lies in the belief that the assessment of risk should not be left to scientists, because conventional science is thought to have an in-built bias against discovering dangers to health. This was the implicit idea behind the Stewart inquiry and one possible explanation for its curious recommendations. Burgess traces the history of this belief to a reaction against "a rationalistic, probabilistic approach to problems." This reaction was a feature of the "anti-toxic movement" that sprang up in the US in the 1970s, stimulated first by Rachel Carson's 1962 book Silent Spring, and later by the battle waged at Love Canal, in Niagara Falls, New York, in 1978. In that case, a community living in homes built on top of an old chemical waste tip claimed that they suffered an unusual incidence of birth defects, cancers and other diseases. The families were evacuated and the affair has been much celebrated as a triumph for local action. A later study by the Centers for Disease Control found "that the illnesses afflicting the residents of Love Canal were not unusual, but were to be expected in a normal community of that size."
What evidence is there that science has an in-built bias against discovering risks whereas, in the words of the Lords committee, "the public understands uncertainty and risk well"? If the latter proposition were true, the lottery would collapse tomorrow. Parents would accept the MMR vaccine and worry much less about their children being murdered by strangers (it happens to fewer than one in a million children each year). Three quarters of the population would not worry about pesticide residues in food, as polls show they do. John Krebs, the head of the Food Standards Agency, has pointed out that one cup of coffee contains more carcinogens than we ingest from pesticide residues in food in a whole year. It sounds democratic and flatters the public to say it understands risk well, but the claim is false.
What role could lay opinion play in setting safe levels for pesticide residues in food, for example? Deciding whether the concentration of a particular chemical is harmful is a technical process that depends on expert knowledge. First, the presence of the compound must be detected in the food and the amount present measured; then it must be determined whether that amount will cause harm. The last stage may be difficult, and expert opinions may well differ, but on the basis of the best available evidence a committee of experts sets maximum permissible limits, usually many times below the lowest level at which any harm has been observed. If they are not ultra-cautious and someone is poisoned as a result, they will all be blamed.
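To make that safety margin concrete, here is a worked example with hypothetical figures (the hundredfold safety factor, however, is the standard convention in toxicology): if feeding studies in animals detect no adverse effect at doses up to 10mg per kg of body weight per day, the acceptable daily intake for humans is typically set a hundred times lower:

\[
\text{ADI} = \frac{\text{NOAEL}}{\text{safety factor}} = \frac{10\ \text{mg/kg/day}}{10 \times 10} = 0.1\ \text{mg/kg/day}
\]

where NOAEL is the no-observed-adverse-effect level, one factor of ten allows for differences between animals and humans, and the other for variation among humans.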
At least, that is the rational way of proceeding. Yet when the EU sought to replace the precautionary limit for total pesticide residues in drinking water with a science-based standard, it had to abandon its proposal after receiving 12,000 protest letters in a campaign organised by Greenpeace. Burgess gives a similar example: the Environmental Protection Agency in the US invited public comment on "safe" levels for arsenic in water. But how can popular instinct, or the special insight of green activists, improve on the assessment of experts? How can intuition get it right except by pure accident? If consulted in unemotional circumstances, people generally acknowledge that the assessment of risk which depends on technical knowledge should be left to experts. Unless you are a Christian Scientist, you expect a specialist to diagnose whether you have a brain tumour or not. If you then need an operation, you employ a brain surgeon.
Even less logical is the demand that because science can never be value-free, scientists should openly declare the values that underpin their work. This is a hangover from the postmodernist and relativist fallacy that there is no objective truth and that science is only one truth among many, a social construct whose hypotheses depend on the standpoint and background of their proponents. Of course scientists have moral and social values. Science does not. In the end, the motives of those who do research are irrelevant. Scientists may embark on a particular research project because they hope it will help mankind, make them famous, increase the profits of their company, or simply because they can get it funded. Whatever their motives or values, the results of their research will be subjected to objective scrutiny. Do the findings stand up to critical examination? Are they reproducible? Can they be verified or falsified? If the results are biased by the researcher's prejudices, they will be worthless and his or her reputation will suffer. Bad motives can produce good results and good motives bad ones. Similarly, privately financed research can make valuable discoveries, just as publicly financed research can produce disappointing results. Of course, companies are more likely to back research whose findings will profit them, while publicly financed research is more likely to be directed at the public good. In the end, the results are what count. The irrefutable response to the supposed relevance of a scientist's background, values and motivation is to ask the question put by Robin Fox of Rutgers University: what does it matter that Mendel was a European monk? His findings about the genetics of peas would have been no less valid had he been a black, Arabic-speaking atheist.
Perhaps the most worrying conclusion of Burgess's book is that the Stewart inquiry was a response to the new politics of precaution, which now plays an unduly prominent role in public life. It might have been justified if there had been overwhelming public concern. It would have been justified, even in the absence of public concern, if there had been scientific grounds for believing that mobile phones could be harmful to health. Neither was the case. Instead it was an application of the precautionary principle.
This principle has not received the critical attention it deserves. Because it seems to reflect common sense - "better safe than sorry," "look before you leap" - it commands general assent. But it is a principle that is either so obvious as to be a useless guide to policy, or so vague as to be meaningless, or else an incentive for inaction that is positively dangerous to the well-being of our society. Leaving aside the wealth of conflicting definitions (there are at least 14 official ones), it is most frequently invoked in circumstances where there is popular disquiet, uncertainty about the effects of innovation and an absence of scientific evidence. If we do not know whether something will cause harm, runs the argument, we would be wise not to take a chance. But this is a reaction born of pessimism about science and the distrust of experts engendered by the experience of BSE.
Why should we assume that unforeseen consequences must be harmful? They are just as likely to be beneficial. No one foresaw that aspirin would be a wonder drug, useful not only as a painkiller but also as an agent to prevent blood-clotting, heart attacks and strokes. Viagra was developed as a drug to combat angina. When the optical laser was invented in 1960, it was dismissed as an invention looking for a job. To assume that consequences must be harmful is the expression of a society that is fearful of the future.
The advocates of the principle ignore the need to weigh benefit against harm, or to consider the effect of taking no action - a failing common among environmentalists. Some NGOs, for example, are demanding a worldwide ban on the use of DDT, claiming it has been one of modern technology's disasters. Yet DDT has been the most successful agent ever invented for saving life. The WHO estimates that it has prevented more than 50m deaths from malaria. Since an effective ban came into operation, malaria has returned to many regions with a vengeance and now kills over 1m people a year in Africa.
The politics of precaution projects harmful consequences into the future without allowing for future solutions. There is talk of "irreversible effects." But many diseases once thought incurable can now be treated. Nearly all forecasts about technological developments decades ahead have proved false. Who foresaw the rapid worldwide spread of the mobile phone? Anticipated problems are often solved before they overwhelm us.
Adherents of the precautionary principle are especially vocal among the opponents of genetic modification. They declare that absence of evidence of harm to human health or to the wider environment from GM crops is not enough. Since it is impossible to prove a negative, this is in effect a demand for a total ban. Martin Teitel, the former leader of the Council for Responsible Genetics in the US, has admitted that rather than make the politically difficult demand that the science of biotechnology be shut down, activists should force scientists to abide by the precautionary principle, because having to prove a negative means that "they don't get to do it, period."
The precautionary principle represents the triumph of the Spartan spirit, ever fearful of the terrors change may bring, over the Athenian spirit that looks for new worlds to conquer. If we aimed to avoid all activities that might conceivably cause harm, we would do nothing. If the policy of precaution prevails, we will have to stop the world because some people want to get off.