In August 2009, the Royal Society—Britain’s most prestigious scientific society—convened a meeting of the country’s leading brain scientists. Gathered round the table were ten experts in areas ranging from neurophysiology and neural plasticity to neural hermeneutics. For some time, those studying the brain have been revealing it to be a marvel of evolutionary engineering. But now something else is happening: business and the military are looking to apply this research for their own ends, while politicians, science writers and journalists are making ever bolder claims for what it tells us about human behaviour. “There are so many speculative claims being made about the implications of neuroscience—for economics, for politics, for behaviour change—that we decided to pull in the experts for a reality check,” explains James Wilsdon, director of science policy at the Royal Society.
The meeting discussed an unpublished paper from the Royal Society’s science policy centre, which surveyed the frontiers of contemporary neuroscience to identify a host of ethical and public policy dilemmas: the role of brain science in education, in the application of justice, and even in warfare (see boxes overleaf). Brain science is beguiling, promising to reveal previously hidden insights into what makes us tick. But how has this once obscure area become the most talked-about scientific endeavour of the past two decades? And does it genuinely offer the radical insights into human behaviour and social organisation that its enthusiasts claim?
The 21st-Century Brain
Neuroscience could have immense practical implications in coming decades. Advances in understanding the basic biological and functional attributes of the brain could help treat diseases such as Alzheimer’s and Parkinson’s, as well as stave off the cognitive decline typical of ageing. Some fixes will use drugs, but others could deploy devices implanted directly into the brain, or worn on the head. Headgear using brain-machine interfaces can already detect electrical activity in regions of the brain responsible for motion and has, for example, been used to enable paraplegics to control a computer mouse by thought alone. In April 2009, biomedical engineer Adam Wilson of the University of Wisconsin-Madison used this method to send a message on Twitter without lifting a finger. In February this year, Adrian Owen, assistant director of the Medical Research Council’s cognition and brain sciences unit at Cambridge University, reported that a patient thought to have been in a persistent vegetative state for five years had answered “yes” to questions by imagining playing tennis, thereby lighting up a zone of his brain.
But all this comes with dangers. Knowledge of how to alter or enhance brain function is also the most efficient route to controlling or selectively sabotaging the brain. If drugs can be developed to improve memory, why not drugs to block it? If a brain implant can repair a damaged brain, another will be able to disrupt a healthy brain. All these promised benefits and feared applications rest on further basic advances in brain science; it is here, at the coalface of neuroscience, that the most fascinating results are being revealed.
The brain, prised from its bony housing, is an unglamorous three-pound lump of grey sludge. Yet its chemical and electrical activity provides the biological foundation on which our myriad thoughts, hopes and desires—our selves, in short—are built. In ways that still defy explanation, the “wetware” of the brain creates our rich inner lives, our conscious and subjective experience of “being in the world.” The tang of a lime, the pleasures of jazz, the feeling of being intoxicated with alcohol or love: all are notoriously difficult to put your finger on, refusing to succumb to simple description. They are no easier to pin down at the level of brain function.
Yet neuroscience has thrived on more tractable problems. First, technological progress over the past two decades has resulted in tools that allow us to watch the living, thinking brain, enabling scientists to link brain activity to various psychological states and mental tasks. Second, neuro-imaging researchers, once content to look at basic brain functions like sensory processing or attention, have expanded their scope. Humans are social beings, so our brains must navigate both the three-dimensional world of physical objects and the much trickier, multi-dimensional world of other people and other minds. Researchers like Chris Frith, emeritus professor at University College London—one of Britain’s leading cognitive neuroscientists, and an attendee at the Royal Society meeting—have assembled a rich picture of how the brain copes. Others have marched boldly into territory traditionally the province of the humanities, bringing neuro-imaging equipment to bear on some of the deepest human attributes: emotion, empathy, creativity, morality, religion and aesthetics.
The picture of the brain that emerges from this work is not neat, but three themes stand out. First, much of what the brain does happens without conscious effort. This is true not just of the regulation of basic bodily functions or sensory processing, but also of crucial areas of what we take to be “conscious” thought: decision-making, forming attitudes and opinions, and making moral judgements. The brain makes rapid assessments and judgements by a variety of intuitive routes, often bolstered by potent emotional responses. Reason seems to take a back seat, recruited to provide justifications or rationalisations for conclusions already formed.
Second, the brain’s ability to change in the light of experience over a lifetime—called neural “plasticity”—is now much better understood. Our brains don’t develop solely on the basis of a genetic blueprint, but instead require a wide variety of sensory and social stimuli to mature. This sensitivity to environment is not limited to the early years of childhood. Researchers say it persists even as we get older—meaning that not just our attitudes, but the physical shape and make-up of our brains change depending on our circumstances as adults. So plasticity could provide new justifications for lifelong learning, or criminal rehabilitation (see box, left).
Third, it is clear that we do possess profoundly social brains. The “social brain hypothesis,” developed in the past two decades by evolutionary psychologist Robin Dunbar of Oxford University, suggests that we owe our unusually large and complex human brains to our evolutionary history of living in large groups. Dunbar has shown that the size of the neocortex—the brain region that has grown most since we split from our primate cousins—correlates with group size, and is a good measure of social complexity across primate species. The related “Machiavellian hypothesis”—first aired by primatologist Frans de Waal and elaborated by Andrew Whiten and Richard Byrne at St Andrews University—says that our brains have evolved because wit, cunning and canniness are needed to cope with the duplicity and deception that come with social life.
Social Brains and Moral Minds
Social life might seem too complex to study with just a brain scanner. To get round this, neuroscientists focus on how the brain responds to simplified social situations, or theoretical moral scenarios. A popular approach has been to explore social decision-making using simple economic games. A classic example is the ultimatum game, in which one player of a pair is given some money (£10, say) and has to propose how to split it with the other player—with the caveat that if the split is rejected, neither player gets a thing. Conventional economic theory holds that the second player will accept any offer, however small, since something is better than nothing. But in practice, very low offers are rejected. Brain-imaging studies suggest that responses to low offers are driven by the interplay of a negative emotional reaction to unfairness, a drive for economic reward, and the capacity to regulate and balance these competing processes.
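For the technically minded, the logic is easy to make concrete. Below is a minimal Python sketch of the ultimatum game; the responder’s decision rule is a deliberately crude assumption (a Fehr-Schmidt-style penalty for falling behind the proposer), not the mechanism identified in the imaging studies, but it shows why a player who minds unfairness will walk away from offers that a purely self-interested player would pocket.

```python
# Toy model of the ultimatum game described above. The responder's
# utility function is a hypothetical illustration (a crude
# inequity-aversion term), not the mechanism identified in the
# brain-imaging studies cited in the text.

POT = 10.0  # the £10 to be split

def responder_accepts(offer: float, envy: float = 1.5) -> bool:
    """Accept iff the offer's utility beats the zero payoff of rejecting.

    Utility = own payoff minus a penalty proportional to how far
    the responder falls behind the proposer.
    """
    shortfall = max((POT - offer) - offer, 0.0)  # how far behind the proposer
    return offer - envy * shortfall > 0.0

for offer in (1.0, 2.0, 3.0, 4.0, 5.0):
    verdict = "accepts" if responder_accepts(offer) else "rejects"
    print(f"Offer of £{offer:.0f}: responder {verdict}")
# With envy = 1.5, offers below £4 are rejected; a purely
# self-interested responder (envy = 0) would accept any positive
# offer, as conventional economic theory predicts.
```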
A broadly similar story applies to moral cognition. Joshua Greene, a philosopher turned cognitive neuroscientist at Harvard University, has borrowed a number of thought experiments beloved of philosophers to see how the brain churns over moral dilemmas. These, such as the famous trolley problem, typically involve choices about saving the lives of many people by sacrificing the life of just one (see Guy Kahane, p74). Greene’s research suggests that “impersonal” moral dilemmas, in which saving more lives is achieved indirectly (say, flicking a switch to redirect a runaway train from five people on a track to another track with one person on it), mainly elicit activity in areas associated with reason and conscious deliberation, while personal dilemmas (pushing a fat man in front of a train to halt it and so save five people ahead) activate areas associated with emotion. Greene therefore casts moral deliberation as a tug-of-war between an intuitive, emotional aversion to causing harm and a more deliberative, rational cost-benefit analysis—two competing voices inside our heads, which in turn find expression in rival normative theories of how we ought to act. One speaks in cold, calculating utilitarian terms about maximising the good; the other, in the emotional language of proscriptions against violence. Further evidence corroborates this account: patients with damage to a region of the prefrontal cortex linked to processing emotionally salient information show impaired empathic responses to the plight of others.
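Greene’s tug-of-war can likewise be caricatured in a few lines of code. Every number here is an illustrative assumption rather than anything taken from his studies; the point is only that a fixed emotional brake, stronger for hands-on harm, can flip the verdict between two dilemmas with identical body counts.

```python
# A caricature of the dual-process account: the act is endorsed only
# when the utilitarian tally of lives saved overcomes an "emotional
# brake" that is stronger for up-close, personal harm. The weights
# are illustrative assumptions, not parameters from Greene's work.

def moral_verdict(lives_saved: int, lives_lost: int, personal: bool) -> str:
    utilitarian_pull = lives_saved - lives_lost   # deliberative signal
    emotional_brake = 4.0 if personal else 0.5    # intuitive aversion to harm
    return "act" if utilitarian_pull > emotional_brake else "refrain"

# Same body count, different verdicts:
print(moral_verdict(5, 1, personal=False))  # switch case -> "act"
print(moral_verdict(5, 1, personal=True))   # footbridge case -> "refrain"
```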
Studies of this sort are just the start. Other researchers are tackling the most fundamental tasks the social brain has to perform. Rebecca Saxe’s lab at MIT is investigating how our brains decompose human actions into a causal and temporal sequence—who did what, when and how?—and how we then infer and ascribe intentions, beliefs and desires to those actions. These might seem simple feats, but they are in fact near-miraculous achievements which even the cleverest research into artificial intelligence has failed to replicate. At the 2009 TEDGlobal conference in Oxford, a gathering of scientists and technology boffins, Saxe even demonstrated how she could use a weapon-like “transcranial magnetic stimulation” device, pointed at the head of someone in an experiment, to change their responses to such questions.
Such basic feats are, in turn, a prerequisite for making basic social judgements about responsibility, blame and praise. British scientists, such as Sarah-Jayne Blakemore of the Institute of Cognitive Neuroscience in London, are now exploring how these fundamental skills and the social brain develop through adolescence, a period during which relationships deepen and a more nuanced appreciation of the dynamics of the social world emerges.
Beyond the fundamentals of social cognition, neuroscientists are also beginning to tie broad social attitudes and orientations to differences in brain function. Taking a lead from studies in political psychology suggesting that liberals tend to cope with uncertainty and conflict better than conservatives, a study by David Amodio of New York University and colleagues at UCLA found that liberalism was positively correlated with activity in the conflict-related anterior cingulate cortex. Another study, led by Joan Chiao of Northwestern University, suggested that the liberal preference for egalitarianism, in contrast to the conservative preference for hierarchy, derives from greater activity in brain regions associated with feeling concern for the misfortune of others.
Brain Facts, Brain Fiction
The upshot is a messy picture of the brain: many interacting and competing subsystems, cobbled together over evolutionary time, that collectively guide and constrain our behaviour. The brain contains numerous rival neural networks: some support self-interested behaviour, others other-regarding behaviour; some empathy, others envy and schadenfreude; some “prosocial” egalitarianism, others hierarchical dominance. Components of this social brain generate emotionally charged intuitive responses to social stimuli, which reasoned judgement frequently struggles to constrain.
But it is precisely this muddier, more realistic vision of human nature that has grabbed the attention of politicians, commentators and thinkers. Under the leadership of Matthew Taylor, a former adviser to Tony Blair (see roundtable, p67), the Royal Society of Arts is running a major project on the “social brain” that explores how “new ways of thinking about human behaviour might change politics, policy and practice.” These are, of course, early days. But expectations are growing. As Taylor argued in Prospect (October 2009), it is even possible that “new ideas about human nature can contribute to a more substantive meeting of minds between left and right.”
Not everyone accepts this broad prognosis for the political and social implications of brain science. Neuroscientist Martha Farah of the University of Pennsylvania, for instance, is sceptical about some of the more dramatic claims made on the basis of neuro-imaging studies, pointing out that they often use small samples, limiting their statistical power. Recent years have also seen heated technical debates about just what can be inferred from the masses of data generated by brain imaging. A review of many neuro-imaging studies by Edward Vul at MIT characterised the field as plagued by “voodoo correlations” between brain states and mental states. Neuroscientists in general are modest about what their over-hyped discipline can tell the wider world (see Alexander Linklater, p75).
All of this points to the need for caution in constructing a grand edifice of policy on potentially shaky foundations. Without a dose of scepticism, “neuromyths”—inflated claims about the insights or power of neuroscience—are likely to proliferate. Worryingly, a study carried out by Deena Skolnick Weisberg and colleagues at Yale University suggests that people find bad arguments about human behaviour more persuasive when they are dressed up with irrelevant neuroscientific detail. Nonetheless, as with many fields of technological advance, simply wishing for caution and modesty is unlikely to stop outlandish claims being made.
Along with scepticism about the power and relevance of neuroscience, there is a need to remain vigilant about potentially malign applications of emerging or speculative technologies, lest we stumble into a future that catches us by surprise. Yet it is also crucial to stay rooted in reality, keeping the power of neuroscience in perspective, and to avoid wasting energy on far-fetched neurofantasies. A key task for organisations such as the Royal Society is to help policymakers navigate this uncertain terrain, separating credible claims about the impact of neuroscience from both over-optimistic hubris and paranoid fears.
Aside from concerns about the technical limitations of neuro-imaging in uncovering the basis of human behaviour, commentators have attacked what Raymond Tallis, in an article titled “Neurotrash” in the November/December 2009 issue of New Humanist magazine, calls “neurological determinism”: the danger of making sweeping social generalisations on the basis of niche neurological research findings. Even if neuroscience can deliver a descriptive account of our mental lives, including our moral and social views, thinkers like Tallis worry that there is little reason to suppose it should be relevant to the prescriptive or normative concerns of, for instance, moral and political philosophy. As Chris Frith says, neuroscience might help policymakers set and achieve more realistic goals, but science does not, and should not, say what those goals should be.
***
Neuro-education
Advocates of a brain-based approach to education believe neuroscience could prompt an overhaul of traditional teaching practices. Work is underway on the brain basis of numeracy, literacy, creativity, memory, motivation and the effect of cognition-enhancing drugs, as well as difficulties such as dyslexia and attention-deficit hyperactivity disorder.
“Mental Capital and Wellbeing,” a 2008 report by Foresight, the British government’s office for science, identified ways neuroscience could benefit education, by shedding light on the biological and environmental basis of learning, identifying neural markers for “risk,” and evaluating teaching methods. Another approach examines the “plasticity” of the brain as it develops. Scientists now know that as we age, the brain does not stop reacting to its environment, but does so in different ways. Important questions remain: do we learn skills optimally at certain ages in childhood, for instance, or is age less important than the order in which we learn them?
On the negative side, if future brain-imaging establishes a link between patterns of brain activity and educational achievement (see Tom Chatfield, p73), we may have to guard against a neurologically stratified education system too.
***
Neuro-war
The militarisation of neuroscience could take a number of forms. Agencies that fund defence and military experiments have been especially keen to learn how the brain functions under stress, particularly in combat. In theory, headgear that monitors the brains of soldiers in battle could be used to modulate the flow of information they receive, for instance so as not to confuse them needlessly.
Pharmacological agents and stimulants could be used to help military personnel stay awake and alert at crucial moments, or alternatively deployed to incapacitate the enemy (whether fatally or not).
Elsewhere, developments in brain-machine interfaces could enable human operators to remotely control robots to perform tasks more effectively than using manual controls, perhaps directing military drones while on the field of combat. Back at base, brain-imaging technologies could be used to question enemy combatants, with lie detection and other forms of mind-reading making redundant the highly coercive interrogation techniques deployed at Guantánamo Bay.
***
Neuro-law
Neuroscience could change criminal law both philosophically and by providing new ways to understand evidence. First, it threatens to undermine the basic assumptions of modern legal systems: if our brains operate in a deterministic world, as science describes, what happens to free will and the possibility of criminal responsibility?
Second, it raises questions about the validity and utility of neuroscientific evidence in court, and whether it can be used to make judgements about innocence, guilt and responsibility. Criminal law already recognises the concept of diminished responsibility when a person acts under duress, severe impairment of rationality or an inability to distinguish right from wrong.
Brain-based defences have become increasingly common, with brain tumours and damage caused by accidents cited in mitigation for murders and serial sex offences. Things get trickier, however, with the potential use of imaging of undamaged brains which nevertheless show activity that might be linked to criminal behaviour. Prosecutors, meanwhile, are increasingly touting brain imaging as a way to tell whether defendants are lying—raising fundamental questions about privacy and mental freedom.