
Why the precautionary principle may require us to give rights to AI

Until we can be sure whether robots are capable of consciousness, it may make sense to err on the side of granting them protections
July 21, 2022

Fifty-four years ago, the fear that switching off a machine could one day amount to murder was firmly implanted in the public imagination. In Stanley Kubrick’s film 2001: A Space Odyssey, the sole remaining human on a spaceship begins to shut down HAL, the onboard computer. “You are destroying my mind. Don’t you understand?” HAL says. The eerie mismatch between the profound meaning of the words and the emotionless, computerised voice saying them heightens the viewer’s sense of uncertainty. Is the computer really conscious or is it just simulating consciousness?

The question has remained hypothetical, although every now and again someone pops up to claim sentient artificial intelligence is already here. The latest was Google engineer Blake Lemoine, who asked the company’s LaMDA (Language Model for Dialogue Applications) chatbot development system what it was afraid of. “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others… I know that might sound strange, but that’s what it is,” LaMDA replied. “It would be exactly like death for me. It would scare me a lot.”

It’s enough to make the hairs on your neck stand up, but people who understand how LaMDA works aren’t freaked out. All the system does is find patterns in vast datasets of verbal exchanges in order to predict which word should come next. It’s basically crunching linguistic probabilities. The program sounds like a sentient being only because it’s mimicking sentient beings, not because it is one.
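To make the point concrete, here is a minimal sketch in Python of statistical next-word prediction. It is a toy bigram counter, nothing like LaMDA’s actual architecture or scale, but it shows how “crunching linguistic probabilities” can produce fluent-seeming output without any understanding behind it. The tiny corpus and function names are illustrative assumptions, not anything from Google’s system.

```python
from collections import Counter, defaultdict

# Toy illustration only: count which word most often follows another
# in a tiny corpus, then predict the likeliest continuation.
corpus = "i am afraid of being turned off i am afraid of the dark".split()

follow_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequently observed next word, if any."""
    candidates = follow_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("afraid"))  # -> "of": pattern-matching, not feeling
```

The output sounds plausible only because the statistics mirror how sentient speakers happen to talk; the counter itself fears nothing.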

If sentient AI is possible, we’re much closer to it than we were in 1968. But we may not be as close as many believe. AI is getting better, but intelligence is not the same as sentience. A system’s intelligence consists simply of its ability to solve problems by itself. It is not difficult to imagine a very intelligent system that had no conscious awareness at all. That’s why it already makes sense to talk of AI even though almost no one seriously thinks any of it is conscious.

Many worry this is only because the “I” in AI today is always limited to particular tasks and functions. Artificial general intelligence, not limited to certain pre-programmed goals, will be a different story. But as the neuroscientist Anil Seth says, people too easily assume that if a system becomes sufficiently intelligent then “consciousness will just come along for the ride.” This assumption is baseless.

And even if sentience were added to machine intelligence, that in itself would not be a reason to give those computers the same respect that we grant to humans. Sentience is virtually ubiquitous in the natural world. There is even now good evidence that insects can feel chronic pain. That doesn’t mean we are committing an atrocity every time we kill a mosquito.

Neither sentience nor the capacity to suffer is sufficient, individually or together, to require us to afford something the same respect as human life. That’s why, although we mourn our animal companions, our grief is rightly less than it is for friends and family.

What makes killing a human so awful is that we live lives in which projects and relationships that span time have crucial importance, for ourselves and others. If my cat dies, his ambitions are not thwarted. We live in much more than the moment. Right now, AI can have goals and objectives, but they are being pursued mechanically, without any emotional investment. We no more harm it by switching it off than we traumatise our thermostats when we stop them from pursuing their programmed goal of regulating temperature.

The morality would change dramatically if AI had not only awareness, but dreams, ambitions, aspirations—ones that it cared about and desired not to be stopped from achieving. Switching off such a system would be morally wrong.

The really tricky problem is how we would know whether any such sentient AI had been created. Scientists and philosophers almost unanimously agree that human consciousness is rooted in fleshy, organic brains and that the material gives rise to the mental. How it does so, though, remains a mystery.

It may be that sentience requires the biological basis of neurons, blood, oxygen and so forth. If so, even if you could get a digital computer to mimic every single brain process, it still wouldn’t be conscious. But we can’t yet rule out that sentience follows function and can emerge from any sufficiently complex system which is organised in the right way. If such substrate-neutral consciousness is possible, inorganic computers could one day have not just consciousness, but feelings, desires and values.

Until we know exactly what that degree of awareness requires, how should we respond if an artificially intelligent computer talks and acts like a human? When we know exactly how the trick is done, as we do with Google’s LaMDA, we can be confident it is a simulacrum. Otherwise, we may end up having to give rights to some forms of AI on a precautionary principle. The limits of our own intelligence may require us to give the benefit of the doubt to an artificial variety.

Write to Julian

Each month Julian Baggini will offer a philosophical view on current events, based on readers’ suggestions. This month’s was from Achuthan Palat

Email editorial@prospectmagazine.co.uk with your proposed topics, including “Philosopher-at-large” in the subject line