It was a triumph, not of science, but of superstition. That is what we usually call it when people take a physical process devoid of intention, emotion, or agency and fill the dark shadows with loosely plausible imaginings. The spectacular defeat of Garry Kasparov brought on an effusion of these projections. An RS/6000 computer with a catchy name (Deep Blue), an effective programme and a solicitous entourage was loaded down with "brilliance", "seeing the end" and "playing like a god", despite the fact that it no more deserves these human attributes than a storm cloud or a chainsaw. Indeed, it has no interesting attributes at all except the ability to generate, rather reliably, the least bad next move in a game of chess.
Superstition quickly gave way to dread. "I'm a human being," said Garry Kasparov, easily grasping the heart of the matter. "When I see something that is beyond my understanding, I'm afraid." So, too, were the other humans. The machines have won, we said to each other, this time in the highest arena of intellect. What will be their next move?
The answer is that they will not make one. We will. We make the moves in all these odd, unequal games, because of a fundamental, unfrightening asymmetry. Kasparov's protestation of alarm was a fine illustration of how utterly dissimilar the two performers in the recent 64-squared circus actually were. Deep Blue has no capacity for seeing, understanding or fearing. All it has are mathematical rules for generating output from input. It cannot imagine, anticipate, analyse or employ any of the other basic styles of thought with which the human mind sketches out stories of the future. Kasparov knows this and, in knowing it, he shows that he does indeed understand the computer, because that is all there is to know. What it appears he cannot understand is how that meaningless process produces moves to which his mind immediately ascribes the most profound of meanings.
This is not because there is something odd about computers. It is because there is something odd about people. Whether they do it by science or by superstition, people invest meaning in everything. Indeed they are so good at smearing meaning out over the world that they can create a situation where two utterly different things, human thought and mechanical calculation, can miraculously be seen as part of the same process. They called it chess, they called it a contest, and after the game was over they said that their side lost. But there were not two sides there. There was only one side: humanity. There was a prodigious calculating machine, sure, but it was not a player. It was not even something that could know what a player might be. It just sat there, embedded in a world of programmers, minders, opponents and commentators which turned the remorseless shuffling of certainties in its semiconductors into a battle. It just sat there and soaked up all the meaning there was to be had.
Machines are not our enemies, our successors or our friends. They are our extensions. We are busily smothering our world with machinery in much the same way we smother it with meaning, and by and large it is a pretty good thing. Most of the time we avoid thinking that the extension is a threat. We have become used to the fact that machines are better at moving earth, calculating factorials and carrying messages. And there is not much doubt that we are making them better. That means they will surprise us, as Deep Blue surprised Kasparov, by not always meeting our expectations, by not doing things the way they used to.
There may be limits to what we can do with our present machines. There may turn out to be things that machines cannot do until we make them think like us rather than just act in a way we can make sense of. At this point, we do not know. But as we go on playing our games of progress we will doubtless find out, as long as we remember what the nature of the game is. As Andy Clark put it in his recent book Being There (Bradford Books): "We use intelligence to structure our environment so that we can succeed with less intelligence. Our brains make the world smart so we can be dumb in peace." We structure our environment with meanings and with machines, and sometimes we do it so well that we forget we are doing it. We slip into thinking that it is the world that makes sense, not us. That is when we start getting scared of our shadows.