Creator and saviour? Turing spent the second world war at Bletchley Park, where he devoted himself to breaking the Enigma code
Digitized: The Science of Computers and How it Shapes Our World by Peter J Bentley (Oxford University Press)
Turing’s Cathedral: The Origins of the Digital Universe by George Dyson (Allen Lane)
Alan Turing: The Enigma by Andrew Hodges (Centenary Edition, Vintage)
Alan M Turing by Sara Turing (Centenary Edition, Cambridge University Press)
This June marks a hundred years since the birth of the man from whose brain sprang what is arguably the single most powerful idea of the 20th century, the idea that has done more than any other to influence our lives: the idea of the universal, digital computer.
The digital universe, writes George Dyson in Turing’s Cathedral, owes its beginnings to two prophets: the 17th-century German philosopher and mathematician Gottfried Leibniz, and the 20th-century mathematician and logician John von Neumann. “Alan Turing,” says Dyson, “arrived in between.” As Dyson’s own book makes clear, however, this account seriously understates Turing’s role in shaping the world in which we live.
Alan Mathison Turing was born in London on 23rd June 1912 into a well-off upper middle-class family. His mother, Sara, devotes the first chapter of her biography of him to a chronicle of his distinguished ancestors, which on his father’s side included, apart from various knights and baronets, his grandfather John Robert Turing, who was a mathematics don at Cambridge and Chaplain of Trinity College. On his mother’s side, he was descended from Irish Protestant landed gentry.
All this might lead one to picture Alan Turing as having the social confidence that came from being a member of the class that ruled the largest empire the world had ever seen. But he was throughout his life retiring and unassuming, cursed with a stammer that made social intercourse difficult. His academic brilliance, however, was manifest from an early age. Though regarded as rather a misfit at Sherborne boarding school, he won several school prizes and a scholarship to study mathematics at King’s College, Cambridge. At King’s, he was made a Fellow at the age of 22, an achievement the rarity of which was commemorated in a clerihew:
Turing
Must have been alluring
To get made a don
So early on
In 1936, not yet 24 years old, Turing wrote his classic paper, “On Computable Numbers, with an Application to the Entscheidungsproblem,” in which he presented for the first time the idea of the universal digital computer. At that time, the word “computer” carried no connotations of machinery; it meant a person who calculates, just as a conductor is one who conducts. Turing, however, offered in this paper a precise description of computation as a purely mechanical procedure and this—a decade or so before any electronic digital computers had been built—allowed one to imagine computers as machines.
Of course, Turing was not the first to imagine calculating machines. In the 17th century Leibniz invented a machine, the “Stepped Reckoner,” capable of performing all four arithmetical operations—addition, subtraction, multiplication and division. However, this machine (examples of which were built during Leibniz’s lifetime) was more a calculator than a computer; it could tell you the answer to complicated sums, but it could not use those sums to represent other forms of “computation.” It was not, that is, a universal machine. And it is the universality of modern computers that has made them such an indispensable part of modern life. What we have learned in the 20th century is the power of binary arithmetic. Using mathematical operations that have as their values just two possibilities—0 and 1—we can design incredibly sophisticated word processing programs, spreadsheets, and databases, as well as digitally encode pictures, music and movies. Almost everything, it seems, can be reduced to a series of zeros and ones.
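By way of illustration (a minimal sketch of my own, not an example drawn from any of the books under review), here is how a number and a word reduce to strings of zeros and ones in a few lines of Python:

```python
# A number written in binary, and a word reduced to the bits of its bytes.
number = 23
print(format(number, "b"))   # 10111  (23 in binary)

text = "Turing"
bits = "".join(format(byte, "08b") for byte in text.encode("utf-8"))
print(bits)                  # 010101000111010101110010011010010110111001100111
```

Pictures, music and movies are encoded by the same trick, only with vastly longer strings of bits.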
Though his “Stepped Reckoner” was a long way from being an all-purpose computer, Leibniz has some claim to being the first to recognise the all-embracing power of binary arithmetic. Early in his career he drew attention to the fact that all numbers can be represented in a binary system. He was also the first person to have the vision of a universal computer in the sense that he envisaged a language, a “universal characteristic” in which all truths, including those of mathematics and morals, might be expressed with the rigour of mathematics. “If controversies were to arise,” Leibniz wrote, “there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in their hands, to sit down to their slates, and to say to each other (with a friend as witness, if they liked): ‘Let us calculate.’” But this vision, impressive though it is, does not suffice to establish Leibniz as the person who gave us the logic of universal computation, since he does not tell us how this “universal characteristic” is to be achieved.
Another man often credited with having anticipated the modern computer is the British mathematician and inventor, Charles Babbage. Babbage designed two machines, neither of which was built during his lifetime. The first, the “Difference Engine,” designed in the 1820s, was capable of performing impressively complex mathematical calculations, but like Leibniz’s “Stepped Reckoner,” it was more of a calculator than an all-purpose computer. The second, the “Analytical Engine,” designed 20 years later, comes closer to being a universal computer in the sense that it is programmable. However, it differs from the modern computer in the important respect that, while its programs would have been on punched cards, its calculations would have been carried out by a complex series of wheels and cogs. As Turing’s biographer, Andrew Hodges, emphasises, this is very different from the machine that Turing thought of.
The truly vital idea embodied in the modern computer, Turing’s idea, is that of the internally stored modifiable program, the idea of storing programs in the same form as data and intermediate workings. To understand the importance of this, Hodges writes, “think of what happens when you want a new piece of software. You can download it from a remote source, and it is transmitted by the same means as email or any other form of data.” The program, like the data it uses and the outputs it provides, is “just a sequence of electronic on-or-off states which lives on hard disk… along with everything else.” It is only with this idea that one can really harness the power of those zeros and ones, and this idea follows from “the deep principle that Alan Turing saw in 1936: the Universal Turing Machine.”
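The point is easier to see in miniature. In the toy Python sketch below (my own illustration, not Hodges’s), the “program” is stored as an ordinary string, alongside the data it operates on, and becomes a set of instructions only at the moment we choose to run it:

```python
# A caricature of the stored-program idea: the "program" below is just data,
# a string stored like any other string, until we decide to execute it.
memory = {
    "data":    [3, 1, 4, 1, 5, 9, 2, 6],
    "program": "result = sorted(data)",   # a program, held as ordinary data
}

namespace = {"data": memory["data"]}
exec(memory["program"], namespace)        # now treat that data as instructions
print(namespace["result"])                # [1, 1, 2, 3, 4, 5, 6, 9]
```

That, in caricature, is the stored-program idea: instructions and data live in the same memory, in the same form, which is why software can be downloaded, copied and modified exactly as data can.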
Turing had a deep interest in technology and was keen to see his idea implemented in physical form, but, to begin with, this idea was a contribution not to engineering but to pure mathematics. In particular, it was a way of answering a question posed by the great German mathematician, David Hilbert. This question—the Entscheidungsproblem (decision problem) in the title of Turing’s 1936 paper—concerns whether it is possible to construct a general method for deciding, for any given mathematical proposition, whether it is provable or not. Turing’s notion of a “universal computing machine” was offered as a means of making more precise the idea of a “decidable method”: a mathematical problem is effectively decidable if (and only if) it is computable by a Universal Turing Machine. Using this definition, Turing was able to show that the answer to Hilbert’s question is “no,” there is no general method for deciding the provability of mathematical propositions.
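For readers who like to see the machinery, here is a bare-bones Turing machine simulator in Python (a sketch in the standard textbook formulation rather than Turing’s 1936 notation). The simulator itself is fixed; everything it does is dictated by the transition table handed to it as data, which is the sense in which a single machine can be “universal”:

```python
# A minimal Turing machine simulator. The "program" is the transition table:
# (state, symbol) -> (symbol to write, head move L/R, next state).
def run(table, tape, state="start", steps=1000):
    cells = dict(enumerate(tape))         # sparse tape; blank cells read "_"
    head = 0
    for _ in range(steps):
        if state == "halt":
            return "".join(cells.get(i, "_")
                           for i in range(min(cells), max(cells) + 1))
        symbol = cells.get(head, "_")
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    raise RuntimeError("did not halt within the step limit")

# Example "program": flip every bit on the tape, then halt at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run(flip, "10110"))                 # 01001_
```

Turing’s negative answer to Hilbert turns on machines of this kind: there is provably no transition table one could write that decides, for every table-and-tape pair, whether the computation will ever halt.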
Turing’s answer to Hilbert’s question was important enough, but the idea he developed to answer it, the idea of the universal computing machine, has changed all our lives. As Peter J Bentley puts it in Digitized: “The Universal Turing Machine became the theoretical blueprint for all electronic computers in the world.” When the paper was first published, however, Turing was disappointed at the reaction. He knew that it was a fundamentally important piece of work, but it attracted little comment or discussion. One of the very few people who recognised its significance was the man Dyson credits with being the prophet of the computer age: John von Neumann. One of the strange things about placing Turing between Leibniz and von Neumann is that von Neumann, eight years older than Turing, was already a well-established, internationally famous mathematician when Turing was starting his career. “If a mentally superhuman race ever develops,” the physicist Edward Teller once said, “its members will resemble Johnny von Neumann.”
Born into a wealthy Hungarian-Jewish family, von Neumann made his reputation in the 1920s with his work on set theory, before leaving Budapest, first for Göttingen, where he worked with David Hilbert, and then for Princeton, where, in 1933, he was given a permanent professorship at the recently established and prestigious Institute for Advanced Study.
As both Hodges and Dyson emphasise, Turing and von Neumann could hardly have been more different. Where Turing was (to his mother’s mortification) slovenly and unkempt, von Neumann was always impeccably turned out in a smart and fashionable business suit. While Turing stammered, von Neumann spoke clearly, confidently and precisely, and was at ease, as Turing was not, in the corridors of power. When he travelled, Turing went third class and stayed in hostels; von Neumann always travelled first class and stayed in the most expensive hotels. Finally, while von Neumann had a reputation as a ladies’ man, Turing was openly and unapologetically gay. As Hodges says, however, “mathematics did not see these things,” and mathematically the two were extremely close: both were logicians with an interest in computability, both keen to see their ideas embodied in machine form. Despite their differences, therefore, it was inevitable that their paths would cross at several points. Turing’s very first academic publication was a response to, and a small improvement on, a paper by von Neumann, who, during Turing’s first year as a Fellow of King’s, spent a summer visiting Cambridge. Then, in the academic year 1936-37, during which “On Computable Numbers” was published, Turing was at Princeton, where he was given an office close to von Neumann’s.
Had it not been for the war, it seems possible that the world’s first electronic digital computer might have been the result of a joint project between these two intellectual soulmates. As it was, both Turing and von Neumann, working on either side of the Atlantic, made fundamental contributions to the Allied war effort. Turing spent the war at the British cryptanalytic headquarters at Bletchley Park (midway between Oxford and Cambridge), where he devoted himself to breaking the Enigma code that the Germans used in naval communications. Meanwhile, von Neumann was employed at the highest level on a number of military projects, including the Manhattan Project, where his chief contribution was to solve the mathematical problems that stood in the way of building an implosion-type atomic bomb.
When the war ended, both Turing and von Neumann were determined to build an electronic machine that would embody the computational principles Turing had published in 1936. Dyson’s book is a detailed account of the project led by von Neumann: the Electronic Computer Project based at Princeton. The result of this project was a room-sized computer that stored five kilobytes of information. It was conceived at the end of 1945 and was up and running in 1951. The story of this project and of the people involved in pursuing it makes for an absorbing read, but Dyson is surely over-egging it when he says that “the entire digital universe can be traced directly to [it].” After all, as he himself says, the machine “was the physical realisation of Alan Turing’s Universal Machine,” and not the first such realisation either: “It was not the first computer. It was not even the second or third computer.”
The first proper computer, that is, the first realisation of a universal Turing machine, was built in Manchester in the summer of 1948 by a team led by Turing’s old colleague, Max Newman, who had worked at Bletchley Park. Soon after, Turing joined the Manchester laboratory as its deputy director and spent the rest of his life in the city. Tragically, the rest of his life amounted to just six more years. In 1952 he was arrested and tried for gross indecency after the police learned of his relationship with a young Manchester man called Arnold Murray. Turing was open about the relationship and made no secret of the fact that he saw nothing wrong with it; after being found guilty, he was spared prison only by agreeing to a course of “chemical castration” that involved injections of the female hormone oestrogen. Two years later he was found dead in his bedroom, a half-eaten apple lying beside him. The cause of death was cyanide poisoning. It was almost certainly (though his mother Sara was never reconciled to this) suicide.
A few years earlier, in 1950, Turing published a paper that has come to rival “On Computable Numbers” as his most famous work. The paper asked a question that has reverberated in philosophy, cognitive science and computer science ever since: can machines think? Just as he had done with Hilbert’s “decision problem,” Turing sought to bring a new precision to this question by proposing an effective procedure—what is now known as the “Turing Test”—for determining whether we should or should not attribute intelligence to a machine.
The test involves three participants in three different rooms—a person, a computer and a judge. The judge asks the other two participants questions in an effort to determine which is the person and which the computer. If the judge cannot tell which is which, then, Turing argued, we would have to say of the computer that it is intelligent. “I believe,” Turing wrote, “that in about 50 years’ time it will be possible to programme computers, with a storage capacity of about 10⁹ [bits], to make them play the imitation game so well that an average interrogator will not have more than a 70 per cent chance of making the right identification after five minutes of questioning.”
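Schematically, and purely as a sketch of my own (the ask_human and ask_machine functions are hypothetical stand-ins for the two hidden participants), the protocol looks something like this in Python:

```python
import random

def imitation_game(judge_questions, ask_human, ask_machine):
    # Hide the two participants behind the labels "A" and "B" at random.
    respondents = [("human", ask_human), ("machine", ask_machine)]
    random.shuffle(respondents)
    assignment = dict(zip("AB", respondents))

    # The judge puts the same questions to both over a text-only channel.
    transcript = {
        label: [(question, ask(question)) for question in judge_questions]
        for label, (identity, ask) in assignment.items()
    }

    # A human judge would now read the transcripts and guess which label
    # hides the machine; the machine "passes" if the guess is no better
    # than chance.
    answer_key = {label: identity for label, (identity, _) in assignment.items()}
    return transcript, answer_key
```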
Those 50 years have long since passed and, though we now think of computer storage of 10⁹ bits (about 120 MB) as puny (most mobile phones have more storage than that), the Turing Test has not yet been passed. It turned out to be harder than Turing realised for a computer to fool us into thinking that it is a person. On the other hand, Turing’s more general prediction—“I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted”—seems to have come true. We now talk unselfconsciously and apparently non-metaphorically about the “artificial intelligence” in a computer program in a wide variety of contexts, ranging from medical diagnosis to chess.
“Our world,” says Peter J Bentley, “is digitised… Every year new pioneers exploit the technologies of computer science for startling and imaginative applications.” Very soon, the only kind of television it will be possible to watch will be one in which the pictures are coded and transmitted digitally. The music we listen to, the books we read, the movies we watch will soon all be digital, as will our means of shopping, banking and communicating. And every time we do any of these things we are implementing an idea that Alan Turing had in 1936: the idea that it is possible to represent any effectively computable function as the operation of a Universal Turing Machine. Leibniz and von Neumann might be prophets of the digital universe that is the product of this idea, but it is clear who the creator is. Who are the worshippers in “Turing’s Cathedral”? We all are.