When the Duke of Wellington surveyed the field of Waterloo on the morning of 18th June 1815, he was the inheritor of a long tradition. For millennia, commanders from Cyrus and Alexander to Caesar and Frederick the Great had used their capacity for sentient judgment to dominate a battlefield. The numbers engaged at Waterloo, the speed at which they manoeuvred and the calculus of firepower and attrition they exchanged could still be held within the cognitive powers of a single human brain. It would never happen again. And now we are entering a period in which the human brain may be removed from the realm of conflict altogether. The question then arises: who will oversee such a battlefield? And what sorts of decisions will they have to make?
By the time armies of similar size deployed again, in the 1860s on the battlefields of the American Civil War and the German wars of unification, they were fighting a different form of warfare, the product of technological change. Three 19th-century innovations had changed the face of battle for ever. First, railways allowed rapid mass transportation and the concentration of forces of unprecedented size; second, rifling gave small arms and field artillery greater range and lethality; and, third, the telegraph allowed an element of command to be exercised remotely from the battlefield. The combined effect was to create larger, less dense but more dangerous battlefields, and a form of command that owed more to the collective analytical powers of a professionalised general staff than to the native genius of a single individual. As had happened previously with the invention of the stirrup (and hence heavy cavalry) or of gunpowder, technology was not decisive in itself; what made it so was the human response.
As the inspirational effect of a single animating will was no longer sufficient to cut through the chaos of battle, alternative methods had to be found, and the Prussian General Staff, under Helmuth von Moltke, led the way. Concepts like Auftragstaktik (mission-centred command) and Schwerpunkt (point of main effort) began to appear, and eventually formed a coherent doctrine dependent not on the communicated will of the commander but on his premeditated intent. Perhaps there was something atavistically Teutonic in the embrace of chaos, but what the Prusso-German armies from 1860 to 1945 did consistently well was act intuitively in pursuit of the effects they were required to achieve rather than mechanically in response to prescriptive instructions. In doing so, they provided the definitive historical illustration of human agency shaping and improving a technological edge to bring about decisive outcomes in battle. Over the course of those 80 years, very little altered in that fundamental equation. Today the technologies are smarter and the pace of change quicker, but the equation itself may be about to change.
Since 1945, America has kept a step ahead of its military peer competitors by creating a series of “offset strategies.” “Offset” in this sense means the exploitation by the United States of its technological advantage, along with its enormous investment resources and production and manufacturing capacity, to create a strategic advantage over its competitors and adversaries: a capability to which they have no response. The first offset strategy, developed in the 1950s, saw the creation of a sophisticated nuclear armoury to counter the Soviet advantage in conventional forces. By the 1970s the Soviet Union had achieved nuclear parity, and so the US developed a second offset strategy: a generation of precision-guided munitions that restored the US’s conventional edge over its enemies and found full expression in the wars since 9/11. That edge is now being eroded by technological proliferation and massively increased investment in defence research by both Russia and China.
America therefore now seeks a third offset strategy that will allow it to fight at oceanic range, in the confined waters of the Taiwan Strait, against a prospective Chinese enemy with overwhelming advantages of scale and proximity. One way in which American advantage might be restored is to move beyond the human-dependent, semi-automated techniques of today towards a fully autonomous battlefield.
The aircraft carriers upon which much of US strategy in the Western Pacific rests are hideously vulnerable. The advantage of swarming drones firing hypersonic weapons needs little explanation, and the military benefits of autonomous systems in a conflict of this sort are self-evident. They are automatic force multipliers: fewer human warfighters are required, and the efficacy of each human committed is far greater. They can go where human beings cannot, to space and the deep ocean; they can operate at a tempo faster than humans could possibly achieve; and they reduce human casualties. They are also relatively cheap. US Department of Defense figures from 2013 put the cost of maintaining a single soldier in Afghanistan at roughly $850,000 a year; soldiers require pensions and long-term health care, robots don’t. We are still some distance from what the US Defense Science Board terms multi-agent co-ordination, comprising “multiple robots, software agents or humans,” but it may be only a matter of time before the artificial intelligence (AI) needed to facilitate it makes human agency a discretionary element of battle.
It is axiomatic that robots are more mechanically efficient than humans; equally, they are not burdened with a sense of self-preservation, nor is their judgment clouded by fear or hysteria. Yet it is precisely human fallibility that requires the intervention of the defining human characteristic, a moral sense that separates right from wrong, and this explains why the ethical implications of the autonomous battlefield are so much more contentious than its physical consequences. Indeed, a 2015 open letter calling for a ban on offensive autonomous weapons carried the signatures of such luminaries as Elon Musk, Steve Wozniak, Stephen Hawking and Noam Chomsky. For the first time, therefore, human agency may be needed on the battlefield not to take the vital tactical decisions but to weigh the vital moral ones.
So who will accept these new responsibilities, and how will they be prepared for the task? The first point to make is that none of this is an immediate prospect, and it may be that AI becomes such a ubiquitous and beneficial feature of other fields of human endeavour that we no longer fear its application in warfare. It may also be that morality co-evolves with technology. Either way, the traditional military skills of physical stamina and resilience will be of little use when machines have an infinite capacity for physical endurance. Nor will the quintessential commander’s skill of judging tactical advantage have much value when cognitive computing can integrate sensor information instantaneously. The key human input will be to make the judgments that link moral responsibility to legal consequence.
In this way, the rules-based nature of conflict will be retained. The training ground is as likely to be the seminary or the chambers as the assault course or the staff college, but the moral, legal and physical dimensions will all need to be understood if each is to give context to the others. The soldier-moralist might sit best in academia, though the evolution of relatively bloodless conflict waged between our machines and theirs might make recourse to conflict routine and war endemic.
It is difficult to negotiate the dystopian landscape of autonomous warfare, but we have been here before: if the human capacity for adaptation could accommodate both the revolution in military affairs in the 19th century and the development of nuclear weapons in the 20th, it can handle this. Where we will be in uncharted territory is in the purpose of human judgment. Sentient analysis has previously sought to maximise technology’s impact on the battlefield to bring about decisive effect; in future, the challenge might be to moderate it within clearly defined moral limits.