A recurring theme throughout science fiction dystopia is the idea of nations handing control over their arsenals to AI systems which eventually become self-aware and turn those weapons against their own government. While self-aware AI systems are still quite a way off, the idea of militaries handing increasing control over defensive weapons systems to AI is drawing ever closer to reality as modern weapons shrink the response time available to human decision-makers. The rise of hypersonic weapons, with just minutes between launch and impact, may be the development that pushes countries to begin exploring automating their missile defense systems.
The correlative deep learning systems of today are a far cry from the sentient and adaptable general intelligences of Hollywood. At the same time, today’s AI is more than capable of taking on counterstrike responsibility, monitoring a nation’s air defense and global launch monitoring systems to autonomously respond to incoming existential threats before human military decision-makers are even aware there is a problem.
The challenge, of course, is that their correlative nature means current AI systems are extraordinarily brittle, with the contours of their knowledge and their edge case failure points largely unknown and difficult to predict. This makes them capable but extremely dangerous allies when placed in command of systems that can quite literally start wars.
Despite these limitations, AI is rapidly making its way into warfare. While AI-powered "killer robots" in the form of bipedal Terminator robots marching through a battlefield or blending in with human populations are still a way off, self-targeting aerial drones that use facial recognition to locate their targets are already here.
Yet today’s tentative AI-powered weapons systems still rely on humans to define their targets or boundary conditions and are limited in the scale of lethality they can command.
As offensive weaponry becomes faster, with hypersonic weaponry potentially reducing the time between launch and impact to just minutes, militaries will come under increasing pressure to automate the counterstrike decision-making process.
When heads of state have just minutes to evaluate an incoming attack, determine an effective response strategy and then communicate that to the necessary commands, governments are likely to recognize that such time schedules no longer leave sufficient time for even the most rudimentary of reasoning about options and outcomes.
While decision windows measured in tens of minutes were workable in the era of intercontinental ballistic missiles, as hypersonic weapons compress that window to mere minutes, humans may no longer be capable of overseeing the response process.
Instead, it is likely that once such weapons move beyond the prototype stage to battlefield reality and enter service in a real conflict, governments will be faced with the uncomfortable reality that current authorization protocols are insufficient to meet the time pressures of such an environment.
In turn, this is likely to speed today's theoretical conversations about AI-driven deterrence. Smaller, more technologically advanced countries, faced with existential threats from their neighbors, are likely to be the first to place portions of their deterrence arsenals under hybrid control, with AI systems evaluating threats and presenting options and a human commander giving final approval to prevent misfires.
Over time, however, it is likely that the unique time pressure incurred by the rise of hypersonic weapons will lead even the largest nations to place portions of their deterrence arsenal under machine control, designed to strike back at the origin of an incoming attack in the absence of a countermanding order within a particular time interval.
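At its core, this "absence of a countermanding order" arrangement is a timeout-driven fail-deadly mechanism: a countdown starts when a threat is detected, a human order within the window cancels the automated response, and silence lets it fire. As a purely illustrative sketch (every name, value, and behavior here is hypothetical, not drawn from any real system), the control logic might look like:

```python
import threading

# Hypothetical illustration only: a "countermand window" during which a
# human order can cancel an otherwise-automatic response. All names and
# values are invented for this sketch.

class CountermandWindow:
    def __init__(self, window_seconds, on_expire):
        self._countermanded = threading.Event()
        self._on_expire = on_expire
        self._timer = threading.Timer(window_seconds, self._expire)

    def start(self):
        # Begin the countdown the moment a threat is classified as incoming.
        self._timer.start()

    def countermand(self):
        # A human order received within the window cancels the response.
        self._countermanded.set()
        self._timer.cancel()

    def _expire(self):
        # The window closed with no countermanding order: the automated
        # response fires. This is the step the essay warns about.
        if not self._countermanded.is_set():
            self._on_expire()
```

The danger the essay describes lives entirely in `_expire`: a brittle upstream classifier feeding a false positive into `start()` turns an ordinary software timeout into an act of war, with no human in the loop to catch the error.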
While once upon a time the thought of having a black box computer algorithm controlling missiles might have seemed the stuff of fiction, militaries have become increasingly accepting of algorithmic intervention and control.
The need for such systems to make a range of decisions about targeting, countermeasures and responding to what is likely an extremely fluid environment means they will almost certainly incorporate some amount of deep learning in their programming.
It is worth noting that early AI deterrence prototypes might initially be deployed in command of cyberweapons rather than physical ones, though the interconnection of the civilian and military worlds means even a military-focused cyberweapon is likely to cause very real harm to civilians, potentially even leading to widespread casualties.
Putting this all together, today's militaries have been rightfully cautious in deploying correlative deep learning systems in command of their deterrence arsenals. Hypersonic weapons, with their launch-to-impact delays of just minutes, may finally be the development that pushes governments to begin handing over control of portions of their kinetic or cyberweapon arsenals to deep learning systems, just as science fiction novelists have been warning us for years. The only consolation is that today's systems are far from self-aware. Unfortunately, they are also far from stable.