Impact of AI on the Security Dilemma Between the US and China


Recent breakthroughs in machine learning and artificial intelligence (A.I.) have prompted breathless speculation about their national security applications. Yet most of that speculation has focused narrowly on the implications for autonomous weapons systems, rather than on the broader security environment. Apart from Michael Horowitz and a handful of others, few scholars have sketched out how A.I. might affect core questions of international relations and foreign policy.

One key challenge stands out: What influence will A.I. have on security dilemmas between great powers? With the two leading producers of A.I., the United States and China, already eyeing each other warily, the question is far from an idle one. If we are to maintain a stable international order, we need to better understand how artificial intelligence may exacerbate the security dilemma—and what to do about it.

THE SECURITY DILEMMA

Political scientists have grappled with the security dilemma ever since John Herz first formulated it in 1950. Fresh from two world wars, scholars of international relations puzzled over a disturbing possibility: What if war could break out even when neither side wanted it? If one country wasn’t sure about the military intentions and capabilities of its rival, then it would be rational for that country to stockpile weapons and build up its military in response. The rival might take that precaution as a sign of aggression and respond in kind, sparking further military build-ups and setting the two countries on a path toward war. In essence, the quest for security can make a state less secure.

From the longbow to nuclear weapons, major developments in military technology have always compounded the security dilemma. New technologies introduce two forms of uncertainty: how an advance will be used, and how powerful it will prove to be. In the 1930s, for instance, each major power knew the general capabilities of radar, mechanized artillery, and aircraft. But what no one knew for sure—at least not until Germany blitzkrieged its way through Poland and France—was how they would be used in battle. Likewise, early in the Cold War, both the United States and the Soviet Union worried that the other might develop nuclear missiles more powerful than their own. The result was a nuclear arms race.

Artificial intelligence introduces both forms of uncertainty. No one yet knows exactly how A.I.-enabled weapons will be used on the battlefield, much less how powerful those weapons will be.

At the tactical level, A.I. introduces significant uncertainty by virtue of being an enabling technology. Rather than constituting a single weapon system itself, A.I. is being built into a wide variety of weapons systems and core infrastructure. Tanks, artillery, aircraft, submarines—versions of each can already detect objects and targets on their own and maneuver accordingly. A.I. is also being deployed within command-and-control centers and logistical infrastructure. Yet it’s unclear how those innovations will change the nature of conflict. What effect will swarms of unmanned submarines have on naval warfare? What happens when today’s commodity A.I. isn’t just bolted onto existing weaponry and command-and-control centers, but baked into them from the bottom up?

Which military will do the best job of integrating A.I. into its weapons systems and tactics, and how much of a battlefield advantage will it confer? Despite the rampant speculation about these questions, answers are still elusive—and, to some extent, beside the point. From the perspective of a military strategist, what matters most is that the questions need to be asked at all. The prospect that a rival power might use A.I. weapons systems in innovative and unexpected ways is enough to exacerbate existing security concerns.