Analysis: How AI will impact deterrence

Despite AI’s potential to enhance military capabilities by improving situational awareness, precision targeting, and rapid decisionmaking, the technology cannot eradicate the security dilemma rooted in systemic international uncertainty.

In the realm of defense and security, Artificial Intelligence (AI) is likely to transform the practices of deterrence and coercion, with at least three complementary effects on how states calculate power, perception, and persuasion.

The substantial investments in AI across governments, private industry, and academia underscore its pivotal role. Much of the discussion, however, falls into narratives portraying either Terminator-style killer robots or utopian panaceas. Such extreme framings leave questions about AI’s potential influence on key strategic issues unanswered.

Therefore, a conversation about the ways in which AI will alter the deterrence and coercion equation and the ways to address the strategic challenges this raises is essential.

At its core, deterrence is about influencing an adversary’s behavior through the threat of punishment or retaliation. The goal of deterrence is to convince an opponent to forgo a particular action by instilling a fear of consequences, thereby manipulating their cost-benefit calculations. While deterrence aims to prevent an opponent from taking a specific action in the future, compellence seeks to force a change in the opponent’s present behavior.
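
One way to make this cost-benefit logic concrete is the rational-deterrence inequality familiar from the academic literature. The formalization below is a common textbook rendering offered for illustration; it is not a formula drawn from this analysis.

```latex
% A common textbook formalization of the deterrence calculus
% (an illustrative rendering, not the author's own formula).
% p : challenger's estimated probability that its action succeeds
% B : benefit if the action succeeds
% C : cost the defender's threatened retaliation would impose
% The challenger is deterred when acting is worth less than the status quo:
\[
  p\,B \;-\; (1 - p)\,C \;<\; U_{\text{status quo}}
\]
% Deterrence therefore operates on the estimates: raising C (threatened
% punishment) or lowering p (denial) until the inequality holds.
```

AI enters this picture through the estimates themselves: it changes how confidently each side can assess p and C, and how credibly it can signal them.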

Both fall under the broader umbrella of coercion. Actors engaged in this dynamic must carefully consider how to communicate threats to their adversaries to make them reconsider their willingness to undertake specific actions. Each move and countermove in the coercion calculus carries significant escalatory risks that can trigger unintended consequences. Hence, decisionmakers must weigh each step judiciously, drawing on history, psychology, and context to communicate credible threats that dissuade adversaries from crossing red lines.

Let’s look at each of the essential elements of coercion: power, perception, and persuasion.

Power has several dimensions. An actor’s military capabilities, economic wealth, technical advancements, diplomatic relations, natural resources, and cultural influence are among them. Besides actual power, the ability to signal its possession is critical. As Thomas Hobbes states in Leviathan, the “reputation of power is power.”

Hobbes’ conception remains relevant because power cuts across more than hard capabilities. It also informs our perceptions of others, including their fears, ideologies, motives, and incentives, as well as the means actors use to persuade one another to get what they want.

However, this dynamic interaction of power, which drives cooperation, competition, and conflict, is likely to become increasingly volatile due to the ambiguity AI will inject into decisionmakers’ interpretations of an actor’s defensive or offensive ambitions. For instance, if an actor already perceives the other as malign, leveraging AI is likely to reinforce the bias that a competitor’s military posture increasingly reflects an offensive rather than a defensive orientation. Reinforced biases, in turn, will make it harder for diplomacy to de-escalate tensions.

AI’s inherent lack of explainability, even in benign applications, poses a significant challenge as it becomes increasingly integrated into military capabilities that have the power to do immense harm. Decisionmakers will have to grapple with interpreting their counterparts’ offensive-defensive equation amid this ambiguity.

For instance, imagine if an AI Intelligence, Surveillance, and Reconnaissance (ISR) suite tracking Chinese military exercises in the South China Sea assessed the exercises to be a prelude to an attack on Taiwan and recommended that the United States deploy carrier groups to deter China’s move. U.S. decisionmakers, trusting the recommendation because the AI ISR suite has processed far more data than humans could, act on it. However, Beijing cannot be sure whether the U.S. move is a response to its military exercises or is intended for other purposes. The Chinese leadership is also unsure how the United States arrived at this decision, what its intentions are, and to what extent the decision reflects the AI’s advice versus human cognition, which adds more fog to its interpretation of American strategic motives. Such a dynamic would amplify misperceptions and make it inherently harder to forestall a dangerous spiral toward kinetic conflict.

Another pressing concern is whether AI could exacerbate the formation of enemy images, prompting worst-case scenario assessments used to justify punishment or violence. This risk is not hypothetical; biased data in data-driven policing has resulted in disproportionate targeting of minorities. In the military domain, algorithmic bias, stemming from data collection, training, and application, could have lethal consequences. Humans may shape AI, but the novel technology may, in turn, shape their future decisionmaking.
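
To see how bias can enter before any model is even trained, consider a minimal sketch in Python. The numbers and scenario are hypothetical, not drawn from any real system: two groups have identical underlying threat rates, but one is surveilled far more heavily, so a naive scorer built on the raw logs concludes it is several times more dangerous.

```python
# Minimal sketch (hypothetical numbers): how a sampling bias at the data
# collection stage propagates into a naive threat-scoring model.
import random

random.seed(0)
TRUE_THREAT_RATE = 0.05  # identical for both groups by construction

def collect_logs(group_size: int, surveillance_rate: float) -> list[int]:
    """Data collection step: incidents are only recorded for individuals
    who were actually surveilled, so coverage shapes the dataset."""
    logs = []
    for _ in range(group_size):
        if random.random() < surveillance_rate:
            logs.append(1 if random.random() < TRUE_THREAT_RATE else 0)
    return logs

# Hypothetical setup: group A is surveilled three times as heavily as B.
logs_a = collect_logs(10_000, surveillance_rate=0.9)
logs_b = collect_logs(10_000, surveillance_rate=0.3)

# A naive "threat score" that counts recorded incidents instead of
# normalizing by coverage inherits the collection bias wholesale:
print(f"recorded incidents: A={sum(logs_a)}, B={sum(logs_b)}")  # A looks ~3x worse

# Normalizing by the number of observations recovers the truth:
print(f"actual rates: A={sum(logs_a)/len(logs_a):.3f}, "
      f"B={sum(logs_b)/len(logs_b):.3f}")  # both ~0.05
```

The bias here is injected before training begins; a model fit to the raw counts would faithfully learn and reproduce it.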

The permanence of uncertainty in the international state system means that perceptions will remain prejudiced. No technical fix, including AI, can override these deep human insecurities. Cognitive images, meaning an actor’s perception of its counterpart, cannot be reduced to data, no matter how sophisticated the multi-vector datasets feeding an AI capability are, partly because data cannot capture the unique feel of any particular situation.

So, despite AI’s potential to enhance military capabilities by improving situational awareness, precision targeting, and rapid decisionmaking, it cannot eradicate the security dilemma rooted in systemic international uncertainty. At best, the increased adoption of AI in political, defense, and military structures worldwide will leave states’ perceptions of one another roughly where they are today.

However, we should also be prepared for greater volatility as states race to get ahead of their competitors, convinced that AI could advance their standing in the international state system, thereby amplifying the security dilemma. Since states can never truly know a competitor’s intentions, they tend to prepare for the worst.

A central challenge lies in effectively communicating algorithm-based capabilities. There is no way to measure AI capability the way one counts physical weapons platforms such as tanks, missiles, or submarines, which only deepens uncertainty in deterrence terms.

Third, the art of persuasion is also likely to become more complex with the adoption of AI. AI systems have already demonstrated their power to persuade humans to buy products, watch videos, and fall deeper into their echo chambers. As these systems become more personalized, ubiquitous, and accessible, including in highly classified and sensitive environments, there is a risk that they will feed decisionmakers’ biases, shaping their perceived realities and courses of action.

Civilian and military leaders like to believe they are in control of their information environments. Still, AI could qualitatively change their experiences, as they, too, would be subject to varying degrees of powerful misinformation and disinformation campaigns from their opponents. Hence, our engagement with AI and AI-driven persuasion tools will likely affect our own information environment, impacting how we practice and respond to the dynamics of coercion.

The increasing adoption of AI in the military domain poses significant challenges to deterrence practices. AI’s lack of explainability makes it difficult for decisionmakers to accurately interpret their counterparts’ intentions, heightening the risk of misperception and escalation. Early AI adoption may reinforce enemy images and biases, fostering mistrust and potentially sparking conflict. While AI enhances a broad spectrum of military capabilities, it cannot eliminate the underlying insecurity that fuels security dilemmas in interstate relations. As states vie for AI-driven strategic advantages, volatility and the risk of escalation increase. Ultimately, the tragedy of uncertainty and fear underscores the need for cautious policymaking as AI becomes more prevalent in our war machinery.
