While the world is currently focused on how best to regulate artificial intelligence (AI), whether through UK prime minister Rishi Sunak’s call for his country to take the lead in AI regulation or the European Union’s AI Act, important geopolitical tensions are brewing with this technology at the center. AI is already having a transformative impact on the world, driving economic growth and playing a growing role in state conflicts. It is essential to understand these emerging tensions, but it is equally important to recognize that in many ways the issues surrounding AI are not new.
For centuries, governments have been in the business of collecting, storing, and exploiting information to gain competitive advantages over rivals. A government’s ability to acquire and exploit data is key to its long-term sustainability, and the government that can use information most efficiently gains an edge over its competitors. AI is simply the next stage of this long-running competition, driven by the world’s demand for information and knowledge.
From this competition, the most important unresolved geopolitical question related to AI is emerging: will the future of AI be authoritarian, democratic, or both?
Authoritarian actors want to build an authoritarian future for AI, viewing it as a strategic tool that enhances their ability to enforce control domestically and internationally.
Such ambitions have been evident for years, illustrated by the collaborative efforts of Russia and China to promote a more authoritarian and centralized Internet. The Internet is a disruptive technology governed by global initiatives similar in nature to those now being discussed for AI. Yet it is precisely these global initiatives that illiberal states like Russia use to drive and support the development of authoritarian norms for the Internet.
Democratic actors, in turn, are mobilizing to develop global AI initiatives that promote the responsible use of AI. Yet while it is essential that governments collectively present a unified front to curtail growing authoritarian interest and power in the world’s technological development, global regulatory regimes are likely not enough on their own.
It is precisely for this reason that success in this geopolitical conflict over AI will be determined not by a new global regulatory regime but by three factors: 1) technological dominance and digital innovation; 2) ownership of AI’s infrastructure; and 3) strategic integration of private sector industry.
Technological Dominance and Digital Innovation
AI is a technology, and those who build and use the best AI systems will maintain a geopolitical advantage. To remain competitive, innovation is essential. Most AI-related innovation will be driven by the private sector, as companies compete against their economic rivals for dominant positions in the world’s markets.
AI is rapidly reshaping our world, often in undesirable ways. For citizens, AI brings risks of bias and discrimination, but it has also amplified several serious national security threats, from bioweapons to cyberattacks. In many countries, these risks are taken seriously, together with their associated moral and ethical issues.
AI development supported by authoritarian regimes, however, is not stymied by the same concerns. Instead, these authoritarian actors enjoy a competitive advantage, driven by their ability to gather data and deploy AI systems in ways most states never could, especially when it comes to obtaining and exploiting sensitive personal or biometric data. Combined with strategic initiatives and funding to drive the creation of more authoritarian AI systems, this advantage could allow the West’s current dominance of the AI industry to be challenged.
Ownership of AI’s Infrastructure
Unfortunately, AI systems are often discussed in terms of models, algorithms, or data, as if they were something ethereal: real and present in the world, but not tangible. This could not be further from the truth. Instead, one must think of cables, concrete, and steel. AI systems are part of an increasingly global economic network.
AI systems are trained on data that has been carefully annotated by humans, many of them living in developing countries. This data travels the globe across the world’s interconnected, complex tangle of undersea cables. The algorithms and systems that use this data are powered by servers and GPUs housed in vast data centers built of steel and concrete. Each of these is a physical, tangible asset, and control over them is essential for coming out ahead in AI.
Currently, the United States has a clear advantage when it comes to AI’s infrastructure. The undersea cables that carry the world’s internet traffic and data flows are predominantly owned by American private sector companies, as are a majority of the world’s data and cloud computing centers.
China, however, is tightening its control over the Internet’s subsea cable infrastructure, a push actively resisted by the United States and its Western allies. As part of China’s “Digital Silk Road” project, Chinese telecommunication providers are investing rapidly in new data centers, especially in regions where Western providers have a relatively limited presence. China was also developing into a world leader in the production and manufacturing of computer chips, which led the United States to impose new rules and regulations to push back and prevent this advantage from materializing.
Strategic Integration of Private Sector Industry
In all cases, large private sector corporations are, and will continue to be, an essential part of the AI geopolitical question. These corporations own the infrastructure and are largely responsible for driving the development and innovation of AI. Due to the centrality of these companies in the world’s emerging AI supply networks, they are well-positioned to set the rules of the game: controlling which states, governments, or companies can use their infrastructure to develop, train, or deploy AI.
This new type of power is described by Henry Farrell and Abraham Newman in their 2019 article, “Weaponized Interdependence: How Global Economic Networks Shape State Coercion,” in which they argue that states can “weaponize networks to gather information or choke off economic and information flows, discover and exploit vulnerabilities, compel policy change, and deter unwanted actions.” The private sector is a critical component of these networks.
To lead in AI, it will be essential for governments to support the private sector and include them as a critical component of a broader geopolitical strategy.
The Coming World of AI
What does this tell us about the future of AI for the world? At a minimum, we can expect states to devote large amounts of resources to developing and supporting their domestic AI industries.
We will see governments experiment with developing and procuring their own infrastructure for training and deploying AI systems. This is already happening, as seen most clearly in the EU’s current push for digital sovereignty and its support of the Gaia-X initiative.
Ultimately, new investments in the AI and technology industry, driven by the emerging geopolitical tensions over AI, will radically transform our world and how we experience it. Whether that future will be democratic, authoritarian, or a combination of the two is, at this moment, still being written.