Politics

AI Coordination Stalls as Governments Disagree on What AI Is

U.S. and global policymakers want AI coordination, but disagreement over what AI is—and how fast it changes—keeps action fragmented.

A global push for AI coordination is running into a stubborn, almost philosophical obstacle: governments can’t agree on what artificial intelligence actually is.

That mismatch sits at the center of today’s international paradox. In speeches and summits, countries broadly call for engagement or coordination—whether through export-promotion-style deals, governance frameworks, or other forms of international oversight. Yet when it comes to concrete, enforceable steps, the momentum repeatedly stalls. Even commitments discussed at AI summits held in India have remained voluntary, and efforts to enforce or build on earlier pledges from the Seoul AI Summit have been inconsistent.

One reason the debate is so difficult to translate into policy is that “AI” has become an umbrella term with competing meanings. Some people associate AI primarily with systems like ChatGPT and large language models. Others use the label to describe superintelligent technologies that would surpass human capabilities. Still others apply “AI” to more everyday machine-learning tools. That definitional problem can make it harder for governments and the public to pin down which specific technologies should be governed—and what rules would even apply.

But beyond definitions, there is a deeper epistemic disagreement about the kind of technology AI is likely to become and how fast it might change society. On one end of the debate are economists such as Daron Acemoglu, who argues AI’s economic effect will be “nontrivial but modest,” concentrated mainly in certain white-collar areas over decades. On the other end is a view associated with figures including Dario Amodei, who argues that AI could rapidly reshape most sectors of the economy and society within a short timeframe, with the possibility of superintelligence arriving in the next few years.

These rival timelines do not map neatly onto whether AI will be good or bad. Even those who worry about cybersecurity risks from advanced systems may share broad expectations about where progress is heading. The key dividing line is how governments and policymakers think AI will evolve—whether the transformation stays localized and incremental or accelerates toward civilization-scale implications.

That difference in expectations is not just a matter of prediction; it shapes strategy and foreign-policy choices. If a government concludes that superintelligence capable of transforming human civilization is imminent, it has strong incentives to move quickly toward whoever controls that technology. If, instead, it believes AI is likely to be important but slower-moving and more sectoral, it may prioritize integrating AI across major parts of the domestic economy—pursuing competitiveness while trying to retain control over essential infrastructure so foreign actors cannot simply “turn off” access.

A useful way to frame this competition, the report argues, is to look at two underlying axes. One axis is how countries perceive the speed and scale of the transformation AI could drive, ranging from localized or sectoral effects to potential civilization-wide change. The second axis is how self-sufficient they believe their domestic AI capabilities are—spanning the view that a country controls its AI pipeline from chips to models, all the way to a stance of dependency on leading foreign firms, including those in the United States or China.

Within this framework, the report places the United States in a camp that combines relatively fast expectations about AI’s potential with a sense that key domestic capabilities are largely under national control. It points to U.S. frontier labs—Anthropic, OpenAI, and Google DeepMind—as well as influential voices in U.S. political life, including some members of the Biden administration and Sen. Bernie Sanders. This group expects that AI could become—though it is not guaranteed to—a civilizationally important force in a relatively short period, potentially reaching artificial general intelligence or even superintelligence that would shift the global balance of power across industries.

Even inside that faster-moving camp, the report notes internal divisions. Some emphasize the need for testing advanced AI systems, while others are more optimistic about AI’s near-term impact. Still, the underlying epistemic belief about the pace of progress is described as broadly shared.

Another grouping highlighted in the analysis includes parts of Silicon Valley and certain actors in China’s government. The report says China watchers, including Jordan Schneider and Kyle Chan, have described these views. Here, the expectation is less about imminent AGI or superintelligence; AI is still treated as a highly important general-purpose technology, but one that could diffuse more slowly across the entire economy.

The strategy in this grouping, as described, still chases frontier capabilities—because even a slower timeline does not remove the competitive pressure—but it emphasizes accelerating the diffusion of AI capability. The report cites China’s efforts as including open-source and state-led initiatives such as the “AI Plus” plan.

The analysis further describes China’s push toward building an “agentic economy,” with local governments using systems like DeepSeek for tasks such as proofreading and consumers racing to adopt AI-enabled phone features such as “OpenClaw.” It also cautions that these groups are not monolithic; some Chinese labs, including DeepSeek, are also portrayed as pursuing AGI.

Outside the United States and China, governments are described as sharply divided on how quickly AI capabilities might become transformative. Some European scientists and civil-society actors are described as rejecting the near-term emergence of civilizationally transformative superintelligence. By contrast, the report points to governments that foresee transformative AI arriving more rapidly, including the United Arab Emirates and the former Sunak government in the United Kingdom.

Even within that broader “slower” majority outside America and China, the report says countries differ in their predictions not just about when AI advances will arrive, but about how quickly those capabilities will spread through economies and societies. Those competing time horizons translate into different policy agendas—which risks should be addressed first, and which benefits policymakers think they should pursue.

The report describes how these divergences derail coordination on what to prioritize internationally. In the superintelligence-focused camp, some may want to emphasize AI-enabled cyber risks, believing fast progress could empower offensive capabilities. Others—those expecting a slower path—may dismiss those concerns and instead stress short-term harms. It cites how France’s AI summit highlighted labor disruption or cultural erasure, while earlier summits had focused more on superintelligence-related risks.

In practice, that disagreement affects not only policy recommendations but also the language countries use during coordination efforts. When governments cannot even agree on which benefits and dangers are most immediate, it becomes difficult to build a shared agenda that everyone can rally behind.

Yet the report argues that the stalemate is intensified by perceived dependency. The United States and China are said to account for roughly 90% of global AI compute and to host most leading models, leaving many other countries feeling more dependent on American and Chinese capabilities than on their own. Because AI governance is partly about who controls access, those assessments of dependency interact with each country’s beliefs about AI’s likely timeline.

This is reflected, the report says, in proposals that can look puzzling at first glance. It cites the United Arab Emirates as an example of a government that sees transformative AI capabilities arriving more quickly and has proposed an AI “marriage” with the United States—an approach framed as a way to “bandwagon” with power centers that can provide access to advanced capabilities.

In contrast, the report says other governments believe either that AI capabilities will emerge more slowly or that they have enough time to build domestic autonomy. It points to India as an example: the former head of India’s AI mission is described as saying India does not plan to chase AGI. That position is presented as consistent with substantial domestic investment, including indigenous computing infrastructure, talent-building, and the development of sovereign champions such as Sarvam.

The report’s central point is that these different epistemic views create incentives that are hard to reconcile. U.S. and Chinese officials who believe their AI ecosystems are comparatively self-sufficient, it says, have limited incentive to cede decision-making power to international bodies beyond bilateral arrangements. At the same time, countries seeking to bandwagon with either Washington or Beijing may have little reason to defect, because doing so could threaten their ability to secure access.

In the analysis, only governments that feel insufficiently equipped domestically—and believe AI progress is too fast for them to catch up on their own—are described as having stronger incentives to pursue international coordination. But coordination efforts still largely fail, the report argues, because these governments lack the capabilities today to make their participation matter in a way that changes the practical outcome of global governance.

Taken together, the report argues, the result is a “dying international ecosystem” for multilateral coordination on AI—driven not by a lack of willingness but by incompatible epistemics. The serious issues that would normally require shared planning—such as managing unexpected cross-border multi-agent interactions and creating global technical standards to support agentic economies—remain out of reach.

Whether this stalemate breaks is uncertain, the report says. One possible catalyst could be concern about the misuse of AI capabilities by nonstate actors, leading some governments to focus on preventing that proliferation. Another potential driver could be economic developments that increase the number of players needed to finance frontier model development, widening the circle of stakeholders in ways that force convergence.

Until then, the report concludes, global AI coordination likely remains stuck in a narrow window, with governments continuing to call for action while disagreeing on the foundational beliefs that would make coordinated policy feasible. In sum, the paradox is clear: even when the political will to coordinate exists, the shared language for doing so does not.

AI coordination · U.S. AI policy · international AI governance · AI definitions · epistemic disagreement · frontier models

4 Comments

  1. So they want “AI coordination,” but nobody can even agree what AI is? That feels like trying to negotiate traffic rules while arguing what a car is.

  2. Jordan Mitchell makes a fair point. The definitional mismatch is basically the whole blocker: if one country means LLMs and another means future hypothetical superintelligence, the policy tools and timelines won’t line up. Voluntary commitments also don’t help when the underlying targets are vague.

  3. Great, another summit where everyone agrees on the vibe of cooperation, then stalls the second they have to define the thing they’re regulating. Jordan Mitchell is right, and Priya Shah nailed why it’s hard to translate into enforceable steps.

  4. Honestly, if governments can’t agree whether they’re talking about chatbots or sci-fi doomsday tech, any “coordination” is just political theater. Priya Shah, that voluntary-stuff angle is probably the reason it never gets traction.
