
Why the U.S. Needs the UN in the Age of Artificial Intelligence


On Friday, March 20, the White House announced a sweeping national AI strategy, casting artificial intelligence as a defining pillar of U.S. economic strength and security. 

At its core, the strategy is a call for coherence: a single national framework built around six priorities, including protecting children, safeguarding communities, supporting creators, defending free speech and preparing the workforce. It seeks to move away from the growing patchwork of state-level rules, warning that inconsistency at home could undermine U.S. leadership abroad.  

Yet that tension – between rapid innovation and lagging governance – is a challenge no single national strategy can resolve on its own.

Last month, a UN Foundation article zoomed out to ask a fundamental question framing this very moment: “What does credible, inclusive global AI governance look like?”    

The urgency is hard to overstate. 

Early 2026 has seen everything from new data center agreements reaching U.S. states to very public clashes between the Pentagon and AI firm Anthropic over whether to loosen safety guardrails on the company's systems.

Because the AI future isn’t coming – it’s already here. It’s fast-moving, high-stakes and only partially governed. 

The Domestic Governance Gap

Since ChatGPT debuted in 2022, America has led the world in AI innovation – from developing advanced models and chips to building cloud infrastructure. But until now, Washington has lacked a clear national rulebook – and key questions about how the new framework will be implemented remain unresolved.

Congressional concern is not the problem. Lawmakers across the spectrum agree AI poses real risks, including privacy violations, election interference and national security threats. In 2025 alone, President Trump issued seven AI-related executive orders and Congress introduced more than 150 AI-related bills. None, however, became law, leaving the new framework dependent on whether Congress can translate strategy into statute.

With stalemate at the federal level, states forged ahead. Last year, every U.S. state introduced AI legislation, and dozens enacted new rules governing issues such as algorithmic bias, automated decision-making and intellectual property. The result has been a patchwork of policies that vary widely across the country, forcing tech companies to tailor their operations to each jurisdiction – even though the technologies themselves operate across borders. The Administration’s new framework explicitly seeks to end that patchwork, but doing so will require aligning federal ambition with state-level reality.

At the same time, even with a new federal framework taking shape, tensions between government and industry are intensifying. Defense officials want broader military AI integration, while some companies resist relaxing safeguards that could enable autonomous weapons or mass surveillance. Meanwhile, everyday uses — from workplace monitoring to synthetic political videos — are raising ethical questions faster than policymakers can answer them. 

Public opinion reflects the uncertainty. Americans want AI’s benefits but also strong protections, and many doubt either government or industry can strike the right balance alone. A 2025 Pew Research Center survey found experts broadly optimistic about AI’s long-term impact, while the public remains wary; most Americans are more concerned than excited, and only about one-quarter expect personal benefit. Majorities also worry regulation will fall short rather than go too far. 

Low trust in institutions compounds the problem. Many Americans doubt Washington can regulate AI effectively and question whether tech companies will police themselves.

Where the United Nations Fits

It’s against this backdrop — a new national strategy, but persistent fragmentation — that the United Nations has a key role to play.

The UN is working to coordinate national approaches using a model that has succeeded in other high-stakes domains such as aviation safety, telecommunications and nuclear nonproliferation: national laws anchored by shared international standards. 

The cornerstone of these efforts is the Global Digital Compact, negotiated in 2024. While not legally binding, it establishes common principles for safe and trustworthy digital technologies, including AI. Such frameworks give governments a baseline for policymaking across vastly different political systems. 

Two additional initiatives are emerging. An Independent International Scientific Panel on AI would provide authoritative assessments of technological advances and risks – a global source of expertise for governments struggling to keep pace. A Global Dialogue on AI Governance would create a standing forum where governments, companies and researchers can coordinate responses as new challenges arise. 

And for Washington, engagement — and, more importantly, U.S. leadership — in these UN initiatives offers tangible benefits.

First, it reduces fragmentation. Without coordination, companies can relocate to jurisdictions with weaker oversight, standards can diverge to the point that systems cannot operate across markets, and countries with limited regulatory capacity can become testing grounds for high-risk technologies.

Second, global standards shape market access. If rules are written without U.S. participation, American firms may be forced to comply with frameworks designed elsewhere. Engagement helps ensure international norms reflect U.S. interests, technological realities and democratic values. 

Third, cooperation enhances national security. Shared expectations for military AI, cyber operations and autonomous systems can reduce the risk of miscalculation or escalation. Coordinated responses to disinformation and synthetic media are especially critical, since election interference and online manipulation don’t respect borders. 

Finally, engagement preserves U.S. influence at a moment when the rules of the AI age are still being written. If Washington steps back from international forums, others will fill the vacuum – shaping standards and technical requirements that could disadvantage American companies, dilute democratic safeguards and embed values at odds with U.S. interests. 

Engagement Strengthens American Power

The United States faces a paradox: it leads the world in AI innovation but is only beginning to build a comprehensive governance strategy, even as AI’s impacts cross every frontier. Working alongside the UN ensures AI governance reflects American interests and values. 

Because in a world where AI cannot be governed by any nation alone, engagement is not about ceding authority. It’s about multiplying it.