
The U.S. Has Much to Gain from Global Governance on Artificial Intelligence 

By Liz Métraux and Faith Leslie

A recent article in The New York Times featured the latest tool in Ukraine’s effort to demine a nation now riddled with two million unexploded remnants of Russia’s illegal war: AI demining drones. In a country once known as the breadbasket of Europe, a full third of farmland is contaminated with mines. That AI is being used as a force for good in Ukraine could make all the difference for our ally and the global food system that relies on Ukraine’s agricultural production. 

This is just one of countless examples of the invaluable and evolving role of AI, which now influences almost every facet of our daily lives – from shaping social media feeds to diagnosing disease. For better or worse, the world is increasingly running on bots.

Those bots need guardrails. 

Shaping the Digital Future

Although the UN has long facilitated global efforts to advance and safeguard digital technology, it was just a few years ago that the UN convened its first series of consultations on AI. These discussions brought together leaders across government, civil society and industry – even (perhaps especially) youth – to help understand how AI is shaping both the online and offline future. The meetings yielded a pledge by UN Member States to work toward a set of shared principles for an “open, free and secure digital future for all.” 

Those shared principles were recently codified in the "Global Digital Compact" (GDC), released on September 21 during the much-anticipated Summit of the Future. Passed by consensus through the UN General Assembly, the GDC is the first comprehensive global framework for digital cooperation and AI governance. It requires countries to take action on issues of data governance by 2030 – efforts already underway, as the Summit's Action Day saw stakeholders pledge $1.05 billion to advance digital inclusion. As President Biden stated in his final address to the General Assembly, efforts like the GDC support the "urgent effort to ensure AI's safety, security and trustworthiness."

Like many global agreements, myths about the reach (and overreach) of this digital declaration abound – not, however, because American policymakers or the U.S. private sector necessarily disagree with its aims. Last October, the Biden administration issued an Executive Order on "Safe, Secure and Trustworthy Artificial Intelligence" that formed the basis of the UN General Assembly's first-ever AI resolution, adopted just months later; that resolution, in turn, informs the recommendations contained in the GDC. These recommendations cover everything from efforts to protect privacy and close the digital gap, to innovation-friendly practices and strategies to disarm the dangerous mis- and disinformation that threatens democracy and propels political propaganda (think Russia, Iran and Venezuela). The resolution was led by the U.S., with co-sponsorship from 125 Member States. 

And the impacts of U.S. leadership on this issue don’t stop there. A few days following the GDC’s launch, the Freedom Online Coalition (FOC) announced further support for expanded AI oversight in alignment with UN human rights standards. The FOC—composed of 41 governments including the United States—operates based on a series of UN Human Rights Council resolutions underlining the need for a human-rights based approach to internet governance. Not only is the U.S. a member of the coalition, but it held the FOC Chair in 2023 and currently stands on the FOC’s Steering Committee.

The fact is that U.S. policy on AI and UN recommendations are actually largely aligned.

Rather, much of the concern about AI language in agreements like the GDC seems to stem from a perception that they are being shaped by foreign actors with whom the U.S. holds opposing views, including nations with long histories of using technology to surveil and suppress their populations. The irony in this argument, however, is that we only risk having efforts at oversight run afoul of U.S. values and policies if the U.S. does, in fact, cede our participation in the post-GDC process to adversarial Member States. 

Yet many members of Congress are suggesting we do exactly that: cut off our nose to spite our face. Withdrawing from the process would simply transfer authority for writing the rules of AI from entrepreneurs and policymakers in the U.S. to nations like China. 

To be fair, it's unsurprising that the chorus of opponents to the GDC is growing louder. In an age of political hyper-nationalism, multilateral engagement is, sadly, a favorite scapegoat. But one only has to look at the example of AI drones in the quest to demine a critical ally to see that good AI is also good national security policy. And good AI regulation is increasingly essential as a tool in America's diplomatic toolbox to maintain open, secure and democratic digital spaces. 

What’s in the best interest of the U.S. in shaping the future of AI is full participation. In doing so, we keep adversaries at bay, bring allies close – and keep Americans safer. As President Biden said in his UN address, “There may well be no greater test of our leadership than how we deal with AI.” So let’s step up to the challenge.

Liz Métraux served on the Advisory Board of the UN Internet Governance Forum during the Arab Spring; Faith Leslie researches the intersection of misinformation and mass violence.