
AI and the Future of Public Diplomacy

Aug 22, 2023

Can artificial intelligence be a force for good in the practice of public diplomacy, despite the risks?  The short answer is a qualified “yes,” according to a distinguished group of experts during a virtual panel discussion hosted by the U.S. Advisory Commission on Public Diplomacy on June 14, 2023.

Alexander Hunt, Public Affairs Officer at the U.S. Embassy in Conakry; Jessica Brandt of the Brookings Institution; and Ilan Manor of Ben Gurion University of the Negev made a forceful case for AI’s “immense potential to support and enhance the work of public diplomacy.” See here for a full transcript of their remarks.

First, panelists argued that AI-based sentiment analysis tools can help diplomats “get a sense of prevailing perceptions of national policies or reactions to particular events.” They can also be deployed to “better understand where authoritarian narratives are taking root” so that public diplomacy practitioners know where to focus attention and resources to counter disinformation and malign influence campaigns.
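
As a rough illustration of what such sentiment analysis can look like in practice, the sketch below uses the open-source Hugging Face transformers library to score a handful of invented public comments and tally the results. The comments, and the choice of the library's default English classifier, are assumptions made for this example rather than a description of any tool the panelists named.

```python
# A minimal sketch of AI-based sentiment analysis over public commentary.
# The sample comments are invented for illustration; the pipeline's default
# English classifier stands in for whatever tool a mission might use.
from collections import Counter
from transformers import pipeline

# Load an off-the-shelf sentiment classifier (library default model).
classifier = pipeline("sentiment-analysis")

# Hypothetical public comments reacting to a policy announcement.
comments = [
    "The new scholarship program is a real opportunity for our students.",
    "Another empty promise from a foreign embassy.",
    "Curious to see whether the promised funding actually arrives this time.",
]

# Tally predicted labels to get a rough read on prevailing sentiment.
tally = Counter(result["label"] for result in classifier(comments))
for label, count in tally.most_common():
    print(f"{label}: {count}")
```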

AI’s data collection capabilities can further help diplomats to “analyze how foreign media portray” their countries’ national interests and actions, and then “fine tune” local information and outreach efforts to improve influence capabilities. AI-enabled analysis of social media engagement and commentary can also be used to assess the performance of PD-produced content.

In addition to these external analytic capabilities, the panelists noted that AI tools have the potential to streamline the curation of internal information collections through analysis of “diplomatic documents ranging from cables sent by embassies to media summaries, intelligence briefings and even diplomats’ analysis of local and global events.”
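
To make the idea of AI-assisted curation concrete, here is a minimal sketch of semantic search over a small set of invented cable-style snippets using the open-source sentence-transformers library. The documents, the query, and the model choice are illustrative assumptions, not a depiction of any State Department system.

```python
# A minimal sketch of semantic search over an internal document collection.
# The snippets, query, and model choice are illustrative assumptions only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical cable-style snippets standing in for an internal collection.
documents = [
    "Media summary: local outlets covered the ambassador's remarks on trade.",
    "Cable: host-government reaction to the new visa policy remains mixed.",
    "Briefing note: disinformation narratives about the exchange program are spreading.",
]

query = "How has the host country reacted to the visa policy change?"

# Embed the query and documents, then rank documents by cosine similarity.
doc_embeddings = model.encode(documents, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_embedding, doc_embeddings)[0]

for score, doc in sorted(zip(scores.tolist(), documents), reverse=True):
    print(f"{score:.2f}  {doc}")
```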

Panelists also pointed out that the use of ChatGPT and other machine learning tools to prepare media summaries and reactions, evaluate content impact, and analyze audience sentiment frees public diplomacy officers to focus on building relationships with key audience members: “ChatGPT is removing the drudgery of the work that we do…We have been able to…use that time to get out into the field and engage with our interlocutors.”
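
As a simple illustration of the kind of routine drafting the panelists have in mind, the sketch below sends a few invented press excerpts to OpenAI's Chat Completions API and asks for a short media summary. The excerpts, prompt, and model name are assumptions for the example, and any output would still need review by an officer.

```python
# A minimal sketch of drafting a daily media summary with a generative model.
# The excerpts, prompt, and model name are illustrative assumptions only.
# Requires the openai package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# Hypothetical excerpts from local press coverage.
excerpts = [
    "Editorial: educators praise the embassy's new scholarship program.",
    "Op-ed: critics doubt the visiting trade delegation will deliver results.",
    "News brief: broadcasters covered the ambassador's remarks on press freedom.",
]

prompt = (
    "Summarize these press excerpts in three bullet points for a daily media "
    "summary, and note the overall tone toward the embassy:\n\n"
    + "\n".join(f"- {item}" for item in excerpts)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)

# A human officer should review any machine-drafted summary before it circulates.
print(response.choices[0].message.content)
```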

Finally, the panel noted that while “AI is here to stay,” it will not replace the public diplomacy officer. “ChatGPT and generative AI are really an amplifier of humans. They’re not a replacement.”

Though optimistic about AI’s potential to benefit information and influence activities, the experts were crystal clear about the associated risks. First, ChatGPT is “not a fact database” or “reasoning engine.” It will “confidently generate content that sounds authoritative even if it’s inaccurate, or inappropriate, or biased.”

In addition to its vulnerability to bias, ChatGPT also has the potential to “create a myriad of alternate realities,” making it difficult to discern the facts. Consequently, “with respect to countries such as Russia and even China, it’s not so much about whether the message is received or not. It’s about there being so many messages and so many different depictions of reality that there is no reality anymore. If there is no reality, America isn’t right. Russia isn’t right. No one is right.”

If the spread of false information online through AI generative models is also part of “the future of public diplomacy,” as the panelists argued, what is to be done?  Several specific policy and procedural recommendations emerged from the discussion.

The panel argued that the USG should consider the development of a “StateGPT” to allow diplomats to harness the power of AI to manage internal information flows. In fact, the State Department is just beginning to use AI tools to streamline the declassification of diplomatic cables.

Panelists also reinforced the need to “model good transparency around the use of generated content,” in order to avoid “setting precedents that we wouldn’t want others to follow,” and to “ensure that we’re using apps with solid cyber security practices.” 

In addition, the panel highlighted the importance of working directly “with the people developing AI to understand what the technological landscape is going to look like a year and a half or two years from now and then building towards that model.”

As a result of the use (and abuse) of AI platforms, the information space has become “the most consequential terrain over which states are going to compete in the decades to come.” Unfortunately, a democracy’s greatest advantage in this space—that “we care about the truth”—is also its greatest vulnerability.

As one of the panelists put it, “our democracy depends on the idea that the truth is knowable, and we are competing against entities who do not need to employ a bunch of humans to go back and fact check and make sure that the content is accurate, and unbiased.”

To manage the growing asymmetry of the global information space, public diplomacy practitioners must be permitted to take advantage of AI’s potential as an information collection and processing tool—with appropriate training and precautions.

At the same time, the State Department’s tech experts must do more to enable PD practitioners to manage AI’s threats as well as its opportunities and to make tools like ChatGPT a source of power, transparency, and resilience in the information domain.
