Diplomatic and AI hallucinations: How can thinking outside the box help solve global problems?

Published on 29 September 2023
Updated on 19 March 2024

Last week, as the corridors of the UN General Assembly (UNGA) buzzed with the chatter of global leaders, our team at Diplo delved into an unusual experiment: the analysis of countries’ statements using the ‘hybrid intelligence’ approach of combining artificial and human intelligence. Some of our findings were more than just intriguing; they were paradoxical. A dash of AI hallucinations could spark creative problem-solving and novel approaches to humanity’s challenges, such as the rising number of conflicts, climate change, and controlling AI itself.

[Image: Interface of Diplo's AI reporting system with three options: Summary, AI Talks, and Statements]

AI hallucinations

Traditionally a term from psychology and literature, ‘hallucination’ has gained a new use: describing the fabrication of facts and the distortion of reality by AI platforms such as ChatGPT and Bard. The risk of hallucination is built into the way AI works. An AI system makes informed guesses that have a high probability of hitting the mark; it does not provide logical or factual certainty. Yet its guesses are highly plausible and realistic, as we can see from the answers and stories provided by ChatGPT.

Diplomatic hallucinations 

Hallucination could also describe some features of diplomatic language, which often relies on proverbial vagueness to avoid uncomfortable truths or to reconcile national interests with prevailing global values. Sometimes, diplomats must slip into a hallucination, reinterpreting facts and reality to create the constructive ambiguity that is critical to reaching a negotiated compromise. Practically speaking, they use metaphors, analogies, ambiguities, and other linguistic techniques as essential negotiation tools.

The UNGA: In vivo lab for language, diplomacy, and AI 

Every September, the UNGA offers a unique lab for studying the interplay between language and diplomacy, with extensive use of metaphors, clichés, signalling, and nuanced language as global leaders address the audience in the UN Main Hall and, equally importantly, the public back home. This year, the UNGA in vivo lab gained new relevance as a testbed for large language models (LLMs) applied to diplomatic speeches. Diplo’s AI, supported by human expertise, sifted through this linguistic treasure trove, capturing the essence of each statement while identifying patterns of diplomatic and AI hallucinations.

The double-edged sword of AI hallucinations

As we reported on UNGA speeches and debates, we gained new insights into both AI and diplomacy. In most instances, the AI hallucinated by simply reformulating existing diplomatic jargon into new phrases. While these remixes did not offer much new insight, every now and then the AI hallucinated unexpected ideas, such as:

  • Using physical red buttons at AI facilities to switch off the electricity in case AI gets out of human control
  • Involving taxi drivers in UN negotiations on AI to inject grassroots wisdom, practical solutions, and socio-economic intelligence 
  • Using the intelligence of criminals to contain AI, since the risk of crime for humanity is lower than the risk of human extinction posed by AI

The dilemma of perfection

So, we’re confronted with a dilemma about when and whether we should allow AI to hallucinate. As a default, we need AI to reflect reality and be as accurate as possible for most uses. For example, summaries of UN discussions should faithfully mirror what was said. Last week, as we fine-tuned our AI to minimise hallucinations, the quality of our reporting from the UNGA steadily improved. But in that quest for perfection, we sacrificed AI’s probabilistic ability to make up facts and think outside the box. It made us wonder whether, in some cases, we might want to allow AI to hallucinate.
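This accuracy-versus-creativity dial has a concrete counterpart in how LLMs generate text: the sampling ‘temperature’. The sketch below (plain Python with made-up candidate words and illustrative scores, not Diplo’s actual reporting system) shows how a low temperature concentrates probability on the safest continuation, while a high temperature gives unlikely, potentially ‘hallucinatory’ continuations a real chance of being picked.

```python
import math

def softmax(logits, temperature=1.0):
    """Turn raw model scores into a probability distribution.
    Low temperature sharpens it (safe, predictable picks);
    high temperature flattens it (surprising picks become likelier)."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-word candidates with illustrative scores,
# not output from any real model
candidates = ["regulation", "cooperation", "red buttons", "taxi drivers"]
logits = [4.0, 3.5, 1.0, 0.2]

cautious = softmax(logits, temperature=0.3)  # near-deterministic reporting mode
creative = softmax(logits, temperature=2.0)  # long tail gets a real chance

for word, p_cautious, p_creative in zip(candidates, cautious, creative):
    print(f"{word:12s}  T=0.3: {p_cautious:.3f}   T=2.0: {p_creative:.3f}")
```

At low temperature the top-scoring word dominates almost completely; at high temperature even the lowest-scoring candidate retains several percent of the probability mass, which is exactly the behaviour a creative session might want and a summary pipeline would suppress.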

Think of it this way: a growing number of so-called creative sessions aim to stimulate unconventional thinking, be they brainstorming sessions, idea labs, hackathons, incubators, unconferences, innovator cafés, paradigm-shifting seminars, ingenuity forums, Zen koans, or thought experiments. They are used to trigger shifts in habitual thinking by identifying paradoxes, juxtaposing ideas, and finding novel solutions to existing problems.

Why not use AI’s imperfections to help us think outside the box? Thus, perhaps some AI systems should be left to hallucinate intentionally.

Who knows? We may discover that the undiscovered genius of AI and humans working together lies in their imperfections.
