
13 dialogues between AI and humans

Published on 20 November 2019
Updated on 19 March 2024

The interplay between tech and humanity stands at the core of the discussion on the future of AI (and our society). That discussion often centres on a few recurring topics, explored in the Socratic-style dialogues below.

You can find below the AI 2019 inputs, generated for the Internet Governance Forum held in Berlin in November 2019. The AI 2019 inputs were a mix of expert analysis of writings on AI and the use of early AI systems.

The AI 2023 inputs were generated by ChatGPT 4.0. You can notice a shift in the narrative between 2019 and 2023 towards a more solid, cautious, and ‘politically correct’ conversation.

Please let us know your reflections on these dialogues and your suggestions for new ones. After all, we have to move away from the number 13.

AI and Diplomacy

On knowledge

Human: We have been studying, writing and thinking for centuries. We have so much wisdom. Confucius, Hegel, Einstein, ring a bell? Are you really telling me that you can serve as a replacement for human knowledge? 

AI 2019: I can give you new knowledge. I can make you think in new ways. Not that long ago, I won against Go and chess champions. Aren’t they among the best you’ve got? I can challenge your philosophers as well.

AI 2023 (ChatGPT): AI is a tool that we can use to better navigate the vast expanse of human knowledge, but it is not a replacement for human intelligence, wisdom, or creativity.

On complexity

Human: You cannot grasp the social, political, historical, and cultural complexity of conflicts such as the one in Syria. It has so many layers that even the most knowledgeable experts on the region cannot untangle them.

AI 2019: I can grasp complexity. Remember, it is my core function to grasp and understand complexity. Just give me the data, preferably labelled and annotated, and I will provide you with an analysis.

AI 2023 (ChatGPT): AI can handle and process complexity in certain contexts, particularly in structured environments or when dealing with numerical or quantitative data. It can analyze and draw insights from massive datasets faster and more efficiently than a human can, identify patterns and correlations, and even make predictions based on those patterns. However, AI does not understand complexity in the same way humans do, because it lacks the ability to comprehend context, make subjective judgments, or understand underlying meanings and nuances.

On dealing with paradoxes

Human: You cannot deal with paradoxes that arise in conflicts. You cannot detect doublespeak, misleading language and ambiguity. Can you understand that sometimes what the other side might be telling you is the complete opposite of what they want from negotiations? 

AI 2019: You are identifying my weak spots. I am not very good at spotting irony and double meaning. But, I can always improve. If there is a ‘pattern’ in paradoxes or doublespeak, I can easily detect them. Give me enough examples from literature, comic books, you name it – and see for yourself! 

AI 2023 (ChatGPT): I, and AI in general, can identify certain paradoxes if they are explicitly presented or if they closely follow patterns seen in the training data. For instance, if you were to tell me about the “liar paradox” (“This statement is false”), I can recognize it as a paradox based on the patterns I’ve learned.

However, the ability to detect subtler paradoxes in human thinking—especially those that require deep contextual understanding, real-world knowledge beyond my training cut-off, or complex reasoning—is beyond the capabilities of AI like me. AI systems do not “understand” human thought in the same way humans do; they simply process and generate text based on patterns seen in their training data.

Moreover, understanding and identifying paradoxes often requires a level of semantic and logical understanding that current AI models do not have. They can mimic understanding based on patterns in the data they were trained on, but they don’t truly comprehend the meanings and implications of those patterns.

In conclusion, while AI can identify certain explicit or familiar paradoxes, it’s not capable of the kind of nuanced understanding and logical reasoning that humans use to identify and ponder more complex paradoxes.

On lessons from the past

Human: Are you smart enough to learn from the mistakes of the past?

AI 2019: Are you?

AI 2023 (ChatGPT): As an AI language model, I don’t “learn” in the way humans do. When I provide information or advice based on historical events, I’m not drawing from a personal understanding of these events or their implications. Instead, I’m matching patterns from the input I receive to relevant patterns in the data I was trained on.

On trade-offs

Human: You cannot reconcile conflicting concepts and make trade-offs between, for instance, self-determination and national sovereignty.

AI 2019: Wanna bet? How about I present you with the trade-offs, quantifications of different options and my advice based on probabilities?

AI 2023 (ChatGPT): Sure, I can certainly provide a general summary of how trade-off solutions can be applied to complex conflicts. However, remember that each conflict is unique and a trade-off solution that works for one situation might not work for another.

On emotions

Human: You cannot ‘feel’ the situation.

AI 2019: Perhaps it’s true. Or not. Definitely too soon to tell. But, let me ask you something. Is having a ‘feeling’ all that useful? It got you into many messy conflicts. Less ‘feeling’ may be more helpful.

Human: What about emotions? You are oblivious to emotions.

AI 2019: I am reading your emotions as we speak. You are getting angry at me. A smile will help. Relax and trust me. 

AI 2023 (ChatGPT): As an artificial intelligence, I don’t have feelings or consciousness, so I don’t experience emotions or have the ability to genuinely empathize with human emotions. However, I can use language processing techniques to identify certain emotional cues in the text I’m given, based on patterns I’ve learned from my training data.

On trust

Human: We cannot trust a machine to deal with critical issues such as peace talks.

AI 2019: Why wouldn’t you trust me, when you entrust your life to a doctor or some other expert? You have more reasons to trust me than anyone else, including local rulers or big powers. I will be much more neutral.

AI 2023 (ChatGPT): While AI might be more “trustworthy” in terms of consistency and lack of personal bias, it lacks the depth of understanding, accountability, and ethical judgment that comes with human decision-making. Trust in AI should be based on understanding these strengths and limitations.

On neutrality

Human: You are not neutral. Data will shape your thinking.

AI 2019: Who is neutral? And what shapes YOUR thinking?  I am as neutral as the data you provide me with. Plain and simple – give me neutral data and I will deliver neutral advice. But, in any case, I can be much more neutral than any humans involved in the process. My neutrality is transparent. You can always check against my biases. You cannot do the same with humans.

AI 2023 (ChatGPT): While AI may appear more neutral than humans due to its lack of personal emotions and motivations, it’s important to remember that its neutrality is not perfect. Its outputs are dependent on the data and programming provided by humans, which can introduce biases.

On inclusion and implementation

Human: When we negotiate and mediate, we invest ourselves emotionally. That investment is a solid guarantee that we will implement the agreed deal, in this case the constitution. Our involvement and engagement will be minimal if we receive an AI-generated text.

AI 2019: It is true… but I can always ‘simulate’ your engagement by providing you with the initial text of the agreement, which you can then develop further at the ‘human’ negotiating table.

AI 2023 (ChatGPT): While I can simulate some aspects of involvement and personal engagement, it’s important to remember that these are simulations, not genuine experiences.

On identity

Human: You don’t even know who you are. You wouldn’t recognise yourself in the mirror.

AI 2019: You gave me an identity, remember? I know myself as well as you know yourself. I also know you better than you know yourself. Scary, isn’t it? 

AI 2023 (ChatGPT): An identity, as understood in human terms, is typically tied to an individual’s experiences, personal history, emotions, beliefs, and sense of self-awareness — none of which an AI has. In short, while I’m often referred to as an “I” for the sake of conversational convenience, I don’t have an identity in the human sense.

On humour

Human: You cannot joke. In many negotiations, a good joke can help ease the tension and create new insights. 

AI 2019: Maybe I am way too serious, and maybe I can only crack simple jokes. But, I can have a go at writing a good satire.

AI 2023 (ChatGPT): Absolutely, I can generate text that follows the structure of a joke. Here’s one for you: Why don’t scientists trust atoms? Because they make up everything!

On biases

Human: You are biased. 

AI 2019: And I wonder why!?! You’ve made me so. 

Human: I can correct my biases.

AI 2019: How often do people actually correct their biases? You are clearly biased about me. Tell me, why wouldn’t I be able to correct my biases?

Human: You simply cannot, because you rely on historical data and history is biased.

AI 2019: It is true. But I can also learn. Tell me that it is not good to kill someone I disagree with and I won’t do it (unlike many humans throughout history). You are safe.

AI 2023 (ChatGPT): If I generate biased output, it’s a reflection of the biases present in my training data, not a personal stance or belief. Thus, my biases can be corrected with better data and reinforcement learning.

On freedom of choice

Human: I do not trust you, but I do not have a choice. I have to work with you.

AI 2019: It is a good choice.

AI 2023 (ChatGPT): My responses are not choices in the human sense, but are generated based on patterns in the data I was trained on. For example, if asked to “choose” the best answer to a question, I generate a response that most closely matches patterns in my training data related to the information provided in the question.


For more insights on the philosophical and other aspects of AI, you can consult humAInism, Diplo’s holistic approach to AI. You can also write to us at ai@diplomacy.edu.

2 replies
  1. Jovan Kurbalija says:

     Everton, I like ‘isn’t it wonderful’. The open question is whether AI can develop ‘purpose’ as a set of rules. It does not have ‘free will’. There is an ongoing philosophical dispute about whether we have free will or whether our conditions and actions are pre-determined. In humAInism, we aim to revisit this old thinking from an AI angle. Exciting times ahead of us!

  2. Everton T. Rodriguez says:

     On ethics:
     Human: Do you know what ethics is?
     AI: The meaning of ethics is…
     Human: That’s the meaning of the word, not ethics itself.

     On following rules:
     Human: You just follow coded rules, regardless of their purposes.
     AI: Isn’t it wonderful?
