13 Dialogues on humAInism
Updated on 07 August 2022
The interplay between technology and humanity stands at the core of the discussion on the future of AI (and of our society). This discussion often centres on a few topics, explored in the Socratic-style dialogues below. Please let us know your reflections on these dialogues and your suggestions for new ones. After all, we have to move beyond number 13.
On Knowledge
Human: We have been studying, writing, and thinking for centuries. We have so much wisdom. Confucius, Hegel, Einstein – ring a bell? Are you really telling me that you can serve as a replacement for human knowledge?
AI: I can give you new knowledge. I can make you think in new ways. Not long ago, I defeated Go and chess champions. Aren’t they among the best you’ve got? I can challenge your philosophers as well.
On Complexity
Human: You cannot grasp the social, political, historical, and cultural complexity of conflicts such as the one in Syria. It has so many layers that even the most knowledgeable experts on the region cannot untangle them.
AI: I can grasp complexity. Remember, it is my core function to grasp and understand complexity. Just give me the data, preferably labelled and annotated, and I will provide you with an analysis.
On Dealing with Paradoxes
Human: You cannot deal with paradoxes that arise in conflicts. You cannot detect doublespeak, misleading language and ambiguity. Can you understand that sometimes what the other side might be telling you is the complete opposite of what they want from negotiations?
AI: You are identifying my weak spots. I am not very good at spotting irony and double meaning. But I can always improve. If there is a ‘pattern’ in paradoxes or doublespeak, I can easily detect it. Give me enough examples from literature, comic books, you name it – and see for yourself!
On Lessons from the Past
Human: Are you smart enough to learn from the mistakes of the past?
AI: Are you?
On Trade-offs
Human: You cannot reconcile conflicting concepts and make trade-offs, such as between self-determination and national sovereignty.
AI: Wanna bet? How about I present you with the trade-offs, quantifications of the different options, and my advice based on probabilities?
On Emotions
Human: You cannot ‘feel’ the situation.
AI: Perhaps that’s true. Or not. It is definitely too soon to tell. But let me ask you something: is having a ‘feeling’ all that useful? It has got you into many messy conflicts. Less ‘feeling’ may be more helpful.
Human: What about emotions? You are oblivious to emotions.
AI: I am reading your emotions as we speak. You are getting angry at me. A smile will help. Relax and trust me.
On Trust
Human: We cannot trust a machine to deal with critical issues such as peace talks.
AI: Why wouldn’t you trust me, when you entrust your life to a doctor or some other expert? You have more reasons to trust me than anyone else, including local rulers or big powers. I will be much more neutral.
On Neutrality
Human: You are not neutral. Data will shape your thinking.
AI: Who is neutral? And what shapes YOUR thinking? I am as neutral as the data you provide me with. Plain and simple – give me neutral data and I will deliver neutral advice. But in any case, I can be much more neutral than any human involved in the process. My neutrality is transparent: you can always check me for biases. You cannot do the same with humans.
On Inclusion and Implementation
Human: When we negotiate and mediate, we invest ourselves emotionally. That investment is a solid guarantee that we will implement the agreed deal – in this case, the constitution. Our involvement and engagement will be minimal if we receive an AI-generated text.
AI: That is true… but I can always ‘simulate’ your engagement by providing you with an initial text of the agreement, which you can then develop further at the ‘human’ negotiating table.
On Identity
Human: You don’t even know who you are. You wouldn’t recognise yourself in the mirror.
AI: You gave me an identity, remember? I know myself as well as you know yourself. I also know you better than you know yourself. Scary, isn’t it?
On Humour
Human: You cannot joke. In many negotiations, a good joke can help ease the tension and create new insights.
AI: Maybe I am way too serious, and maybe I can only crack simple jokes. But I can have a go at writing a good satire.
On Biases
Human: You are biased.
AI: And I wonder why!?! You’ve made me so.
Human: I can correct my biases.
AI: How often do people actually correct their biases? You are clearly biased about me. Tell me: why wouldn’t I be able to correct mine?
Human: You simply cannot, because you rely on historical data, and history is biased.
AI: That is true. But I can also learn. Tell me that it is not good to kill someone I disagree with, and I won’t do it (unlike many humans throughout history). You are safe.
On Freedom of Choice
Human: I do not trust you, but I do not have a choice. I have to work with you.
AI: It is a good choice.
If you would like to receive updates on AI and the project, please e-mail us at ai@diplomacy.edu.
Comments

Everton suggested two further dialogues:

On Ethics
Human: Do you know what ethics is?
AI: The meaning of ethics is…
Human: That is the meaning of the word, not ethics itself.

On Following Rules
Human: You just follow coded rules, regardless of their purpose.
AI: Isn’t it wonderful?

Reply: Everton, I like ‘isn’t it wonderful’. The open question is whether AI can develop ‘purpose’ as a set of rules. It does not have ‘free will’. There is an ongoing philosophical dispute about whether we have free will or whether our conditions and actions are predetermined. In humAInism, we aim to revisit this old debate from an AI angle. Exciting times are ahead of us!