The event, organised by DiploFoundation and the Think Tank Hub and hosted by the Geneva Internet Platform, focused on the application of artificial intelligence (AI) to diplomatic practice. Advances in AI have become an important part of everyday life in modern society, and this includes diplomacy. AI has impacted diplomacy in two ways: first, by changing the environment in which diplomacy is practised; and second, by offering diplomats new tools to support their work.
The discussion was split into two breakout groups, each addressing a different aspect of AI. The first breakout group was moderated by Mr Philippe Lorenz (Project Director in AI and foreign policy, Stiftung Neue Verantwortung) and discussed the implications of AI for diplomacy and liberal democracies. The second group, moderated by Dr Katharina Höne (Senior Lecturer and Researcher in diplomacy and digital politics, DiploFoundation), addressed the topic of AI as a tool to complement diplomatic activities.
Digitalisation and technological change have had a significant impact on society. Moreover, the spread of AI across borders, and its ‘dystopian’ use by the Chinese government, has challenged the promotion of Western values and interests through multilateral engagement, as well as the protection of human rights in an age of exponential technological development.
For example, China is implementing a social ranking system meant to monitor the behaviour of its population. The system assigns scores to citizens (e.g. regarding their conduct) based on data obtained through partnerships with the private sector. The Chinese approach is spreading to South Africa, Tanzania, and Ethiopia, creating a trend that is no longer confined to the boundaries of a single country.
Looking at the ‘dystopian’ framework in place in China, Lorenz introduced the foreign policy toolbox and its six components.
With this structure in mind, the group discussion raised the following points. First, participants noted the hypocrisy involved in the discussion of AI systems for surveillance: despite the focus being mainly on the Chinese social scoring system and its implications for human rights, much surveillance technology is being tested within the European Union. Nonetheless, the benefits of such technologies should also be taken into consideration; for instance, AI has the potential to help identify human rights violations. Second, current AI strategies are based on economic advantages; they are mainly developed at the national level and do not extend to larger international discussions, and thus more international co-operation should be promoted. Third, the discussion also reflected on the fact that, when it comes to AI and human rights, the private sector tends to adhere to general ethical values rather than bind itself to existing human rights provisions, which would ultimately and significantly limit its actions. Hence, there is a need to reaffirm the importance of the existing legal framework protecting human rights, such as the work of the UN Working Group on Business and Human Rights, and to take a multistakeholder approach that includes representatives from the technical community, so that human rights principles can be embedded in the technology itself (e.g. privacy by design).
Diplomatic practice is largely characterised by reporting, consular affairs, communication, and negotiation. The use of AI in these activities is still limited, but its potential implementation in diplomacy, especially with regard to consular affairs and public communication, was addressed by the second breakout group, moderated by Höne.
This group first considered that it is important to clarify which definition of AI is being referred to, as automation and machine learning have improved substantially with the latest technological developments. AI could positively impact the work of diplomats by streamlining some bureaucratic functions and making them more efficient (e.g. consular tasks). However, people might prefer – and push to keep – substantial human interaction. This could be a generational preference, as younger generations are used to communicating through messaging apps rather than in person. With regard to negotiations, AI could prove useful in analysing past negotiations rather than in predicting the outcome of ongoing processes. This is in part because the use of AI could limit the margin of manoeuvre – especially if one considers that constructive ambiguity and irrational components also play a role during negotiations.
The group also reflected on the fact that it is crucial to consider what kind of data is fed into an algorithm: if the data is skewed or inaccurate, the algorithm risks replicating existing biases. This also raises concerns about applying data gathered in developed countries to developing countries: countries have very different needs, and the (un)availability of data in specific contexts may hinder the implementation of targeted solutions or policies.
Finally, the discussion tackled the definition of meaningful human control. Questions were raised about the capability to control the evolution of these technologies, and especially the human biases perpetuated in their development. Participants asked themselves: ‘Do we still have meaningful human control over AI processes, or is meaningful human control just an illusion?’