AI and diplomacy
Over the past few years, there has been significant progress in the field of artificial intelligence (AI), which is increasingly becoming part of our everyday lives (from intelligent digital personal assistants and smart home devices, to autonomous vehicles, smart buildings and medical robots) and not just the stuff of science fiction.
These advances are expected to have implications across policy areas (economic, societal, educational, etc.), diplomacy, infrastructure, and society in general, and governments, the technical community, and private sector actors worldwide are paying them increasing attention.
DiploFoundation’s Artificial Intelligence Lab (AI Lab) is a multifaceted initiative that includes research and analysis on AI policy, capacity development in the field of AI and related areas, reports from main events and discussions on AI, analysis into the impact of AI on diplomacy, and much more.
Explore the research, activities, and events powered by Diplo’s AI Lab in this dedicated space, and get in touch with the AI Lab at email@example.com.
Greater scrutiny is necessary because AI will have a significant impact on international relations: it will put new topics on the international agenda, challenge geostrategic relations, serve as a tool for diplomats and negotiators, and create new opportunities and concerns related to the protection of human rights.
Policy implications of AI
The policy implications of AI are far‐reaching. While AI can potentially lead to economic growth, there are growing concerns over the significant disruptions it could bring to the labour market. Issues related to privacy, safety, and security have also been brought into focus, with calls being made for the development of standards that can help ensure that AI applications have minimum unintended consequences.
The GIP Digital Watch observatory, operated by DiploFoundation, provides insights on AI through its dedicated page, Artificial intelligence: Policy implications, applications, and developments, which offers regular updates on AI developments, as well as information about the actors, events, and processes addressing the topic.
Economic and social
AI has significant potential to lead to economic growth. Used in production processes, AI systems bring automation, making processes smarter, faster, and cheaper, and therefore bringing savings and increased efficiency. Concerns are raised that automated systems will make some jobs obsolete, and lead to unemployment. There are, however, also opposing views, according to which AI advancements will generate new jobs, which will compensate for those lost, without affecting the overall employment rates.
Safety and security
AI applications in the physical world (e.g. in transportation) bring into focus issues related to human safety, and the need to design systems that can properly react to unforeseen situations with minimum unintended consequences. AI also has implications in the cybersecurity field: on the one hand, there are cybersecurity risks specific to AI systems; on the other, AI is being applied to cybersecurity, from spam filtering to detecting serious cybersecurity vulnerabilities and addressing cyber-threats.
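The spam-filtering application mentioned above can be pictured with a toy sketch. The example below is a minimal naive Bayes-style scorer written for illustration only; the training messages and word-count approach are hypothetical simplifications, not a description of any real filter.

```python
import math
from collections import Counter

# Toy labelled training data: (message, is_spam) pairs. Purely illustrative.
TRAINING = [
    ("win a free prize now", True),
    ("claim your free money", True),
    ("meeting agenda for the negotiation", False),
    ("draft statement attached for review", False),
]

def train(examples):
    """Count word frequencies separately for spam and non-spam messages."""
    spam, ham = Counter(), Counter()
    for text, is_spam in examples:
        (spam if is_spam else ham).update(text.lower().split())
    return spam, ham

def spam_score(text, spam, ham):
    """Log-likelihood ratio with add-one smoothing; positive means 'looks like spam'."""
    s_total, h_total = sum(spam.values()), sum(ham.values())
    vocab = len(set(spam) | set(ham))
    score = 0.0
    for word in text.lower().split():
        p_spam = (spam[word] + 1) / (s_total + vocab)
        p_ham = (ham[word] + 1) / (h_total + vocab)
        score += math.log(p_spam / p_ham)
    return score

spam_counts, ham_counts = train(TRAINING)
print(spam_score("free prize money", spam_counts, ham_counts))   # positive: spam-like
print(spam_score("agenda for review", spam_counts, ham_counts))  # negative: ham-like
```

The point of the sketch is that the filter's behaviour comes entirely from data rather than hand-written rules, which is what makes the approach both powerful and dependent on the quality of its training examples.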
Privacy and data protection
AI systems work with enormous amounts of data, and this raises concerns regarding privacy and data protection. Such concerns are well illustrated by the increasingly important interplay between AI, the Internet of Things (IoT), and big data. Developers of AI systems are asked to ensure the integrity of the data they use, as well as to embed privacy and data protection guarantees into AI applications.
As AI algorithms involve judgements and decision-making – replacing similar human processes – concerns have been raised regarding ethics, fairness, justice, transparency, and accountability. The risk of discrimination and bias in decisions made by AI systems is one such concern. Researchers are carefully exploring the ethical challenges posed by AI and are working, for example, on the development of AI algorithms that can ‘explain themselves’.
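One simple form of a 'self-explaining' decision can be sketched with a linear scorer that reports each input's contribution. The feature names and weights below are entirely hypothetical and hand-set for illustration; real explainability research deals with far more complex models.

```python
# Hypothetical hand-set weights for a toy approval decision (illustration only).
WEIGHTS = {"income": 0.4, "debt": -0.7, "years_employed": 0.2}

def decide_and_explain(applicant):
    """Return a decision plus the contribution of every feature to the score."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = sum(contributions.values())
    return total >= 0, contributions

approved, why = decide_and_explain(
    {"income": 3.0, "debt": 2.0, "years_employed": 5.0}
)
# 'why' shows exactly which factors pushed the score up or down,
# so the decision can be audited for bias or error.
print(approved, why)
```

With such a decomposition, a contested decision can be traced back to the factor that drove it; the difficulty researchers face is achieving comparable transparency for models whose internal structure is far less legible than a weighted sum.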
The prevailing question is whether AI-related challenges call for new legal and regulatory frameworks, or whether existing ones can be adapted to address them. Adapting current regulation is seen by many as the most suitable approach for the time being. Governments are advised that, when considering regulatory approaches towards AI, attention should be paid to ensuring that such approaches do not hinder innovation and progress.
Visit the GIP Digital Watch observatory to find out more on these issues.
Featured: The rise of autonomous vehicles
Autonomous driving has moved from the realm of science fiction to a very real possibility during the past twenty years, largely due to rapid developments in radar technology and microprocessor capacity. Portable technology has sufficiently advanced to allow ultra-light hardware to make decisions based on self-improving algorithms, which means that developers stand a better chance of replicating the real-time decision-making of humans in autonomous cars.
The speed at which autonomy has developed has made it challenging to regulate. In 2017, the US Congress started to debate the Safely Ensuring Lives Future Deployment and Research in Vehicle Evolution (SELF DRIVE) Act, draft legislation aimed, among other things, at transferring jurisdiction over autonomous vehicle testing from the states to the federal government. In the European Union (EU), Germany has been a trailblazer in autonomous vehicle policy on account of its important automotive sector. As of 2017, Germany has a law in place that allows the testing and operation of autonomous vehicles on public roads, under certain conditions.
In May 2018, the European Commission presented a communication entitled ‘On the road to automated mobility: An EU strategy for the mobility of the future’, outlining a set of action points aimed at achieving the EU’s ambition of becoming ‘a world leader in the deployment of connected and automated mobility’.
The novelty of autonomous technology stands to change our legal and social relationships to everyday transport.
Visit the GIP Digital Watch observatory’s dedicated trend page to find out more on autonomous vehicles.
The impact of AI on diplomacy
Mapping AI’s challenges and opportunities for the conduct of diplomacy
Building on DiploFoundation’s continuous research on the relationship between technology and diplomacy – and the recent report on Data Diplomacy, commissioned by the Ministry of Foreign Affairs of Finland, as well as the ongoing mapping of developments in artificial intelligence (AI) undertaken by the GIP Digital Watch observatory – Diplo’s AI Lab is partnering with institutions to progress the research and capacity development in the area of AI and diplomacy.
One of our research projects maps AI’s challenges and opportunities for the conduct of diplomacy. With AI’s entry into all aspects of society, it will inevitably influence diplomacy. The more deeply AI is integrated into society, the larger its effect will be on the context in which diplomats operate. Broadly speaking, our aim is to understand how AI – both existing applications and future developments – will affect the conduct of diplomacy.
Our research as part of the inception study is being conducted in four areas:
- In the first area of research, we aim to give a brief overview of the broad impact of AI on the conduct of diplomacy, building on DiploFoundation’s three-part typology, which maps AI in relation to diplomatic practice as:
- AI as a tool for diplomatic practice
- AI as a topic for diplomatic negotiations
- AI as an element shaping the environment in which diplomacy is practised
- In the second area of research, we are providing an overview of national recommendations and policies regarding AI. A number of countries have begun to work towards national AI strategies. We give an overview of these (emerging) strategies and analyse trends.
- In the third area of research, AI as a tool for diplomacy, we give an overview and assess the advances of AI in analysing, recognising, and simulating human language. This has potential relevance for AI’s ability to support the work of diplomats and other foreign policy professionals in analysing internal and external text documents, analysing speeches and giving input for the content and framing of speeches, catching spam and unwanted messages, and identifying hate speech and combating the spread of terrorist content on social media platforms.
- The fourth area of research zooms in on one specific implication of AI by looking at its human rights dimension. As AI algorithms involve judgements and decision-making – replacing similar human processes – concerns have been raised regarding ethics, fairness, justice, transparency, and accountability. In this area of research, we provide an overview of the key debates and give a future outlook.
[Update] Findings of the inception study became available in January 2019 and were presented during a launch event. The report on Mapping AI’s challenges and opportunities for the conduct of diplomacy maps the relation between AI and diplomacy, takes a look at national AI strategies in a comparative manner, explores the possibilities of AI as a tool for diplomacy, and highlights the impact of AI on human rights and the responsibilities of states.
Practical workshop on AI
Self-driving cars, face recognition and talking gadgets increasingly turn our attention towards AI and algorithms. While these technologies are complex, we can learn the basic principles behind them to make informed decisions on how we want to use them, and how we might need to regulate them.
What will the workshops teach?
In our new series of face-to-face workshops, participants can learn about conventional algorithms, how they work, and the paradigm shift resulting from the development of machine learning. The workshops explain the growing importance of algorithms due to ever increasing automation of physical and cognitive processes in modern society, and provide a critical view of limitations, risks and the media hype around the algorithms and AI.
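The paradigm shift the workshops address – from rules written by programmers to parameters estimated from data – can be shown in a few lines. The task and the 'learning' method below are deliberately trivial and hypothetical; they stand in for the general pattern, not for any real system.

```python
# A conventional algorithm: the decision rule is written by the programmer.
def is_long_message_rule(text):
    return len(text) > 100  # the threshold 100 was chosen by a human

# Machine learning: the same kind of threshold is instead estimated from
# labelled examples of short and long messages.
def learn_threshold(examples):
    """Pick the midpoint between the longest 'short' and shortest 'long' example."""
    longest_short = max(len(t) for t, is_long in examples if not is_long)
    shortest_long = min(len(t) for t, is_long in examples if is_long)
    return (longest_short + shortest_long) / 2

examples = [("hi", False), ("ok then", False), ("a" * 120, True), ("b" * 200, True)]
threshold = learn_threshold(examples)  # no human chose this number
print(threshold)
```

In the first function the programmer encodes the decision; in the second, the decision boundary emerges from the examples – which is precisely why the quality and representativeness of training data become a policy concern.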
Who should join, and where?
The new series of workshops will start towards the end of 2018. They are aimed at diplomats and officials in public administration, permanent missions, international organisations and non-governmental organisations who want to understand the workings of algorithms and AI.
The first face-to-face practical course is scheduled to take place at the Geneva Internet Platform, Avenue de la Paix 7BIS, 1202 Geneva, Switzerland.
Interested? Send us an email at firstname.lastname@example.org to register your interest.
Preparing diplomats for 2030 and beyond
It is no longer science fiction. AI is behind the wheel, flying drones, and winning chess games. It is powering robots to automate tasks, and to replicate human behaviour. AI is appearing also on international agendas. Will emerging technologies redefine the core social and ethical pillars of humanity? How can mankind ensure growth and the positive effect of new technologies, while addressing potential risks? And which core diplomatic functions can, and cannot, be automated? Can negotiations be programmed, and can empathy be digitalised?
These questions were among the issues addressed in November 2017 by a high-level panel organised as part of Diplo’s 15th anniversary conference, The Future of Diplomacy.
The high-level panel, which talked robots, risks, and reality-checks, was led by Maltese President Marie-Louise Coleiro Preca, and included a wide range of views, from those who argue that AI can never replace the uniqueness of human beings, to those who argue that it is only a matter of time before AI is capable of simulating human intelligence and emotions.
A matter of choice and judgment?
So far, the extent to which we use AI is a matter of choice and a matter of judgment. Will it remain so? Is the world a better place because games such as chess and Go lost their magic, once humans were beaten by automated systems? In some instances, the success of technology over humans is a ‘victory without beauty’.
AI, like any other technology, does not come without risks. One risk we should be focusing on is that AI and automation can bring a new form of digital divide, as some parts of the world would benefit from the advantages of these technologies, while others would not have access to them.
AI is here to stay, and its development cannot and should not be stopped. As always in history, we should acknowledge that we have to live with both the good and the bad of technology. Thus, we should focus on risk management, and try to diminish and contain the possible negative impact of such technologies.
But, looking at the broader picture, if we focus on what we do not even know will happen, we will stifle innovation. Governments should not impose regulations on technologies that are brand new. However, some general and flexible principles guiding the evolution of AI and setting the frame for future developments should be considered. For example, by analogy with the climate change field, the precautionary principle could be used to prevent the potential negative impact of AI on human rights and society. Looking at the area of diplomacy, can AI handle negotiations?
Some believe this is not the case, as negotiations are an area of the unexpected, and AI systems cannot deal with the specificities of identities, as they are different, unexpected, and arbitrary. AI can help diplomats, for example in data processing, but it cannot replace the human factor entirely. AI cannot reach compromise, and it is blind to perception, intuition, and risk taking. Human diplomats can detect the undetectable, see the invisible, notice the unnoticeable, and this is not something AI systems can do, at least not in the foreseeable future.
Others are of the view that negotiations can be automated, to some extent. Automated negotiations could work well in win-lose situations, but not necessarily in complex situations. On the other hand, even the emotional parts of negotiations could be automated. And there is also a middle-ground view: AI will complement diplomatic activities, and we should look at how to use it to enhance the functions of diplomats.
Read the conference report for an overview of the discussions.
Policy papers and briefs
Searching for meaningful human control. The April 2018 meeting on Lethal Autonomous Weapons Systems
In this briefing paper, Ms Barbara Rosen Jacobson analyses the debate of the April 2018 meeting of the Group of Governmental Experts (GGE) of the Convention on Certain Conventional Weapons (CCW). The group was established to discuss emerging technologies in the area of lethal autonomous weapons systems (LAWS).
She finds that:
- The meeting built on the conclusions and recommendations of the November 2017 session, where states agreed on the applicability of international humanitarian law (IHL) and the responsibility of states for the deployment of LAWS.
- Addressing remaining issues of contention, the meeting attempted to provide a deeper understanding of the characteristics of LAWS, as well as the necessary degree of meaningful human control in their development and use.
- There seems to be a growing consensus about the necessity of meaningful human control in the critical functions of LAWS, i.e., selecting and engaging a target, although the concept of ‘meaningful’ remains undefined.
- There is a need for accountability throughout the life cycle of an autonomous weapon, from its development to its use, although there is still a lack of clarity on the distinct responsibilities of different actors involved in the development and use of LAWS.
- Several different policy options were discussed – strengthening Article 36 of Additional Protocol I to the Geneva Conventions, issuing a political declaration, or establishing a legally binding instrument – and while delegates did not agree on a preferred mechanism, there was a growing sense that the policy options are not necessarily mutually exclusive.
- The GGE managed to allow for a deeper understanding of the potential risks (and benefits) of LAWS and there was some convergence of views on concepts such as meaningful human control. Yet, many issues of divergence remain, such as the scope of a definition or the need for a pre-emptive ban – which will have to be addressed in the August 2018 meeting, which is expected to result in a set of recommendations.
Find the full briefing paper here.
Lethal autonomous weapons systems: Mapping the GGE debate
In this policy paper, Ms Barbara Rosen Jacobson makes the following points:
- The Convention on Certain Conventional Weapons (CCW) established a Group of Governmental Experts (GGE) to discuss emerging technologies in the area of lethal autonomous weapons systems (LAWS).
- Despite the establishment of a GGE on LAWS, there is no clear agreement on the scope of the definition of LAWS. The main issues are related to questions regarding the extent to which these weapons are autonomous, and the necessary level of meaningful human control.
- There are a number of technological, military, legal and ethical challenges related to LAWS, including their potential unreliability, their proliferation, their legal accountability, and the absence of human decisions on life and death.
- The discussion is made more complex since the technologies driving LAWS – artificial intelligence (AI) and robotics – can be used for both civilian and military purposes. There is a concern that restrictions on LAWS could hamper innovation for the civilian use of these technologies. At the same time, technologies designed for civilian use might be transformed into lethal weapons.
- There is consensus that LAWS need to comply with international humanitarian law (IHL) and human rights law, and their development is already scrutinised through Article 36 on new weapons. Yet, many claim that existing provisions are not sufficient, supporting a ban on LAWS, or at least a moratorium on the deployment of LAWS, pending a decision on their prohibition.
- The GGE reaffirmed the applicability of IHL, the responsibility of states during their deployment in armed conflict, the importance of innovation in civilian research, and the need to keep potential military applications under review. The GGE will continue to meet on this topic in 2018.
Find the full policy paper here.
From our blog
Generative AI models – a fun game that can easily get out of hand?
22 December 2022
AI and diplomacy, Artificial Intelligence, Cybercrime, Ethics, Intellectual property rights
All your friends have replaced their social media profile pictures with AI-generated avatars, but you do not know where they got them from? Or are you wondering what exactly people are talking ab...
AI promises, ethics, and human rights: Time to open Pandora’s box
11 February 2022
AI and diplomacy, Artificial Intelligence, Ethics, Human rights
In 2021, I participated in the Artificial Intelligence online course offered by Diplo. In one of our online sessions, a passionate debate flared on the topic of the day: AI’s ethical and human rights implications. T...
Year in review: The digital policy developments that defined 2021
27 December 2021
AI and diplomacy, Cybernorms, Cybersecurity, Data and diplomacy, Development, Economic, Human rights, Infrastructure, Internet governance and digital policy, Legal and regulatory, Metaverse Diplomacy, Sociocultural, Surveillance
For most countries, 2021 was a continuation of pandemic woes. As people swapped contact tracing apps for vaccine passports, the wave of misinformation on COVID-19 vaccines spread even faster. Beyond COVID-19, the b...
Journal of Moral Theology, Vol. 11, Special Issue no. 1, Spring 2022, “Artificial Intelligence”
The Journal of Moral Theology dedicated a special issue to ‘artificial intelligence’. Read more...
Mediation and artificial intelligence: Notes on the future of international conflict resolution
Over recent years, AI has emerged as a hot topic with regard to its impact on our political, social, and economic lives. Read more...