AI and international peace: A new kid on the UN Security Council block
The UN Security Council has held its first meeting on AI and international peace and security. Bearing in mind that diplomacy typically moves at a glacial pace (understandably, given the many factors that shape diplomatic positions), it is astonishing to see diplomats and governments react so quickly to an emerging technology: only six months after ChatGPT's explosive adoption (it reached 100 million users in January 2023) revealed the technology's potential, we have seen a G7 ministerial summit and then the UN Security Council discussing AI perils. The USA, China, the EU, and the UK, among others, are promptly building national regulatory frameworks for AI.
One reason for this timely reaction is that politicians and diplomats are becoming sensitised (though most likely not yet fully aware) to just how large and rapid a change current and future AI models can bring throughout our societies. The other reason (in particular for big states used to having control of, and a monopoly over, cutting-edge technology) is alarm at the advances of AI in the open-source community around the world: bottom-up AI development promises even faster progress than that of locked corporate settings, but it also makes AI available to almost anyone in the world, making accountability a very complex task.
But why are there concerns about AI’s impact on international peace and security?
One, AI is turning conventional weapons into lethal autonomous weapon systems (LAWS). If we let weapons decide what to do about targets on a battlefield (or off it) without any human control, can we be confident they won't make severe miscalculations about whom to shoot at and whom not to? Take, for instance, the difficulty of distinguishing civilians from armed rebels, or a wounded or surrendering group of soldiers from an ordinary battalion. Some political processes, like the Group of Governmental Experts under the Convention on Certain Conventional Weapons, have discussed this topic for years, but without much success beyond confirming that existing international law applies to LAWS. However, UN Secretary-General Guterres has expressed the need for a legally binding agreement, to be concluded by 2026, that would prohibit the use of AI in autonomous weapons for war — an extremely important but rather ambitious plan.
Two, AI is used to enhance cyberattacks. Cyberattacks have already come to the forefront of global security due to the digitalisation of everything: from energy supplies, schools, and hospitals, via companies and institutions, to our everyday lives that fit into smartphones; and everything that is connected can (and ultimately will) be hacked. AI can, for instance, increase the proficiency of phishing messages (remember, 90% of attacks start with human error, in spite of the adage 'don't click on anything suspicious'), enable the automation of botnets, distributed denial-of-service (DDoS) attacks, and ransomware, and improve mechanisms to find and exploit vulnerabilities in products while evading antivirus and other intrusion detection systems. Nevertheless, AI can also help defenders, and most likely this will be a game of cat and mouse. Yet there is another layer of security challenges, as AI itself can be insecure: an exploited vulnerability in the software code that implements an AI algorithm, or a training database altered by an attacker, can yield wrong results — possibly without anyone even noticing there is a fault! Such a faulty or compromised AI system could be deployed in services ranging from loan management, via elections, to military systems.
Three, AI brings disinformation to a whole new level. Artificially created images, audio, and video recordings can now hardly be distinguished from those created by humans. Deepfakes have become a buzzword, and it is only a matter of time before a perfect mimicry of a public speech by a president or prime minister explodes across social media and causes unrest, at best. Other, less bombastic, uses of AI algorithms include deciding what content we see on social networks (which has already changed people's perceptions of everything from vaccination to the conflict in Ukraine), as well as amplifying hate speech; the risk of losing what is left of the public's trust in institutions, science, and society seems to be increasingly recognised by both academics and politicians.
Four, AI (especially with the support of upcoming quantum computing capabilities) will enable breakthroughs in other fields of science, like chemistry and biology. As always, this may bring many benefits, but it can also lead to the creation of new chemical weapons or bioweapons, with the potential risk of mass destruction. Nothing new, one could say: governments have always done things like this. The problem is, again, the mass availability of AI as a tool to model such breakthroughs, without necessarily needing expensive and complex research laboratories. Just as it is becoming harder to control whether someone will use ChatGPT to learn how to make an explosive device from chemicals everyone has in their bathroom, it may be hard to control whether AI will be used to model future homemade weaponry.
The risks are becoming clearer. The pace of AI development is accelerating exponentially. Regulations are emerging. Diplomats and governments have woken up. But will there be agreements, and how soon? Will those agreements be meaningful and implementable? Will governments realise they need to radically change their modus operandi and cooperate with other stakeholders in shaping and implementing the solutions? Let's make sure they do.
‘In managing, promoting and protecting its [internet’s] presence in our lives, we need to be no less creative than those who invented it.’
–UN Secretary-General Kofi Annan, 2004