- Progressive development of international law and weaponry
The making of international law is not a one-time event. Nor can it be reduced to a negotiating process that starts on day A and ends on day Z. Like love, the making of international law is in the air, everywhere we look around in multilateral diplomacy.
This explains why the founding jurists of the United Nations, in their infinite wisdom, retained in the Charter the tender terminology: progressive development and codification of international law.
Yet, this nice expression insinuates that there is always progress in the development of international law. This is true in the sense that there is more international law (more UN conventions, regional treaties, states parties, courts, and – nothing is perfect – even lawyers).
In fact, international law may face powerful obstacles and setbacks which not only block its progressive development, but also force it to take a step back occasionally. For example, some might say that the economic and financial crisis weakened or even annihilated several economic and social rights which were the pride of well-off societies in Europe for decades. Others will claim that anti-terrorist policies actually helped terrorists fulfill their dream, if that dream was to damage Western democratic societies based on human rights and the rule of law. Even worse, new guys in town may say that current surveillance techniques and policies are making George Orwell turn in his grave and Big Brother look like a boy scout playing with Lego.
This kind of challenge makes people and organisations evaluate whether international law as we know it is badly damaged. Technological and military progress counts among the factors that may indeed block or reverse the progress of international law. This brings me to one of the recent debates around the Human Rights Council in Geneva.
Before proceeding, let me remind you of the truism that, in diplomacy, words are important and they have to be carefully chosen. In the ensuing paragraphs you will have a choice between two terms. Of course, this will happen only until (if ever) the UN General Assembly unanimously decides on the use of a single, boring expression, but one which will have the same meaning for everyone, in every sight, in every sound, as Paul Young might sing, had he been solicited to intervene in our debate.
The debates I am alluding to concern the possible impact of new, unmanned weapons on public international law in general, and on international humanitarian law in particular.
ALEX IVANOV - THE PRINCIPLE OF HUMANITY
- Killer Robots, the rude version
Human Rights Watch launched a study on Killer Robots and UNITAR organised a debate on the same issue. The study represents a comprehensive and articulated analysis of the complex relationship between a forthcoming generation of weapons and international law.
Killer Robots is a user-friendly term apparently handpicked to allow even fresh graduates in video game sciences to understand to what extent the new stuff may affect their future (whether they go abroad to fight terrorism or stay home as civilians).
Human Rights Watch defines Killer Robots as fully autonomous weapons that could select and engage targets without human intervention. These weapons are not only unmanned, they are also generally detached from any human involvement. They act on their own when selecting which targets to attack. Killer Robots are different from any weapons used today, including drones. The latter are also unmanned, but they still require a human command to strike. At present, Killer Robots do not exist as such, but technology is moving in the direction of their development. Faster than international law is moving in the direction of codification, I would add … if asked.
For his part, Christof Heyns, the UN Special Rapporteur on extrajudicial, summary or arbitrary executions, presented a study on Lethal Autonomous Robotics and the Protection of Life. He defines Lethal Autonomous Robotics as robotic weapon systems that, once activated, can select and engage targets without further intervention by a human operator.
Perspicacious readers that you are, you have no doubt already noticed that Human Rights Watch and Christof Heyns are talking about the same thing. The academic and euphemistic Lethal Autonomous Robotics are the same as the rude and vulgar Killer Robots.
Human Rights Watch believes that international law prohibits the development and use of such weaponry, given that it violates crucial requirements of international humanitarian law and the Martens Clause.
Killer Robots cannot fulfill three crucial principles of international humanitarian law, namely distinction, proportionality, and military necessity, and therefore ought to be prohibited.
Experts may skip the following lines, but I ought to remind colleagues who took Diplo’s courses on multilateral diplomacy that the principle of distinction requires states to distinguish between the civilian population and combatants in warfare. Killer Robots will not be able to make this distinction.
The principle of proportionality prohibits acts where civilian harm outweighs military benefits. Killer Robots will lack human skills in judgment and will thus not be able to judge how much harm is proportionate to any benefit.
The principle of military necessity requires that any military act be necessary, that is, undertaken only in the absence of less violent remedies for the conflict. As efficient as they are supposed to be, Killer Robots will not be able to make reasonable judgments about what is militarily necessary and what is not.
The Martens Clause says that civilians and combatants remain under the protection and authority of the principles of international law derived from established custom, from the principles of humanity and from dictates of public conscience. Killer Robots, with all due respect to their creators, contravene principles of humanity and public conscience.
Even worse, Killer Robots will increase the likelihood of conflict: states will not need to sacrifice their own people, so wars are likely to happen more frequently. Given the absence of human empathy in selecting and attacking targets, Killer Robots are likely to render conflicts more brutal.
Human Rights Watch believes that, undoubtedly, the development and use of Killer Robots contravenes international humanitarian law and recommends that states:
- Prohibit the development, production, and use of fully autonomous weapons through an international legally binding instrument.
- Adopt national laws and policies to prohibit the development, production, and use of fully autonomous weapons.
- Commence reviews of technologies and components that could lead to fully autonomous weapons.
Disgusting killers, no? Well, it seems to be so. Do not worry, in Part 2, we will meet their more emancipated version, the Lethal Autonomous Robotics, and look at the differences. If any!
Notes:
- If you forgive me the liberty of paraphrasing Paul Young on such a serious issue!
- Like someone you all know, but whose name I will not mention without his permission.

References:
- Human Rights Watch & International Human Rights Clinic at Harvard Law School (2012) Losing Humanity: The Case against Killer Robots.
- Human Rights Council (2013) Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns. Document A/HRC/23/27, 9 April 2013.