
AI promises, ethics, and human rights: Time to open Pandora’s box

Published on 11 February 2022
Updated on 05 April 2024

In 2021, I participated in the Artificial Intelligence online course offered by Diplo. In one of our online sessions, a passionate debate flared on the topic of the day: AI’s ethical and human rights implications. The discussion covered issues such as fairness, bias, privacy, discrimination, automation, inequalities, predictive policing, social scoring, and autonomous weapons. Despite all its promises, we agreed that AI posed serious threats to human rights online and offline, as well as growing ethical challenges. For AI to fully benefit humanity, significant measures are needed, combining positive incentives with stricter rules.

The AI revolution

The World Economic Forum (WEF) calls this transformation the ‘Fourth Industrial Revolution’, driven by a convergence of AI and other disruptive technologies such as big data, the internet of things (IoT), robotics, blockchain, and 5G.

There is no single definition of ‘artificial intelligence’. It is an ever-changing concept, an umbrella term referring to what machines have not achieved yet. AI generally refers to a set of sciences and techniques dedicated to improving the ability of machines to do things requiring intelligence. The Council of Europe defines AI as a machine-based system that makes recommendations, predictions, or decisions for a given set of objectives.


At the heart of this new revolution, with its broad and dual-use applications, AI is, like electricity, considered a general-purpose technology, impacting multiple industries and areas of global value chains. AI pervades our everyday world: machine learning powers GPS navigation and Netflix recommendations, while deep learning suggests email replies and unlocks smartphones with facial recognition. AI outperforms the best game players and has cracked 50-year-old biological challenges, while various AI-based tools could contribute to most of the UN sustainable development goals (SDGs).

All these examples attest to the ubiquitous nature of artificial intelligence. However, as with any new technology, enthusiasm for AI-driven opportunities has been tempered by concerns about its negative impacts, as new ethical, human rights, and governance challenges need to be addressed. Yes, AI can help fight diseases and fast-track vaccine development, save lives, and protect the environment, but if its development goes unregulated, it can also lead to deeper inequalities, injustice, and unemployment for many people around the world.

So how can AI’s positive potential be maximised while its negative ethical and human rights repercussions are anticipated and contained? In other words, borrowing from Greek mythology, if AI has been described as a ‘gift from the gods’, how can the ‘curse’ of opening Pandora’s box be avoided?

AI gaps

Interpretations of ethics diverge across cultures and perspectives. AI ethical standards and guidelines abound but remain voluntary, while human rights are enshrined in legally binding treaties such as the International Covenant on Civil and Political Rights (ICCPR) and the European Convention on Human Rights (ECHR).

For AI to respect ethics and human rights, several gaps need to be addressed:

  • Human-centric, ethical, and trustworthy AI: Are humans being put first when planning, designing, building, training, deploying, and governing AI systems?
  • Bias, discrimination, and fairness: Are biases being propagated through the data sets used to train algorithms? How transparent and explainable are the resulting decisions?
  • Surveillance, data rights, and privacy: Is AI monitoring and profiling behaviours without accountability and consent? Are rights to be forgotten or remain anonymous preserved?
  • Manipulation, freedoms, and democracy: Is fake news discrediting individuals and organisations or influencing elections? Is content moderation infringing on freedom of speech?
  • Life, dignity, peace, and security: Should policing and justice be delegated to algorithms? Is it ethical to entrust life-and-death decisions to autonomous drones?
  • Work, automation, and digital welfare: Is AI massively replacing or displacing jobs? Should social protection be outsourced to algorithms?

Both ethics and human rights help build effective safeguards: common rails promote scalable innovation across sectors, while guardrails prevent misuse and abuse by big tech.

Unpacking Pandora’s box

Given the variety, interdependence, and complexity of the issues, multiple approaches need to be combined in order to meet the AI challenge. AI should not be treated as a matter exclusively for technologists, to be sprinkled with some ethical and human rights magic powder. Rather, these concerns should be integrated into the conversation from the beginning. Institutional regulation and self-regulation can be used in synergy, allowing businesses a degree of flexibility in an ever-evolving field.


Awareness and training

As a sociotechnical system, AI depends on the goals it is given, the data sets it is trained on, and the contexts in which it is deployed. Its impacts, positive or negative, reflect the values of its designers and operators. Diversity in AI teams matters, and it requires the active inclusion of women and minorities. AI community members also need training on inclusion, ethics, and human rights, so that these principles are embedded in models and data sets throughout the AI life cycle. Policymakers, in turn, need AI literacy to communicate effectively with developers and to make the right policy calls for their constituents.

Self-regulation

As they design future AI systems, private companies have a strong incentive to adhere to ethical and responsible norms and high standards. Such principles and codes of conduct are communicated by industry leaders or developed by standard-setting organisations such as the Institute of Electrical and Electronics Engineers (IEEE) and the International Organization for Standardization (ISO). Adhering to them signals a genuine commitment to responsible and ethical AI in the eyes of consumers and governments, and may even confer a competitive advantage. Inspired by Isaac Asimov’s laws of robotics, China recently unveiled its governance principles for AI development, while the UN Educational, Scientific and Cultural Organization (UNESCO) adopted a global agreement on AI ethics.

Policymaking

AI’s characteristics make it complex to regulate, and policymakers must perform a balancing act between protecting society and preserving future innovation. Many are calling for immediate regulation to check big tech’s enormous power, which lacks democratic legitimacy. Former UK prime minister Tony Blair compared the big tech companies to ‘public utilities’ with no accountability that can do ‘enormous good but also harm’.

In April 2021, the European Commission proposed a regulation banning or restricting AI applications according to their assigned risk levels, with potentially the same global impact as the General Data Protection Regulation (GDPR). More recently, China’s Cyberspace Administration released a draft proposal to regulate content recommendation systems, including some best practices promoting transparency and privacy protection. If adopted, it would greatly expand the government’s control over data flows and speech.

AI is neither a curse nor a gift from the gods. It is up to human beings to develop the tools, create the institutions, and build the capacities needed to shape future AI technologies in a safe, ethical, and responsible way so that they uphold human rights and human well-being. This requires developers, businesses, policymakers, and civil society working together towards ethical, human-centric, and trustworthy AI.

Mouloud Khelif is a strategy and sustainability consultant based in Geneva, Switzerland, working in both the private and public sectors as well as in academia. He recently received an Advanced Diploma in Internet Governance from Diplo.

Browse through our alumni blog posts at Diplo Alumni Blog
