The AI, Governance and Philosophy – A Global Dialogue has now concluded and the impressions have settled, so the time has come to reflect on the lessons learned and the experience lived, and to try to answer the underlying question: how do we ensure that it is we, humans, who will govern the future development of artificial intelligence (AI), rather than ending up subjected to it?
AI systems increasingly shape our lives: our work, communication, decision-making, and even diplomacy. Yet the question of how to properly govern such a powerful technology seems to be sidetracked by the pursuit of profit. While technical standards and ethical checklists abound, the deeper philosophical question remains unresolved: Can we build a truly inclusive and legitimate AI governance framework in a world of diverse worldviews?
In this deeply interconnected world of ours, where technology intersects with diplomacy, philosophy, and power, we must ask not only what is technically possible, but also what is ethically justifiable and culturally resonant. This means moving beyond superficial appeals to ‘universal values’ and embracing a model of governance grounded in philosophical pluralism, intercultural respect, and shared human aspiration.
Universal values and particular terminology
Talk of embedding ‘universal’ values into AI, such as fairness, transparency, or autonomy, often reflects values rooted in specific philosophical traditions, especially liberal Western thought. While these ideals are valuable, presenting them as globally agreed-upon can obscure cultural diversity and, in the worst-case scenario, lead to digital colonialism.
For example, the Western emphasis on individual autonomy may not resonate with Confucian values of relationality and harmony. Similarly, transparency as a moral imperative may take different forms across cultures, being refracted through ritual, trust, hierarchy, or collective accountability.
However, we must pause here and remind ourselves that behind all these different and particular terms and definitions, which are necessary but still limiting (a point that resonates with the Daoist tradition, which cautions that ‘the Dao that can be named is not the real Dao’), lie shared concerns and aspirations. These are the same across cultures, eras, and belief systems, and are deeply rooted in our biological needs: we want to be fed, clothed, safe, and loved, and to live in peace in order to secure the well-being of our children.
In Confucian-style diplomacy, moral cultivation and respectful listening (ting, 听) are prerequisites for dialogue. Similarly, African Ubuntu philosophy emphasises relational humanity (‘I am because we are’), while Islamic thought grounds governance in justice and moral responsibility (adl and amanah). These traditions offer distinct pathways toward shared ethical commitments.
A genuinely inclusive AI governance framework, therefore, must acknowledge these different expressions of the same commitments as a resource that may help us expand our collective moral imagination. And this is why philosophical pluralism is not merely an academic idea – it has concrete relevance for diplomacy and digital governance. When designing global frameworks, we should both work on reaching a technical consensus AND foster intercultural dialogue.
Multilateralism, when informed by such traditions, becomes not only procedural but philosophically grounded, thus capable of mediating between competing worldviews without defaulting to a single ideological centre.
From ethical checklists to moral convergence
In practice, this means that rather than seeking a singular ethical blueprint for AI, we might aim for ‘moral convergence’: identifying overlapping values across traditions while respecting their differences in emphasis, expression, and justification.
Some promising areas of convergence include:
- Human dignity: Revered across Confucian, Kantian, and Islamic ethics.
- Responsibility and stewardship: Found in Christian, Daoist, and indigenous worldviews.
- Harmony and balance: A core value in Chinese and Buddhist thought, offering an alternative to zero-sum notions of control.
What we have learned during this Global Dialogue, and also seen implemented in practice, is that China’s own rich ethical traditions offer critical insights for AI governance, insights often absent from global policy discourse.
For example, Confucianism stresses how moral duties arise from roles and relationships, not abstract individuals or deities (relational ethics). It promotes social harmony through cultivated practice guided by virtue rather than mere rules (rituals), and discusses how a leader’s moral character and duties are central to political and ethical life (responsibility over rights).
These values can inform governance models that prioritise relational accountability, ethical cultivation, and social cohesion, offering alternatives to transactional, compliance-driven frameworks.
This is why I dare to say that the future of AI governance will be shaped not only by technical experts and regulators, but also by philosophers, educators, diplomats, and cultural practitioners. One such possible governance framework would:
- Embed intercultural philosophical engagement into policy-making processes.
- Create transcultural forums where ethical priorities are debated, not assumed.
- Prioritise human-centric governance that protects not only individual rights, but community well-being, intergenerational equity, and planetary responsibility.
In that sense, rather than imposing universal values from above, we would be cultivating them from the bottom up, acknowledging intercultural ethics as a mosaic of traditions, values, and visions that can guide us toward common goals without erasing differences.
I am aware that this may sound like a platitude, or a wishy-washy utopia, but I see it as a pragmatic pluralism: one that acknowledges that no single value system can claim global dominance, yet insists that a shared ethical life is still possible through humility, dialogue, and mutual recognition. In my view, this is a sine qua non for a future in which humanity, as a whole, keeps the upper hand over a technology that makes a good servant but would otherwise prove a terrible master.