The design of AI systems is often framed in terms of innovation, efficiency, and speed. However, these metrics alone cannot tell us whether AI serves society well.
On day 5 of the ‘AI, Governance and Philosophy – A Global Dialogue’ journey, we had a chance to meet young students from the Jiaxing campus of the Yangtze River Delta Research Institute and listen to their views on the various challenges posed by AI technology. It made me wonder: what can we learn from them, as one of the stakeholders in global AI governance discussions?
Drawing inspiration from traditional Chinese philosophy (as discussed from the start of the Dialogue), we can find a robust ethical framework for grounding AI futures in continuity, harmony, and moral responsibility across time and generations.
The Confucian view: Relationship, responsibility, and continuity
In Confucian ethics, the relationship between generations is not adversarial but complementary. Elders and youth are bound together in a web of mutual obligations: elders offer moral guidance and long-term vision, while youth bring dynamism, creativity, and renewal. This arrangement is not a hierarchy of power but a system of relational balance, a perspective that offers a much-needed antidote to age-fragmented discussions around AI.
Over the past several days, I’ve come to see that this is far from a ‘nice theory’. Having several undergraduate and postgraduate students accompany us throughout this journey has allowed me to observe this kind of relationship in practice every day. On one level, the roles are clearly practised (and respected); on the other, the relations between students and their teachers are warm and personal, not much different from the ones we see in parent-child dynamics.
Young people engage with technology in immediate and often deeply personal ways – from video games to social media to learning platforms – making their relationship with it more immersive than that of most policymakers or designers. Yet their voices are rarely institutionalised in global technology and AI policy debates dominated by the older generation.
If youth bring urgency, elders offer depth. In many cultures, especially in East Asia, elders are seen as custodians of collective wisdom, ethical tradition, and long-term perspective. In the face of fast-moving technologies, their role becomes more vital, not less. They can remind us that the most difficult challenges around dignity, responsibility, and meaning are not new problems, but enduring human questions.
Traditional philosophy reminds us that the future is an extension of the past, not a clean break from it. With that in mind, youth should be treated not just as digital natives but as ethical stakeholders in AI futures, whose concerns, ranging from social justice to mental health and environmental sustainability, can bring critical perspectives to how AI is designed and deployed. Combine this with the traditional role of older people as providers of the moral foundations that anchor innovation, and we may find a way to build an AI-enabled society capable of weathering future storms.
Western tech culture often romanticises disruption: break things, move fast, leave the past behind. Chinese philosophical traditions, by contrast, emphasise gradualism, harmony, and the evolution of moral conduct over time. This ethic of continuity over rupture offers a compelling philosophical alternative for AI development. Rather than reinventing the ethical wheel, we can draw from long-standing traditions that already know how to balance novelty with stability, innovation with reverence, and progress with caution.