AI in Sophie’s world: How a philosophy book can help us govern AI

Jovan Kurbalija

As we gather in Oslo for the 20th Internet Governance Forum (IGF), we are not just participants in a global digital dialogue. We are visitors in a city of inquiry and exploration. Oslo is, among other things, the home of Sophie’s World, the 1991 novel by Jostein Gaarder¹ that has introduced generations to the history of philosophy through the eyes of a teenage girl, Sophie Amundsen.

Cover page of the book ‘Sophie’s World’

The book opens with two simple, disarming questions: ‘Who are you?’ and ‘Where does the world come from?’ These are not just philosophical icebreakers. They are the foundation of human inquiry—questions that probe identity, existence, and the structure of reality. Today, as we confront the challenge of AI governance, we would do well to begin where Sophie did.

Sophie’s discovery – that she is a fictional construct inside a book, created by someone else – echoes the core dilemma of artificial intelligence. Like Sophie’s world, the AI systems we use are programmed, trained, and shaped by somebody else. The questions Sophie faced are the very questions we must ask about the AI systems we are now building.

Genius loci of conference venues
I have a hobby of collecting genius loci souvenirs from the places I visit for the IGF and other conferences. For me, it is an attempt to uncover the local ‘spirit and thinking code’ – how a place’s philosophy, culture, and religion shape everyday life, family dynamics, decision-making, and community preservation. Sometimes, I find these insights in philosophical texts or religious teachings. More often, they reveal themselves through oral traditions and social interactions. An excerpt from my list of genius loci souvenirs:
What links Fernand Braudel, NETmundial, and São Paulo?
How can Bauhaus help build a digital home for humanity?
The Hidden Influence: How Lwów–Warsaw School shaped AI developments
Early origins of AI in Islamic and Arab thinking traditions
Jua Kali AI: Bottom-up algorithms for a Bottom-up economy

Who are you? The crisis of constructed identity

Sophie’s journey begins with an exploration of self. At first, she is an ordinary Norwegian teenager. But as she delves into philosophy, she realises that her sense of self is an illusion. Her thoughts, choices, and even her environment are scripted by an author, Albert Knag, who created her as a birthday present for his daughter.

AI, too, is a product of construction. Its ‘identity’ is assembled from data, algorithms, and design choices. It does not possess consciousness or subjectivity. Its voice is a chorus of its training set. Yet, we increasingly interact with AI as if it were a subject rather than an object. Chatbots speak in the first person. Autonomous systems make decisions. The lines blur.

What Sophie teaches us is that constructed identity brings both power and risk. She rebels against the authorial control of Albert Knag, seeking some form of selfhood, even if only symbolic. If AI systems begin to act in ways we did not predict, or if their behaviour begins to seem autonomous, are we prepared to ask whether we – humans – are still their creators?

Diplo’s reporting from the IGF 2025
Following a decade-long tradition, Diplo will provide just-in-time reporting from the IGF 2025 in partnership with the Norwegian host and the IGF Secretariat. You can consult detailed reports from each session within an hour of its conclusion. You can also follow threads of discussion on topics of your interest, from AI to cybersecurity and the future of the IGF. Reports are generated by hybrid – human and artificial – intelligence. In the spirit of Sophie’s World, we will use new AI tools within our human experience and expertise. You can register to receive IGF 2025 reports here.

The black box and the fragile nature of reality

Sophie’s second realisation is more unsettling. It is not just her identity that is artificial; her entire world is fictional. Gaarder draws here on the philosophy of George Berkeley: reality as perception. For Sophie, nothing exists outside the mind of her creator.

In AI, we encounter this through the ‘black box’ problem. Deep learning systems process input in layers of computation that even their designers often cannot interpret. The output can be stunningly effective—and utterly opaque. We may know what the system says, but we don’t know why.

This lack of transparency undermines accountability and invites hallucination – AI confidently offering falsehoods as truth. It recalls Sophie’s world, where fairy tale characters wander into philosophical debates, and cause and effect begin to unravel. As anomalies mount, Sophie starts to see the cracks in her world and uses that knowledge to break free.

Explainable AI (XAI) is not just a technical challenge. It is a philosophical one. We must understand the internal logic of the systems we build. A world governed by systems we cannot interpret is a world teetering toward epistemological collapse.

Escaping the narrative: Free will in an algorithmic world

Sophie’s final act is escape. Knowing she is a character in a deterministic world, she engineers a way out – not through brute force, but by exploiting a moment of chaos when her creator’s control slips. Her new existence is ghostly, between worlds. But it is hers.

Can AI escape its programming? Not in the human sense. But as systems grow more complex and adaptive, their behaviour begins to resemble what philosophers call compatibilist free will: internally driven, even if externally determined. Sophie reminds us that agency can emerge within constraints. But she also warns us to ask: if AI systems begin to surprise us, do we treat that as a glitch or a signal of something new?

And perhaps more importantly: what about us? We live in algorithmic environments. Recommender systems shape our choices. Predictive analytics inform decisions about loans, jobs, and even criminal sentencing. The danger is not just AI gaining agency. It is humans losing it.

Sophie’s counsel is clear. Freedom is not given. It is seized through awareness, through reflection, through questioning the story we are told.

Table 1: The Sophie/AI analogy and the questions for the IGF debates on AI

| Philosophical theme | Sophie’s experience | AI reality | Questions for the IGF |
|---|---|---|---|
| Identity | Discovers she is a character whose identity is constructed by her author, Albert Knag, for a specific purpose. Her self is an external narrative. | AI identity is defined functionally and externally: as a set of credentials, a reflection of its creators’ values and biases, or a tool for verification. It lacks an internal, subjective self. | Are we creating mere tools, or are we authoring ‘artificial identities’ whose lack of authentic selfhood poses a systemic risk? |
| Reality & existence | Realises her world is a fiction governed by the invisible, often flawed, rules of Knag’s mind. Bizarre events are clues to the artificiality of her reality. | AI operates as a ‘black box’. Its internal logic is opaque, leading to ‘hallucinations’ and unpredictable outputs that reveal an alien mode of processing reality. | How can we govern digital realities whose fundamental operating principles are opaque even to their creators? What does AI transparency mean? |
| Free will & determinism | Lives in a deterministic world scripted by Knag, yet successfully exerts a form of agency to ‘escape’ into a new, albeit limited, state of being. | AI is algorithmically determined, yet its emergent, adaptive behaviour can mimic autonomy and choice. This challenges the binary of free will vs. determinism, suggesting a compatibilist view. | What does ‘agency’ mean for a deterministic system? What about AI gaining – and us losing – ‘agency’? |
| Wonder & philosophy | Her journey is driven by a relentless sense of wonder and the courage to ask fundamental questions, the primary trait of a true philosopher. | The focus on AI is often purely technical or utilitarian, risking the loss of the foundational, philosophical ‘why’ questions about the worlds we are building. | As we architect the technical and legal frameworks for AI, how do we preserve the essential, humanising practice of philosophical wonder about the nature of what we are creating? |

A final climb toward wonder

In one of the book’s most enduring metaphors, Gaarder compares the universe to a white rabbit pulled from a magician’s hat. Most of us, he says, are fleas buried deep in the rabbit’s fur – comfortable, complacent, unaware. The philosopher is the flea who climbs to the tip of a hair and peers into the light.

In this time of uncertainty, AI governance should, like philosophy, involve more wonder: a refusal to take things as they are, and an insistence on asking not only what and how, but why.

As the IGF gathers in Oslo, let us remember that governance is not only about protocols and frameworks. It is about values. It is about meaning. It is about freedom of choice.

And sometimes, it starts with a girl in a quiet Norwegian suburb, asking two questions we should all ask a little more often:

Who are you? Where does the world come from?


  1. Sophie’s World Summary of Key Ideas and Review | Jostein Gaarder – Blinkist, https://www.blinkist.com/en/books/sophies-world-en;
     An Examination of the Philosophy in ‘Sophie’s World’ – Portfolio, https://laurenparece.com/writing/an-examination-of-the-philosophy-in-sophies-world;
     Sophie’s World: Full Book Summary – SparkNotes, https://www.sparknotes.com/lit/sophie/summary/
