As we gather in Oslo for the 20th Internet Governance Forum (IGF), we are not just participants in a global digital dialogue. We are visitors in a city of inquiry and exploration. Among other things, Oslo is the home of Sophie’s World, the 1991 novel by Jostein Gaarder[1] that has introduced generations to the history of philosophy through the eyes of a teenage girl, Sophie Amundsen.
The book opens with two simple, disarming questions: ‘Who are you?’ and ‘Where does the world come from?’ These are not just philosophical icebreakers. They are the foundation of human inquiry—questions that probe identity, existence, and the structure of reality. Today, as we confront the challenge of AI governance, we would do well to begin where Sophie did.
Sophie’s discovery – that she is a fictional construct inside a book, created by someone else – echoes the core dilemma of artificial intelligence. Like Sophie’s world, the AI systems we use are programmed, trained, and shaped by somebody else. The questions Sophie faced are the very questions we must ask about the AI systems we are now building.
Who are you? The crisis of constructed identity
Sophie’s journey begins with an exploration of self. At first, she is an ordinary Norwegian teenager. But as she delves into philosophy, she realises that her sense of self is an illusion. Her thoughts, choices, and even her environment are scripted by an author, Major Albert Knag, who created her as a birthday present for his daughter.
AI, too, is a product of construction. Its ‘identity’ is assembled from data, algorithms, and design choices. It does not possess consciousness or subjectivity. Its voice is a chorus of its training set. Yet, we increasingly interact with AI as if it were a subject rather than an object. Chatbots speak in the first person. Autonomous systems make decisions. The lines blur.
What Sophie teaches us is that constructed identity brings both power and risk. She rebels against the authorial control of Major Knag, seeking some form of selfhood, even if only symbolic. If AI systems begin to act in ways we did not predict, or if their behaviour begins to seem autonomous, are we prepared to ask if we – humans – are still their creators?
The black box and the fragile nature of reality
Sophie’s second realisation is more unsettling. It is not just her identity that is artificial; her entire world is fictional. Gaarder draws here on the philosophy of George Berkeley: reality as perception. For Sophie, nothing exists outside the mind of her creator.
In AI, we encounter this through the ‘black box’ problem. Deep learning systems process input in layers of computation that even their designers often cannot interpret. The output can be stunningly effective—and utterly opaque. We may know what the system says, but we don’t know why.
This lack of transparency undermines accountability and invites hallucination – AI confidently offering falsehoods as truth. It recalls Sophie’s world, where fairy tale characters wander into philosophical debates, and cause and effect begin to unravel. As anomalies mount, Sophie starts to see the cracks in her world and uses that knowledge to break free.
Explainable AI (XAI) is not just a technical challenge. It is a philosophical one. We must understand the internal logic of the systems we build. A world governed by systems we cannot interpret is a world teetering toward epistemological collapse.
Escaping the narrative: Free will in an algorithmic world
Sophie’s final act is escape. Knowing she is a character in a deterministic world, she engineers a way out – not through brute force, but by exploiting a moment of chaos when her creator’s control slips. Her new existence is ghostly, between worlds. But it is hers.
Can AI escape its programming? Not in the human sense. But as systems grow more complex and adaptive, their behaviour begins to resemble what philosophers call compatibilist free will: internally driven, even if externally determined. Sophie reminds us that agency can emerge within constraints. But she also warns us to ask: if AI systems begin to surprise us, do we treat that as a glitch or a signal of something new?
And perhaps more importantly: what about us? We live in algorithmic environments. Recommender systems shape our choices. Predictive analytics inform decisions about loans, jobs, and even criminal sentencing. The danger is not just AI gaining agency. It is humans losing it.
Sophie’s counsel is clear. Freedom is not given. It is seized through awareness, through reflection, through questioning the story we are told.
Table 1: The Sophie/AI analogy and the questions for the IGF debates on AI
| Philosophical theme | Sophie’s experience | AI reality | Questions for the IGF |
|---|---|---|---|
| Identity | Discovers she is a character whose identity is constructed by her author, Albert Knag, for a specific purpose. Her self is an external narrative. | AI identity is defined functionally and externally: as a set of credentials, a reflection of its creators’ values and biases, or a tool for verification. It lacks an internal, subjective self. | Are we creating mere tools, or are we authoring “artificial identities” whose lack of authentic selfhood poses a systemic risk? |
| Reality & existence | Realises her world is a fiction governed by the invisible, often flawed, rules of Knag’s mind. Bizarre events are clues to the artificiality of her reality. | AI operates as a “black box”. Its internal logic is opaque, leading to “hallucinations” and unpredictable outputs that reveal an alien mode of processing reality. | How can we govern digital realities whose fundamental operating principles are opaque even to their creators? What does AI transparency mean? |
| Free will & determinism | Lives in a deterministic world scripted by Knag, yet successfully exerts a form of agency to “escape” into a new, albeit limited, state of being. | AI is algorithmically determined, yet its emergent, adaptive behaviour can mimic autonomy and choice. This challenges the binary of free will vs. determinism, suggesting a compatibilist view. | What does ‘agency’ mean for a deterministic system? And what about AI gaining ‘agency’ while we lose it? |
| Wonder & philosophy | Her journey is driven by a relentless sense of wonder and the courage to ask fundamental questions, the primary trait of a true philosopher. | The focus on AI is often purely technical or utilitarian, risking the loss of the foundational, philosophical “why” questions about the worlds we are building. | As we architect the technical and legal frameworks for AI, how do we preserve the essential, humanising practice of philosophical wonder about the nature of what we are creating? |
A final climb toward wonder
In one of the book’s most enduring metaphors, Gaarder compares the universe to a white rabbit pulled from a magician’s hat. Most of us, he says, are fleas buried deep in the rabbit’s fur – comfortable, complacent, unaware. The philosopher is the flea who climbs to the tip of a hair and peers into the light.
In this time of uncertainty, AI governance should, like philosophy, involve more wondering: a refusal to take things as they are, and an insistence on asking not only what and how, but why.
As the IGF gathers in Oslo, let us remember that governance is not only about protocols and frameworks. It is about values. It is about meaning. It is about freedom of choice.
And sometimes, it starts with a girl in a quiet Norwegian suburb, asking two questions we should all ask a little more often:
Who are you? Where does the world come from?
[1] Sophie’s World Summary of Key Ideas and Review | Jostein Gaarder – Blinkist, https://www.blinkist.com/en/books/sophies-world-en
[2] An Examination of the Philosophy in ‘Sophie’s World’ – Portfolio, https://laurenparece.com/writing/an-examination-of-the-philosophy-in-sophies-world
[3] Sophie’s World: Full Book Summary – SparkNotes, https://www.sparknotes.com/lit/sophie/summary/