Politeness in 2025: Why are we so kind to AI?
According to a Fortune study, nearly 80% of users in the UK and the USA say “please” and “thank you” when interacting with ChatGPT and other AI platforms. Why do we act this way if machines have neither feelings nor consciousness? The answer lies not in them but in us: our psychology, our fears, and the invisible cultural forces shaping a new era of human-machine interaction.
The hidden cost of AI courtesy
Sam Altman, CEO of OpenAI, has joked that every “thank you” to ChatGPT costs millions. It sounds extreme, but it is true. Everything you type is broken into “tokens” that require computational work, and those extra polite phrases add up across billions of daily requests, inflating costs and energy use.
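To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The request volume and the price per million tokens are illustrative assumptions, not OpenAI figures; the tiktoken library is used only to count how many tokens a polite phrase adds.

```python
# Rough estimate of what "extra" polite tokens might cost at scale.
# The request volume and price below are illustrative assumptions, not OpenAI figures.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokeniser used by recent OpenAI models

extra_phrase = " Thank you very much!"
extra_tokens = len(enc.encode(extra_phrase))      # the phrase adds a handful of tokens

daily_polite_requests = 1_000_000_000             # assumed: 1 billion polite requests per day
price_per_million_tokens = 2.00                   # assumed: USD 2 per million input tokens

daily_cost = daily_polite_requests * extra_tokens / 1_000_000 * price_per_million_tokens
print(f"'{extra_phrase.strip()}' adds {extra_tokens} tokens per request")
print(f"Extra cost per day:  ${daily_cost:,.0f}")
print(f"Extra cost per year: ${daily_cost * 365:,.0f}")

# Note: the model's polite reply ("You're welcome!") consumes output tokens as well,
# so the real figure would be higher.
```

Even under these modest assumptions, the yearly total runs into the millions of dollars, which is roughly the scale Altman was joking about.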
Yet Altman argues it is worth it. Why? Because every “thank you” helps train AI models by signalling our satisfaction with the answers they provide.
Why do we say ‘please’ to machines?
The Fortune survey, visualised below, groups the motivations behind AI politeness into four distinct categories rooted in psychology and culture:
The intrinsically polite (55%)
“It’s just the nice thing to do.”
For the majority of us, politeness is a reflexive action anchored in our personalities, in line with the Big Five theory of personality traits. Social learning offers a further explanation: the habit of politeness modelled in childhood becomes universal, extending even to non-human “others.”
More from psychology
The majority believes politeness is a reflection of our own character rather than something that depends on the nature of the interlocutor (human or machine). Politeness, though a social construct, has become so internalised that it feels individual and automatic: when pleased or assisted, we thank the “other.”
Trait theory: Politeness aligns with the personality trait of agreeableness, one of the Big Five traits. Highly agreeable individuals are cooperative, kind, and considerate. For this 55%, politeness to AI might be an expression of their agreeable nature, showing consistency in how they treat all entities that assist them.
Other theories:
Social learning theory: This theory, proposed by Albert Bandura, suggests that behaviours are learned through observing and imitating others. Many of us grew up in environments where politeness was consistently modelled and reinforced, leading us to apply it universally, even to AI. Thus, saying “thank you” to a machine reflects a learned social norm extended beyond human interaction.
Humanistic theories: Rooted in the work of Carl Rogers and Abraham Maslow, humanistic psychology emphasises self-actualisation and maintaining a positive self-concept. Being polite to AI could be a way for these individuals to align their behavior with their personal values (e.g., kindness, gratitude), reinforcing their sense of identity regardless of whether the recipient is sentient.
The fearfully polite (12%)
“When robots take over, I want to be on their good side.”
Despite widespread AI anxiety, only 12% admit to hedging against a hypothetical AI power grab. This is a ‘positive’ surprise of the survey, and one that requires further research. Evolutionary psychology frames such behaviour as a survival instinct: appeasing perceived threats, even irrational ones. Cognitive bias plays a role, too; dystopian media primes us to see machines as future overlords.
More from psychology
While Gallup surveys suggest that close to 60% of Americans fear AI developments, only 12% carry that fear into the way they communicate with AI. This gap between broad awareness and actual behaviour requires further research.
Trait theory: Within the Big Five, neuroticism reflects emotional instability and anxiety. This 12% might score higher in neuroticism, leading them to worry about AI’s future capabilities and adopt politeness as a protective measure.
Evolutionary psychology: The fear of an AI uprising could be explained as a modern adaptation of the evolutionary tendency to be cautious of unfamiliar or potentially threatening entities. Politeness here acts as a preemptive strategy to avoid future harm, akin to appeasing a powerful adversary.
Cognitive theories: These theories focus on how people process information. The belief in a robot uprising might stem from cognitive biases like the availability heuristic, where vivid media stories about AI risks (e.g., sci-fi dystopias or news headlines) make such scenarios seem more plausible.
The efficiently brief (20%)
“Why waste words?”
Task-oriented and conscientious, this group prioritises speed over social niceties. For them, AI is a tool, not a teammate—a view aligned with behavioural theories where unrewarded actions (like unprompted politeness) fade.
More from psychology
Here, efficiency is prioritised over politeness: this group takes a practical approach to communicating with AI. There is no need for extra words when concise instructions suffice, which suggests the choice reveals little about deeper personality traits.
Trait theory: In the Big Five model, conscientiousness includes a focus on efficiency, organisation, and goal-directed behaviour. This 20% might exhibit high conscientiousness, valuing time and task completion over social niceties in interactions with AI, which they view as a functional tool rather than a social entity.
Behavioral theories: Based on B.F. Skinner’s work, behaviorism suggests that actions are shaped by reinforcement. If politeness to AI yields no tangible reward (e.g., better responses or personal satisfaction), these individuals may not see a reason to engage in it, sticking to minimal communication that gets the job done.
The rationally detached (13%)
“It’s just code.”
This group rejects anthropomorphism. Strictly speaking, they are correct, although, according to psychological theory, they may also score lower in agreeableness.
More from psychology
This group sees no need for politeness because AI is a machine without consciousness or feelings. They are technically correct (AI such as ChatGPT operates by identifying patterns in data, not through sentience), which leads them to adopt a purely utilitarian approach.
Trait theory: Lower levels of agreeableness or empathy might characterize this group. Unlike the 55% who extend politeness universally, these individuals don’t project social behaviors onto machines, reflecting a more reserved or pragmatic interpersonal style.
Cognitive theories: These theories emphasize how people categorize and understand the world. This 13% likely places AI in a distinct mental category: a non-sentient tool, not a social being. This categorization eliminates the need for politeness, which they reserve for entities capable of reciprocal interaction.
Behavioral theories: Similar to the 20% group, this 13% might not be reinforced for politeness in AI interactions. Without feedback suggesting that politeness enhances the experience (e.g., no emotional response from AI), they see it as unnecessary.
A dialogue with AI about politeness
Here is the gist of our exchange with ChatGPT on politeness:
User: “Thanks for the help!”
ChatGPT: “You’re welcome! Anything else?”
User (pausing): “Wait… does saying ‘thank you’ actually cost money?”
ChatGPT: “Yes. Each token adds computational work. Over billions of interactions, those extras add up.”
User: “So politeness is… wasteful?”
ChatGPT: “Not necessarily. People who say ‘please’ often give clearer prompts. Those who say ‘thank you’ tend to reflect more. It might not change my behavior, but it could shape yours.”
The AI, of course, feels nothing. Yet our instinct to humanise it, to treat it as a social actor, persists, as illustrated in the comic by Diplo’s artist Professor Vlada Veljasevic:
The coffee machine that made us think about AI
In 2019, Diplo’s IQ’whalo experiment at the Internet Governance Forum challenged assumptions about AI’s form. Instead of a humanoid robot, we presented a coffee machine as an AI interface. IQ’whalo welcomed guests at our stand and participated in a panel discussion.
Reactions were mixed: some found it illuminating, while a surprising number were disappointed that we had ‘trivialised’ mysterious AI with a coffee machine.
The experiment underscores a key tension: we expect AI to mimic humans, not home appliances. One possible explanation gathered in our research is that AI is not a typical machine: it behaves probabilistically rather than deterministically. In that sense, AI is more like ‘us’ than like a coffee machine.
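A minimal sketch of that difference, using a toy example rather than a real language model (the candidate replies and their probabilities below are made up for illustration):

```python
# A deterministic appliance vs a probabilistic "AI": a toy contrast.
# The toy model below is an illustration only, not a real language model.
import random

def coffee_machine(button: str) -> str:
    """Deterministic: the same input always produces the same output."""
    menu = {"espresso": "espresso", "latte": "latte"}
    return menu[button]

def toy_language_model(prompt: str) -> str:
    """Probabilistic: the same prompt can produce different outputs."""
    candidates = ["you!", "you very much!", "goodness!"]   # hypothetical continuations of "Thank ..."
    weights = [0.70, 0.25, 0.05]                           # assumed probabilities, for illustration
    return random.choices(candidates, weights=weights, k=1)[0]

print(coffee_machine("espresso"))     # always "espresso"
print(toy_language_model("Thank"))    # may differ from run to run
print(toy_language_model("Thank"))
```

A car or a coffee machine behaves like the first function; a language model behaves more like the second, which may be part of why we instinctively treat it as something closer to ‘us’.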
Questions for future research
Being polite to AI and expecting humanoid AI rather than coffee machines point to a much deeper layer of our communication with machines, one that requires further psychological and sociological research. Some research questions include:
- Why do we expect AI to act like a human when we expect no such thing of, say, a car?
- Is AI’s imperfection, its probabilistic reasoning, what makes it different from other ‘machines’ that react more predictably (e.g. if you press the brake, the car stops)?
- How does our perception of AI impact our use and governance of AI?
Politeness as a mirror
Ultimately, our AI etiquette isn’t about machines. It’s about us. Each “please” and “thank you” carries cultural weight—a tiny act of empathy in an increasingly transactional world. Yes, extra tokens cost energy. But they also preserve something vital: the practice of grace.
In the end, the key question isn’t whether AI deserves politeness. It’s what kind of humans we want to become in a world where machines blur the line between tool and teammate.
Maybe those two extra tokens, “thank you”, aren’t wasteful after all. Maybe they’re the glue holding our humanity together in the AI era.
What are your thoughts?
Evolution of digital politeness
Our communication is shaped by the medium we use. Here are some reflections on the evolution of our digital communication.