English is the undisputed lingua franca of AI. Machine learning, neural networks, AI agents, and other core concepts are conceptualised and deployed in English. Yet English's power and practicality as a medium for AI are turning into a problem amid a hype-driven ‘AI language tsunami’: the more we talk about AI, the less meaning our words seem to carry. Like any inflation, this inflation of words has devalued the language we use to deal with AI.
A rescue comes from an unexpected place: our native languages. The AI lingo bubble can be deflated through the mother tongue we learned before we ever touched digital technology. It can help us address AI with common sense, critical thinking, and sharp insight.
The power of our inner language
Native language is our inner language, the very scaffolding of our thoughts, as argued by the famous linguist Lev Vygotsky1Ben Alderson-Day and Charles Fernyhough, “Inner Speech: Development, Cognitive Functions and Link to Psychopathology,” Frontiers in Psychology 6 (2015): Article 1033, accessed July 20, 2025, LINK
and supported by the Sapir-Whorf hypothesis.2Green, R. (2023, November). The Sapir‑Whorf Hypothesis: How Language Influences How We Express Ourselves. Verywell Mind. LINK
Let me start with myself. I developed my cognitive apparatus to understand nature and social relations through Serbian. Our ‘inner language’ is not fixed; if we live abroad, we can experience moments of catching ourselves thinking in some other language. For example, I think about digital and AI issues in English, the language through which I grasped the core concepts in this field.
In the last few years of AI acceleration, I have noticed a fast-widening gap between the words used to explain AI and the reality of AI, especially its non-technical aspects. My search for ways to use native language to understand and teach AI was aided by observing the Maltese language.
The Maltese language is layered like geological sediment.3Nieder, J., & Tomaschek, F. (2023). Maltese as a merger of two worlds: A cross‑language approach to phonotactic classification. PLoS ONE, 18(4), e0284534. https://doi.org/10.1371/journal.pone.0284534 Its foundation is Semitic, established during Arab rule (870-1091 AD), and includes its core grammar, kinship terms, body parts, and basic verbs. After the Norman conquest in 1091, it was heavily exposed to Italian (specifically Sicilian), which shaped its food, culture, and abstract terminology. The latest layer is English, introduced in the 1800s, influencing the language of technology, science, and governance. In their daily lives, the Maltese fluidly switch between these three layers, which inherently impacts how they cognitively frame a problem.4Ellen Bialystok and Gregory Poarch, “Language Experience Changes Language and Cognitive Ability,” Zeitschrift für Erziehungswissenschaft 17, no. 3 (2014): 433–446, accessed July 20, 2025, LINK
This linguistic multilayering inspired me to peel back the layers of jargon and convention that obscure my understanding of AI. I have tried to uncover AI’s more profound, non-technical meaning for years by using Serbian as an anchor in common-sense thinking.
It hasn’t been easy. The intuitive reaction is to revert to English. Some find it impractical, especially when developing applications and already “thinking in English.” In a pragmatic but misguided shortcut, others simply use ChatGPT to translate English terms into Serbian, which misses the point of the exercise entirely. But here and there, I have had successes that have been truly illuminating.
Getting to the core meaning
Using non-English languages, we can strip away the trappings of ‘AI lingo’, the professional shorthand that evolves within any fast-growing industry. Let’s take the example of Contextual Engineering (CE), one of the latest trends in the AI domain.5Tocxten. (2025, June 28). Context engineering: The emerging trend in artificial intelligence. Tocxten. LINK
First, consider the dense, professional definition:
Contextual Engineering (CE) employs structured Data Units (DUs) within a Retrieval-Augmented Generation (RAG) pipeline to dynamically aggregate semantically relevant information. This is achieved by interfacing with external Knowledge Bases (KBs) via the Model Context Protocol (MCP), enabling federated access to heterogeneous data sources (e.g., APIs, SQL/NoSQL DBs, or document stores). The MCP facilitates secure, low-latency data ingestion through standardised connectors (REST/gRPC) while implementing QoS policies for bandwidth optimisation. A Large Language Model (LLM) performs subsequent semantic synthesis using transformer-based NLP to generate context-aware outputs. The E2E process leverages ANN search for vector similarity matching within the RAG framework, ensuring high-recall information retrieval before LLM inference.
Now, here is how I explained it in Serbian:
Kontekstualno inženjerstvo je razumevanje pasusa ili rečenice u kontekstu dužeg teksta i drugih relevantnih podataka. Na primer, veliki jezički model (veštačka inteligencija) može pružiti odgovor na naša pitanja tako što mu pošaljemo pitanje, osnovni pasus, širi tekst i druge relevantne podatke.
And here is that simple explanation translated from Serbian back into English:
Contextual Engineering (CE) is about understanding one paragraph (data unit) in the context of a longer text (RAG) and other relevant data (KBs). This paragraph, broader text, and other pertinent data can be collected and sent to an LLM with a request (prompt) to generate answers to our questions.
This shift to a native language stripped away the heavy terminology and led to a common-sense answer that can pass the Feynman test of explaining a complex problem to a child.
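The stripped-down explanation can even be sketched in a few lines of code. The toy example below is purely illustrative (the function names and sample data are my own, and simple word overlap stands in for the vector-similarity search a real RAG pipeline would use): it picks the paragraphs most relevant to a question and assembles them, together with the question, into a prompt for an LLM.

```python
# Toy sketch of "a paragraph in the context of a longer text":
# rank stored paragraphs by word overlap with the question, then
# join the best matches with the question into a single prompt.

def score(question: str, paragraph: str) -> int:
    """Count words shared between the question and a paragraph."""
    return len(set(question.lower().split()) & set(paragraph.lower().split()))

def build_context(question: str, knowledge_base: list[str], top_k: int = 2) -> str:
    """Pick the top_k most relevant paragraphs and build an LLM prompt."""
    ranked = sorted(knowledge_base, key=lambda p: score(question, p), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Context:\n{context}\n\nQuestion: {question}"

kb = [
    "Malta was under Arab rule from 870 to 1091 AD.",
    "Python is a popular programming language.",
    "Maltese grammar has a Semitic foundation.",
]
prompt = build_context("When was Malta under Arab rule?", kb)
print(prompt)
```

In a real system, the word-overlap scoring would be replaced by embedding similarity and the printed prompt would be sent to an LLM, but the common-sense shape of the process is exactly the one in the Serbian explanation above.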
The risks of professional ‘turf language’
AI lingo is not a new phenomenon. Professions develop their languages to optimise communication. Acronyms and established cognitive frames make communication among insiders easier and faster. However, this carries the significant risk of becoming a tool for turf protection.
I have seen this repeatedly, from established fields to emerging ones such as internet governance. In International Geneva, you find dozens of these ‘turf languages’—for trade, climate change, human rights, health, and more. Each community coalesces around roughly 50 key acronyms, terms, and document references that mark out the boundaries of its professional turf.
The risk of camouflaging AI behind ‘turf language’ is particularly acute, as AI impacts the very basics of social life, jobs, and human wellbeing.
Tension between deterministic software development and probabilistic AI
In addition to the loss of meaning caused by inflated AI language, there is another, deeper tension: between the deterministic language of programming and the probabilistic nature of AI. Software is written as step-by-step procedures. AI is a ‘guessing’ machine built around probable, not specified, outcomes. Dealing with AI therefore requires a different set of skills, including a mix of intuition and common sense.
The in-built tension is between, on the one hand, the need to develop AI using deterministic programming logic—setting up servers, generating vector databases, writing Python to create agents—and, on the other, the different cognitive skills needed to navigate the probabilistic nature of AI in a specific knowledge domain, be it management, diplomacy, or medical science.
In this interplay between the determinism of computer programming and the probabilism of AI, our native language can help us access the ‘inner language’ of our understanding. By forcing ourselves to articulate complex AI concepts in a mother tongue less polluted with technical jargon, we engage in a different cognitive process. We are scaffolding our thoughts about AI differently, just as Vygotsky suggested.
Practical next steps
Explain concepts in your native language first. Describe what a large language model does (predicts the next word based on context) or what “retrieval‑augmented generation” means (finding relevant documents before generating an answer). Use metaphors that resonate within your culture.
Develop an AI team with native speakers of your language. Discussing AI with others in your native language will increase the sharpness and depth of your thinking about AI. Language is a social medium, enriched through exchange.
Alternate between languages when reading or teaching AI. This can enhance executive control and help identify gaps in understanding.
Be aware of professional jargon and challenge it. Ask whether terms like “context engineering” hide simple ideas.
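For the first step, a toy illustration may help. “Predicts the next word based on context” can be reduced to its simplest form: counting, in a small text, which word most often follows the current one. This bigram sketch (the sample text and names are illustrative, and a real LLM works over far longer contexts with billions of learned weights) shows the core idea in plain terms:

```python
from collections import Counter, defaultdict

# A crude next-word predictor: count which word most often
# follows each word in a small training text.
text = ("the cat sat on the mat the cat ate the fish "
        "the dog sat on the rug").split()

follows = defaultdict(Counter)
for current, nxt in zip(text, text[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent word observed after `word`."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat" (the most common follower of "the")
```

Explaining this counting picture in your native language first, and only then layering on terms like ‘transformer’ or ‘attention’, is exactly the kind of jargon-stripping exercise the steps above describe.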
And what if your native language is English? You may have a slight disadvantage, but you can still follow the same logic. The goal is to develop a new kind of literacy for the AI era by consciously bypassing the clichés, technical oversimplifications, and AI hype.