Four seasons of AI: From excitement to clarity in the first year of ChatGPT

Published on 30 November 2023
Updated on 10 April 2024

Winter of excitement | Spring of metaphors | Summer of reflections | Autumn of clarity 

ChatGPT was launched by OpenAI on the last day of November in 2022. It triggered a lot of excitement. We were immersed in the magic of a new tool as AI was writing poems and drawing images for us. Over the last 12 months, the winter of AI excitement was followed by a spring of metaphors, a summer of reflections, and the current autumn of clarity.

On the first anniversary of ChatGPT, it is time to step back, reflect, and see what is ahead of us.

Winter of Excitement 

In terms of user adoption, ChatGPT was the most impressive success in the history of technology. In only 5 days, it acquired 1 million users; Instagram, by comparison, needed 75 days to reach the same milestone. In only two months, ChatGPT reached an estimated 100 million users.


The launch of ChatGPT last year was the result of countless developments in AI dating all the way back to 1956, when the term ‘artificial intelligence’ was coined at the Dartmouth workshop. These developments accelerated over the last 10 years with probabilistic AI, big data, and dramatically increased computational power. Neural networks, machine learning (ML), and large language models (LLMs) set the stage for AI’s latest phase, which brought tools like Siri and Alexa and, most recently, generative pre-trained transformers, better known as GPTs, the technology behind ChatGPT and other recent tools.

ChatGPT started mimicking human intelligence by drafting our texts for us, answering questions, and creating images. 

Spring of Metaphors

The powerful features of ChatGPT triggered a wave of metaphors in the spring of this year. We humans, whenever we encounter something new, use metaphors and analogies to relate it to what we already know.

AI is typically anthropomorphised and described as a human brain that ‘thinks’ and ‘learns’. ‘Pandora’s box’ and ‘black box’ are terms used to describe the complexity of neural networks. As spring advanced, more fear-based metaphors took over, centred around doomsday, Frankenstein, and Armageddon.

As discussions on governing AI gained momentum, analogies were drawn to climate change, nuclear weapons, and scientific cooperation. All of these analogies highlight similarities while ignoring differences. 

Summer of Reflections


Summer was relatively quiet, and it was a time to reflect on AI. Personally, I dusted off my old philosophy and history books in search of old wisdom for current AI challenges, which go far beyond simple technological solutions.

Under the series ‘Recycling Ideas’, I dove back into ancient philosophy, religious traditions, and different cultural contexts, from Ancient Greece and Confucius to India and the African concept of Ubuntu, among others.

Autumn of Clarity

Clarity pushed out hype as AI increasingly made its way onto the agendas of national parliaments and international organisations. Precise legal and policy formulations have replaced the metaphorical descriptions of AI. In numerous policy documents of various groupings (G7, G20, G77, and the UN), the usual balance between opportunities and threats has shifted more towards risks.

Some processes, like the UK AI Safety Summit at Bletchley Park, focused on the long-term existential risks of AI. Others gave more ‘weight’ to the immediate risks of AI (re)shaping our work, education, and public communication. As inspiration for governance, many proposals mentioned the International Atomic Energy Agency (IAEA), CERN, and the Intergovernmental Panel on Climate Change (IPCC).

A year has passed: What’s next?

AI will continue to spread through our social fabric, from individual choices and family dynamics to jobs and education. As the structural relevance of AI increases, its governance will require even more clarity and transparency. As the next step, we should focus on the two main issues at hand: how to address AI risks and what aspects of AI should be governed.

How to address AI risks  

There are three main types of AI risks that should shape AI regulations: 

  • the immediate and short-term ‘known knowns’
  • the looming and mid-term ‘known unknowns’
  • and the long-term yet disquieting ‘unknown unknowns’

Unfortunately, it is currently these long-term ‘extinction’ risks that tend to dominate public debates.

AI risks Venn diagram

Short-term risks: These include job losses, the protection of data and intellectual property, the loss of human agency, the mass generation of fake texts, videos, and sounds, the misuse of AI in education, and new cybersecurity threats. We are familiar with most of these risks; existing regulatory tools can often address them, but more concerted efforts are needed.

Mid-term risks: We can see them coming, but we aren’t quite sure how profound their effects could be. Imagine a future in which a few big companies control all AI knowledge and tools, just as tech platforms currently control the data they have amassed about people over the years. Such AI power could let them control our businesses, lives, and politics. If we don’t figure out how to deal with such monopolies in the coming years, they could bring humanity to a deeply dystopian future within only a decade. Some policy and regulatory tools can help deal with AI monopolies, such as antitrust and competition regulations, as well as data and intellectual property protection.

Long-term risks: The scary sci-fi stuff, or the unknown unknowns. These are the existential threats, the extinction risks in which AI could evolve from servant to master, jeopardising humanity’s very survival. After the intensive doomsday propaganda of 2023, these threats haunt the collective psyche and dominate the global narrative, with analogies to nuclear Armageddon, pandemics, and climate cataclysms.

The dominance of long-term risks in the media has influenced policymaking. For example, the Bletchley Declaration, adopted at the UK AI Safety Summit, focuses heavily on long-term risks, mentions short-term ones only in passing, and makes no reference to medium-term risks.

The AI governance debate ahead of us will require: (a) addressing all risks comprehensively, and (b) ensuring that, whenever risks must be prioritised, decisions are made in transparent and informed ways.

Dealing with risks is nothing new for humanity, even if AI risks are new. The environmental and climate fields offer a whole spectrum of regulatory tools and approaches, such as precautionary principles, scenario building, and regulatory sandboxes. The key is that AI risks require transparent trade-offs and constant revisiting in light of technological developments and society’s responses.

What aspects of AI should be governed? 

In addition to AI risks, the other important question is: What aspects of AI should be governed? As the AI governance pyramid below illustrates, AI developments relate to four levels: computation, data, algorithms, and uses. Selecting where and how to govern AI has far-reaching consequences for AI and society.

The AI governance pyramid

Computation level: The main question here is access to the powerful hardware that runs AI models. In the race for computational power, the two key players, the USA and China, try to limit each other’s access to semiconductors that can be used for AI. A key actor is Nvidia, which manufactures the graphics processing units (GPUs) critical for running AI models. With the support of other advanced economies, the USA has an advantage over China in semiconductors, which it tries to preserve by limiting access to these technologies via sanctions and other restriction mechanisms.

Data level: This is where AI gets its main inputs; data is sometimes called the ‘oil’ of the AI industry. However, in current AI debates, the protection of data and intellectual property is not as prominent as the regulation of AI algorithms. There are more and more calls for clarity on what data and inputs are used. Artists, writers, and academics are checking whether AI platforms have built their fortunes on their intellectual work. Thus, AI regulators should put much more pressure on AI companies to be transparent about the data and intellectual property used to develop their models.

Algorithmic level: Most of the AI governance debate is about algorithms and AI models, and it mainly focuses on the long-term risks that AI could pose to humanity. On a more practical level, the discussion centres on the role of ‘weights’ in developing AI models: the numerical parameters that determine how much relevance particular input data and knowledge have in generating AI responses (see the toy sketch below). Those who highlight security risks also argue for centralised control of AI development, preferably by a few tech companies, and for restricting the open-source approach to AI.
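Since ‘weights’ can sound abstract, here is a minimal toy sketch in Python, a hypothetical scorer rather than the code of any real model, showing how weights turn input signals into a probability distribution over possible next words, and how changing a single weight changes the answer. The vocabulary, features, and numbers are invented purely for illustration.

```python
import math

# Toy illustration (not any real model's code): an AI model stores its
# 'knowledge' as numeric weights. The same input produces a different
# output when the weights change.
VOCAB = ["sunny", "rainy", "cloudy"]

def next_word_probs(features, weights):
    """Score each candidate word as a weighted sum of input features,
    then turn the scores into probabilities with a softmax."""
    scores = [sum(f * w for f, w in zip(features, row)) for row in weights]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return dict(zip(VOCAB, (e / total for e in exps)))

features = [1.0, 0.2]       # crude signals extracted from a prompt (hypothetical)
weights = [[2.0, 0.1],      # weight row for "sunny"
           [0.5, 1.5],      # weight row for "rainy"
           [0.8, 0.8]]      # weight row for "cloudy"

print(next_word_probs(features, weights))   # "sunny" dominates
weights[1][0] = 3.0                         # nudge a single weight...
print(next_word_probs(features, weights))   # ...and "rainy" takes over
```

Real LLMs perform this kind of weighted scoring with billions of learned weights, which is why who controls, publishes, or restricts those weights features so prominently in the governance debate.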

Apps and tools level: This is the most appropriate level for regulating technology. For a long time, the main focus of internet governance was on the level of use, while regulatory intervention in how the internet functions, from standards to the operation of internet infrastructure (such as internet protocol numbers and the domain name system), was avoided. This approach was one of the main contributors to the internet’s fast growth. Thus, current calls to shift regulation to the algorithm level (under the bonnet of technology) could have far-reaching consequences for technological progress.

Current debates on AI governance focus on at least one of these layers. For example, at the core of the last mile of negotiations on the EU’s AI Act is the question of whether AI should be governed at the algorithm level or only once algorithms become apps and tools. The prevailing view is that it should be done at the top of the pyramid: apps and tools.

Interestingly, most supporters of governing AI codes and algorithms, often described as ‘doomsayers’ or ‘longtermists’, rarely mention governing AI apps and tools or their data aspects. Both areas, data and the use of AI, are already subject to more detailed regulation, which is often not in the interest of tech companies.


x x x

On the occasion of ChatGPT’s very first birthday, the need for clarity in AI governance prevails. It is important that this trend continues, as we need to make complex trade-offs between short-, medium-, and long-term risks.

At Diplo, our focus is on anchoring AI in the core values of humanity through our humAInism project and community. In this context, we will concentrate on building the awareness of citizens and policymakers, who need to understand AI’s basic technological functionality without going into complex terminology. Most of AI is about patterns and probability, as we recently discussed with diplomats while explaining AI via patterns of colours in national flags.
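To make the ‘patterns and probability’ point concrete, here is a minimal sketch in the spirit of that flags exercise. The dataset is a tiny, hand-picked set of simplified flag colours, so the numbers are purely illustrative: the ‘model’ does nothing more than count colour patterns and turn the counts into probabilities.

```python
from collections import Counter

# A toy, hand-picked dataset (colours deliberately simplified) echoing the
# flags exercise: 'learning' here is just counting patterns and turning
# the counts into probabilities.
FLAGS = {
    "France":  {"blue", "white", "red"},
    "Italy":   {"green", "white", "red"},
    "Ireland": {"green", "white", "orange"},
    "Ukraine": {"blue", "yellow"},
    "Japan":   {"white", "red"},
}

colour_counts = Counter(c for colours in FLAGS.values() for c in colours)
total_flags = len(FLAGS)

for colour, count in colour_counts.most_common():
    print(f"P(a flag contains {colour}) = {count}/{total_flags} = {count / total_flags:.2f}")
```

Scaled up from five flags to trillions of words, counting patterns and estimating probabilities is, at its core, what an LLM does when it predicts the next word.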

Why not join us in working for an informed, inclusive, meaningful, and impactful debate on AI governance?
