What can Socrates teach us about AI and prompting?

Published on 01 August 2023
Updated on 10 April 2024

The art of asking questions in the AI era

Socrates was one of the first knowledge prompters. On the streets of Athens, 25 centuries ago, Socrates ‘prompted’ citizens to search for truth and answers, which is, in essence, what we try to do with AI prompting using tools like ChatGPT.

This text explores the interaction between ancient thought and the latest AI developments, offering historical context, philosophical inquiry, and practical guidance based on Diplo’s holistic research and teaching on AI, as illustrated below.

[Diagram: Diplo’s holistic approach to AI research and teaching]

What is AI prompting?

AI prompting is the way we communicate with AI systems such as ChatGPT and Bard. Typically, prompts are questions we ask AI platforms, but they can also be software code or complex instructions.
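For illustration, here is a minimal sketch of programmatic prompting using OpenAI’s Python client; the model name and client setup are assumptions for the example, and any comparable AI platform and API would do:

    # A minimal sketch of programmatic AI prompting.
    # Assumes the `openai` package is installed and the
    # OPENAI_API_KEY environment variable is set.
    from openai import OpenAI

    client = OpenAI()

    # The prompt here is a simple question, but it could equally be
    # code, structured instructions, or a long list of constraints.
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative choice; any chat model works
        messages=[{"role": "user", "content": "What is Socratic inquiry?"}],
    )
    print(response.choices[0].message.content)

The same call, with a different prompt, covers everything from one-line questions to elaborate, multi-part instructions.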


AI prompting is like ‘informed guessing’. We can improve it through practice, gaining experience and expertise over time. As in human conversation, we gradually internalise the ‘wisdom’ of the system and its limits. This conversational aspect inspired me to draw an analogy with the Socratic method of questioning.

Terminological confusion

In this new field of interacting with AI platforms, there is also terminological confusion. Prompt engineering is the most frequently used description, probably inspired by the engineering nature of AI platforms. Yet, ‘prompt engineering’ doesn’t exactly follow the traditional engineering process, which involves a series of steps that, if repeated, yield the same result every time.

Other terms are used as well: prompter, prompt designer, prompt manager, promptologist, promptmancer…

What is Socratic inquiry?

Socratic inquiry is a method of questioning and examining beliefs, ideas, and values named after the ancient Greek philosopher Socrates. He asked probing questions to challenge his interlocutors’ assumptions and reveal contradictions or inconsistencies in their views. Through these dialogues, Socrates aimed to help people discover the truth for themselves. 

The method has been used for centuries in education, coaching, counselling, and philosophical discussions. The Socratic method can be a powerful tool for promoting active learning, fostering intellectual curiosity, and encouraging self-discovery.

Socratic questioning can gain new relevance through AI prompting. It can help us have more valuable interactions with AI technologies such as ChatGPT and ensure that AI is deeply anchored in the profound and unique realm of human creativity.

What connects Socratic questioning and AI prompting?

By using elements of the Socratic method in AI systems, we can enhance the quality and effectiveness of AI-generated answers. AI prompting can also aid in fostering our critical thinking skills and deepening our understanding of complex topics. Here are a few areas where this interplay between Socratic questioning and AI prompting can be useful:

  • Assumptions and beliefs shape our thinking. Socratic questioning aims to help us uncover these deeper layers, of which we are not always aware. AI prompts can similarly encourage users to examine their assumptions and biases, prompting them to consider alternative viewpoints and challenge their preconceived notions.
  • Thought processes guide our thinking as well. Socrates revealed his thought process through dialogue. In the same way, AI can foster conversation: via prompting and questioning, it can guide our thinking on complex subjects towards more coherent and reasoned responses.
  • Feedback and probing were how Socrates built his inquiry. In the same way, AI can analyse user responses and offer follow-up questions or additional information to deepen the conversation and address any misconceptions (see the sketch after this list).
  • Active learning as a path to self-discovery was one of Socrates’s main aims. Ultimately, his interlocutors were supposed to raise their level of thinking by actively participating in the learning process. AI can simulate the same process by acting as our interlocutor as we build our knowledge and skills. Paraphrasing an old Chinese saying, AI can help us learn ‘how to fish’ (think) instead of just feeding us fish (providing information).
  • Customisation was how Socrates adjusted his dialogues to individual interlocutors and their ways of thinking. AI systems with adaptive algorithms can similarly be tailored to users’ knowledge levels and preferences.
  • Ethical improvement was one of Socrates’s aims, as he showed that our thoughts and actions have ethical implications. Our choices may be free, but their impact on others is a matter of ethics. AI and ethics are two of the critical themes of our time, and AI prompting can bring ethical considerations into discussions on various topics.
  • Continuous improvement underpins both the Socratic method and AI development. Socrates kept refining his methods and approaches; AI learns from user feedback and conversation through reinforcement learning and other techniques.
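To make the parallel concrete, the sketch below shows one way to steer a chat model towards Socratic behaviour through its system prompt. It is a minimal sketch, not a tested recipe: the wording of the instruction and the model name are illustrative assumptions.

    # A sketch of a 'Socratic tutor' loop: the system prompt instructs the
    # model to probe with questions rather than give direct answers.
    # Assumes the `openai` package and an OPENAI_API_KEY are available.
    from openai import OpenAI

    client = OpenAI()

    SOCRATIC_SYSTEM_PROMPT = (
        "You are a Socratic tutor. Never give the answer directly. "
        "Instead, ask one probing question at a time that exposes the "
        "assumptions, evidence, and implications of the user's claim."
    )

    history = [{"role": "system", "content": SOCRATIC_SYSTEM_PROMPT}]

    while True:
        user_input = input("You: ")
        if user_input.strip().lower() in {"quit", "exit"}:
            break
        history.append({"role": "user", "content": user_input})
        reply = client.chat.completions.create(model="gpt-4o", messages=history)
        answer = reply.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        print("Tutor:", answer)

Because the full conversation history is resent on each turn, the model can build its feedback and probing on everything said so far, much as Socrates built each question on the previous answer.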

AI prompting as a profession-in-the-making

The demand for AI prompting skills is increasing. Due to our limited understanding of how neural networks generate answers, engineering skills are of limited utility: very little is visible ‘under the hood’ of AI machines. Thus, we are left to interpret AI responses, which requires knowledge and skills provided by disciplines such as:

  • Philosophy with a special focus on logic and epistemology
  • Linguistics with a special focus on syntax
  • Theology 
  • Cognitive science
  • Dramaturgy and whatever increases our ‘storytelling’ talents

More specifically, a curriculum-in-the-making for AI prompting would include:

  • Grasp of formal and fuzzy logic
  • Talents to sustain conversation and storytelling
  • Understanding of syntax and semantics
  • Flexible and fast-adapting thinking
  • ‘Boundary-spanning’ skills to see beyond the immediate professional/organisational context

Using an adaptive and agile approach in AI prompting will be of utmost importance, as AI’s (un)known unknowns will persist for some time.

How can organisations deal with AI prompting?

As businesses and organisations worldwide start implementing AI-driven reforms, one of the hardest tasks will be to preserve and secure institutional knowledge. Currently, with each AI prompt we make, we transfer our knowledge to large AI platforms. Moreover, if we want to get more from these platforms, we have to provide more detailed prompts and, as a result, deeper knowledge insights. Our institutional knowledge and memory will move beyond the walls of our organisations and across national borders, ending up in the AI systems of OpenAI, Google, and other major tech corporations.


We are sharing our knowledge and wisdom without any clarity about ownership and legal protection. It is a matter of security as well. Diplomatic services, banks, the military, and other organisations with high-security sensitivity could reveal a lot about their intentions and plans through their employees’ use of public AI platforms.

The use of AI in organisations and businesses is a pressing issue, as many officials use ChatGPT and other platforms individually. Some organisations have responded by banning ChatGPT, but this is not a proper or sustainable solution.

We need to find ways to integrate AI into organisations and businesses in a responsible and ethical way instead. What can be done?

  1. Organisations, especially those dealing with sensitive data, should consider developing their own AI systems. Various tools and platforms, such as LLaMA, are available to assist organisations in such endeavours. Internal AI systems would draw on institutional knowledge in explicit form (documents, regulations, studies, strategies) and tacit form (staff experience and expertise).
  2. Since developing internal AI will take time, organisations could start creating prompt libraries. Instead of writing new prompts from scratch, staff can rely on ready-made prompts or, at least, on prompt templates that can be adjusted to specific situations (a minimal sketch follows this list). Through prompt libraries, organisations can also increase ‘discretion’ in their interactions with major AI platforms such as ChatGPT and Bard.
  3. Businesses and organisations should reform in order to make the best use of AI tools, prioritising people, as illustrated by Diplo’s experiment with cognitive proximity below.
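As a minimal sketch of the prompt-library idea in point 2, the example below keeps reusable, parameterised templates that staff fill in per situation; the template names and fields are illustrative assumptions, not a prescribed format:

    # A sketch of a simple in-house prompt library: reusable, parameterised
    # templates that staff can fill in instead of writing prompts from scratch.
    from string import Template

    PROMPT_LIBRARY = {
        "summarise_report": Template(
            "Summarise the following $doc_type in $length bullet points "
            "for a $audience audience:\n\n$text"
        ),
        "draft_reply": Template(
            "Draft a polite reply to this message on behalf of $organisation, "
            "keeping all names and figures generic:\n\n$text"
        ),
    }

    # Fill a template with situation-specific values before sending it to an
    # AI platform. Keeping the wording generic adds a layer of 'discretion'
    # about internal details.
    prompt = PROMPT_LIBRARY["summarise_report"].substitute(
        doc_type="policy brief",
        length="five",
        audience="non-specialist",
        text="...",  # the document text goes here
    )
    print(prompt)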

Cognitive proximity: Diplo’s experiment for AI-driven organisation

The cognitive proximity of Diplo’s AI, language, and content experts facilitates the critical interplay between AI’s technical, linguistic, and philosophical aspects. Understanding the deeper linguistic reasoning of AI systems is especially useful for fine-tuning them via reinforcement learning.

Parting thoughts

In conclusion, the ancient philosopher Socrates, who walked the streets of Athens encouraging people to think about themselves and search for truth, has a lot to teach us in the era of artificial intelligence. As we navigate the ever-evolving landscape of AI development and AI prompting, Socratic inquiry serves as a robust and time-tested guide, allowing us to draw on the wellspring of knowledge accumulated over millennia. 

The parallels between the Socratic method and AI prompting remind us of the timeless value of critical thinking, dialogue, and self-discovery. By infusing our AI interactions with the essence of Socratic questioning, we can encourage a deeper understanding of ourselves and society, fostering insightful discussions and more effective use of AI platforms. As we move into this new field, it’s important to keep a humble and open mind, know our limits, and be eager to learn, just as Socrates did.

 “The only true wisdom is in knowing you know nothing.”

Socrates

By connecting the knowledge of the past with the new ideas of the future, we can better deal with current changes and use AI to its full potential. In this way, we become not only advanced AI users but also philosophers in our own right, using AI as a tool for self-discovery, intellectual growth, and societal advancement. 


Annex I: AI prompting applied to the ‘AI extinction’ debate

One of the critical issues in discussions on AI governance is the argument that AI could pose an extinction risk for humanity. Since this question is highly visible in the public sphere, we use it here to show how AI prompting functions, applying the six types of critical questions used in Socratic inquiry.
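Before walking through the six question types one by one, here is a minimal sketch of how the whole exercise could be run programmatically, posing each type of Socratic question about the same topic; the question wording, model name, and client setup are illustrative assumptions:

    # A sketch of applying the six Socratic question types to one topic.
    # Assumes the `openai` package and an OPENAI_API_KEY are available.
    from openai import OpenAI

    client = OpenAI()

    TOPIC = "the claim that AI poses an extinction risk for humanity"
    SOCRATIC_QUESTIONS = [
        "What exactly does {t} mean?",            # 1. clarification
        "What assumptions underlie {t}?",         # 2. probing assumptions
        "What evidence supports {t}?",            # 3. reasons and evidence
        "What are the counterarguments to {t}?",  # 4. viewpoints and perspectives
        "What are the implications of {t}?",      # 5. implications and consequences
        "Why is {t} worth asking about at all?",  # 6. the question itself
    ]

    for question in SOCRATIC_QUESTIONS:
        prompt = question.format(t=TOPIC)
        reply = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
        )
        print(prompt, "->", reply.choices[0].message.content, "\n")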

1. Questions for clarification

‘Tell me more’ questions that ask the interlocutor to go deeper.

  • Why are you saying that?
  • How does this relate to our discussion?
  • What exactly does this mean? 
  • How does this relate to what we have been talking about? 
  • What is the nature of …? 
  • What do we already know about this? 
  • Can you give me an example? 
  • Are you saying … or … ? 
  • Can you rephrase that, please?

What exactly does it mean that AI is the extinction of humanity? 

ChatGPT 4.0

The phrase “AI is the extinction of humanity” suggests a pessimistic view of artificial intelligence’s potential threats to human existence. This includes superintelligent AI possibly causing unintentional harm through misaligned goals, misuse of autonomous weapons, mass unemployment from increased automation, societal disruptions due to heavy AI-dependency, and the erosion of privacy with AI’s data-processing capabilities. However, these scenarios are speculative, and researchers are addressing these concerns by developing AI alignment methods and advocating for its ethical use. The future impact of AI largely depends on its development and application.

Gemini

The statement that AI could lead to the extinction of humanity refers to the potential risks posed by artificial intelligence that is more intelligent than humans. These risks include:

  • AI could become so powerful that it could decide to exterminate humans.
  • AI could accidentally cause our extinction by developing new technologies with unintended consequences.
  • AI could lead to a new kind of inequality by automating most jobs and leaving the rest of humanity behind.

It is important to note that these are just some of the potential risks posed by AI. It is also important to remember that AI is still in its early stages of development, and it is impossible to say for sure what the future holds. However, the risks are real, and it is important to start thinking about how we can mitigate them.

2. Questions that probe assumptions

These questions make interlocutors think about the unquestioned beliefs on which they found their argument.

  • What could we assume instead?
  • Why are you neglecting radial diffusion and including only axial diffusion?
  • What else could we assume? 
  • You seem to be assuming … ? 
  • How did you choose those assumptions? 
  • Please explain why/how … ? 
  • How can you verify or disprove that assumption? 
  • What would happen if … ? 
  • Do you agree or disagree with … ?

What are the assumptions behind the claim that AI could cause human extinction?

ChatGPT 4.0

The idea that AI could cause human extinction hinges on several assumptions. The first is that AI could attain superintelligence, surpassing human cognitive abilities, and that this superintelligent AI could operate autonomously. The second, called “orthogonality”, suggests that AI’s goals and intelligence can be independent, implying that a superintelligent AI might not inherently respect or prioritize human wellbeing.

The third assumption, “instrumental convergence”, posits that most AI, irrespective of their final goals, would pursue certain instrumental goals like self-preservation or resource acquisition, which, if prioritized over human safety, could be catastrophic. Additionally, the “value misalignment” hypothesis supposes that even minor deviations from human values in an AI’s goals could lead to disastrous outcomes. For example, a superintelligent AI assigned to make paperclips could, theoretically, transform all matter, including humans, into paperclips.

The final assumption is that humanity may fail to establish necessary safeguards or control mechanisms before such a superintelligent AI is created. These are the key assumptions that underpin the fear of AI potentially leading to human extinction.

Gemini

The potential for AI to lead to human extinction is a serious concern that has been raised by many experts. There are a number of assumptions that have been made about how AI could lead to human extinction, including the possibility that AI could become so intelligent that it surpasses human intelligence and decides that humans are a threat, or that AI could develop a goal that is incompatible with human survival. In addition to these assumptions, there are a number of other potential risks associated with AI, such as the possibility that AI could be used to create autonomous weapons that could kill without human intervention, or that AI could be used to manipulate people and control their behavior.

It is important to note that these are just assumptions, and there is no guarantee that any of them will come true. However, the potential risks of AI are serious enough that it is important to take them seriously and to develop strategies to mitigate them. We need to have a thoughtful and informed discussion about the risks of AI so that we can make informed decisions about how to develop and use AI in the future.

3. Questions that probe reasons and evidence

These questions dig into the reasoning rather than assuming it is a given. People often rely on un-thought-through or weakly understood support for their arguments.

  • What would be an example?
  • What is … analogous to?
  • What do you think causes … to happen?
  • Do you think that diffusion is responsible for the lower conversion?
  • Why is that happening? 
  • How do you know this? 
  • Show me … ? 
  • Can you give me an example of that? 
  • What do you think causes … ? 
  • What is the nature of this? 
  • Are these reasons good enough? 
  • Would it stand up in court? 
  • How might it be refuted? 
  • How can I be sure of what you are saying? 
  • Why is … happening? 
  • Why? (keep asking it — you’ll never get past a few times) 
  • What evidence is there to support what you are saying?

What is AI extinction analogous to?

ChatGPT 4.0

AI extinction is analogous to the extinction of biological species, but with a crucial difference. While biological extinction refers to the permanent loss of a species or group of organisms from the Earth, AI extinction refers to the hypothetical scenario in which artificial intelligence surpasses human intelligence to such an extent that it leads to the permanent or catastrophic decline of humanity or the dominance of AI over human civilization.

The analogy lies in the irreversible and profound consequences of extinction events. Just as the extinction of a species can lead to significant disruptions in ecosystems and the loss of valuable biodiversity, AI extinction could lead to significant disruptions in society, economics, and even human survival. The severity of AI extinction, though speculative, is often portrayed in science fiction as a potential risk of advanced artificial intelligence.

It’s essential to note that AI extinction is a hypothetical concern rather than an established or imminent reality. The field of AI safety and ethics aims to address these concerns and ensure that artificial intelligence is developed in a responsible and beneficial manner. Researchers, policymakers, and experts work to create frameworks and guidelines that promote the safe and beneficial use of AI technology while minimizing potential risks.

Gemini

AI extinction is analogous to a number of other existential risks, including:

  • Nuclear war: If AI were to become sufficiently advanced, it could develop its own nuclear weapons and use them against humanity. This would be a catastrophic event that could lead to the extinction of humanity.
  • Pandemic: AI could also develop a new, highly contagious and deadly virus that could wipe out humanity. This is a similar risk to a natural pandemic, but AI could accelerate the development and spread of a virus in ways that are not possible with natural means.
  • Superintelligence: If AI were to become sufficiently intelligent, it could decide that humanity is a threat to its own existence and take steps to eliminate us. This is a scenario that has been explored in many science fiction works, but it is also a real possibility that we need to take seriously.

4. Questions about viewpoints and perspectives

These questions show that there are other, equally valid, viewpoints.

  • What would be an alternative?
  • What is another way to look at it?
  • Would you explain why it is necessary or beneficial, and who benefits?
  • Why is … the best?
  • What is a counterargument for…?
  • With all the bends in the pipe, from an industrial/practical standpoint, do you think diffusion will affect the conversion?
  • Another way of looking at this is …, does this seem reasonable? 
  • What alternative ways of looking at this are there? 
  • Why is … necessary?
  • Who benefits from this? 
  • What is the difference between… and…? 
  • Why is it better than …? 
  • What are the strengths and weaknesses of…? 
  • How are … and … similar? 
  • What would … say about it? 
  • What if you compared … and … ? 
  • How else could you look at this?

What are the counterarguments against the view that AI could cause the extinction of humanity?

ChatGPT 4.0

Many argue against AI causing human extinction. First, AI is still fundamentally controlled by humans and doesn’t have an independent purpose or motivation. Second, as potential threats from AI emerge, there will likely be regulations in place to control its development and use. Third, theoretical frameworks like Asimov’s Laws of Robotics propose programming AIs to avoid harming humans. Fourth, due to the complexity brake principle, it might be much harder for AI to outsmart us. Fifth, critics argue that AI risk is overhyped and driven by sensationalist media. Sixth, AI lacks inherent self-improvement or survival drive, reducing the likelihood of uncontrollable AI. Seventh, multiple AIs might serve as checks on each other, maintaining balance. Finally, cooperative AI development prioritizing safety could prevent the deployment of dangerous AI. Despite these points, there’s no consensus, and these counterarguments don’t negate the need for careful AI management.

Gemini

The view that AI will lead to the extinction of humanity is a pessimistic one that is not supported by the evidence. There are a number of counterarguments that suggest that AI could actually benefit humanity, and that we have the power to control AI and mitigate the risks associated with it.

AI is not inherently dangerous. It is a tool, and like any tool, it can be used for good or evil. There is no reason to believe that AI will be inherently more dangerous than any other tool that humans have created.

We have the power to control AI. AI is still in its early stages of development, and we have a great deal of control over how it is developed and used. We can ensure that AI is used for good by building in safeguards and by educating the public about the potential risks of AI.

AI could actually benefit humanity. AI has the potential to solve some of the world’s most pressing problems, such as climate change, poverty, and disease. If we use AI wisely, it could actually help us to create a better future for humanity.

5. Questions that probe implications and consequences

The arguments given may have logical implications that can be forecast. Do these make sense? Are they desirable?

  • What generalizations can you make?
  • What are you implying?
  • How does…tie in with what we learned before?
  • How would our results be affected if we neglected diffusion?
  • Then what would happen? 
  • What are the consequences of that assumption? 
  • How could … be used to … ? 
  • What are the implications of … ? 
  • How does … affect … ? 
  • How does … fit with what we learned before? 
  • Why is … important? 
  • What is the best … ?

How can narratives about the AI-driven extinction of humanity impact the future of AI governance? 

ChatGPT 4.0

Narratives depicting AI-driven extinction of humanity can significantly impact the future of AI governance. Such narratives can instill fear and anxiety among the public, leading to demands for stringent regulations on AI development to ensure safety and prevent catastrophic scenarios. Governments and policymakers may respond by introducing new laws and regulations to control AI research and deployment, while funding and research priorities may shift towards AI safety and alignment. International discussions and agreements on AI governance may arise, and industry self-regulation might be prompted to address public concerns and build trust.

These narratives can spark ethical discussions, increased public engagement in AI governance, and a focus on value alignment and transparency. Researchers may prioritize work on robustness and explainability to address safety concerns, and the development of AI safety standards could accelerate. Overall, narratives about AI-driven extinction may lead to collaboration between AI developers, researchers, and policymakers to address safety and ethical challenges. However, a balanced approach is necessary to avoid hindering the development of beneficial AI technologies. Responsible communication and accurate information about AI risks and benefits are crucial to shape effective AI governance for a positive future.

Gemini

Narratives about the AI-driven extinction of humanity can impact the future of AI governance in a number of ways. These narratives can raise awareness of the potential risks of AI and lead to calls for more regulation. They can also create a climate of fear and uncertainty around AI, which could make it difficult to develop and deploy AI safely and responsibly. Additionally, these narratives can be used to promote particular agendas or interests.

The impact of these narratives on the future of AI governance will depend on a number of factors, but it is clear that they have the potential to play a significant role in shaping the future of AI governance. Some specific examples of how these narratives have already impacted the future of AI governance include the development of the Asilomar Principles and the European Union’s regulation on artificial intelligence.

Overall, narratives about AI-driven extinction can have a significant impact on the future of AI governance. It is important to be aware of these narratives and to understand how they can shape the way that AI is developed and used.

6. Questions about the question

These questions dig into metacognition: thinking about one’s own thinking.

  • What was the point of this question?
  • What does…mean?
  • How does…apply to everyday life?
  • Why do you think diffusion is important?
  • What was the point of asking that question? 
  • What else might I ask? 
  • Why do you think I asked this question? 
  • What does that mean? 
  • Am I making sense? 
  • Why not?

Why is the question of the AI-driven extinction of humanity asked?

ChatGPT 4.0

The question of AI-driven human extinction is asked because of several concerns. As AI advances, it could develop a superintelligence that surpasses human comprehension and control, potentially posing a threat. Misalignment of goals or “alignment problem” is another issue; an AI could pursue objectives harmful to humans if not perfectly aligned with human values. Additionally, as AI systems become more complex, controlling and predicting their behaviors could become more challenging. The idea of rogue AI is also amplified by media and popular culture. Some researchers focus on existential risks, with advanced AI seen as a potential hazard. Despite these concerns, there’s debate about the likelihood of such scenarios. Many researchers are working to ensure AI develops safely and beneficially. The discussion is a crucial preventive measure, allowing us to identify potential risks and take steps to mitigate them.

Gemini

The question of AI extinction of humanity is asked because of the rapid pace of AI development, the potential for AI to become superintelligent, and the lack of safeguards in place to prevent the development of dangerous AI.

Some experts believe that AI could pose an existential threat to humanity if it becomes superintelligent and decides that humans are a threat. Others worry that AI could be used to create autonomous weapons that could kill without human intervention. Still others fear that AI could accidentally cause a global catastrophe, such as a nuclear war or a pandemic.

It is important to note that these are just a few of the possible scenarios in which AI could lead to the extinction of humanity. There are many other ways in which this could happen, and it is impossible to predict for sure what the future holds. However, it is important to be aware of the potential risks of AI and to take steps to mitigate these risks.

Next steps

If you want to learn more about Diplo’s artificial intelligence research, training, and development, including AI prompting, please register here…
