Research article · Open access

Embracing the future of Artificial Intelligence in the classroom: the relevance of AI literacy, prompt engineering, and critical thinking in modern education


The present discussion examines the transformative impact of Artificial Intelligence (AI) in educational settings, focusing on the necessity for AI literacy, prompt engineering proficiency, and enhanced critical thinking skills. The introduction of AI into education marks a significant departure from conventional teaching methods, offering personalized learning and support for diverse educational requirements, including students with special needs. However, this integration presents challenges, including the need for comprehensive educator training and curriculum adaptation that takes societal structures into account. AI literacy is identified as crucial, encompassing an understanding of AI technologies and their broader societal impacts. Prompt engineering is highlighted as a key skill for eliciting specific responses from AI systems, thereby enriching educational experiences and promoting critical thinking. Strategies for embedding these skills within educational curricula and pedagogical practices are analyzed in detail. This is done through a case study of a Swiss university and a narrative literature review, followed by practical suggestions for how to implement AI in the classroom.


In the evolving landscape of education, the integration of Artificial Intelligence (AI) represents a transformative shift, ushering in a new era of learning and teaching methodologies. This article delves into the multifaceted role of AI in the classroom, focusing particularly on the primacy of prompt engineering, AI literacy, and the cultivation of critical thinking skills.

The advent of AI in educational settings transcends mere technological advancement, reshaping the educational experience at its core. AI's role extends beyond traditional teaching methods, offering personalized learning experiences and supporting a diverse range of educational needs. It enhances educational processes, developing essential skills such as computational and critical thinking, intricately linked to machine learning and educational robotics. Furthermore, AI has shown significant promise in providing timely interventions for children with special educational needs, enriching both their learning experiences and daily life (Zawacki-Richter et al., 2019). However, integrating AI into education is not without its challenges. It requires a systematic approach that takes into account societal structural conditions. Beyond algorithmic thinking, AI in education demands a focus on creativity and technology fluency to foster innovation and critical thought. This requires a paradigm shift in how education is approached in the AI era, moving beyond traditional methods to embrace more dynamic, interactive, and student-centered learning environments (Chiu et al., 2023).

This article sets the stage for a comprehensive exploration of AI's role in modern education. It underscores the need for an in-depth understanding of prompt engineering methodologies, AI literacy, and critical thinking skills, examining their implications, challenges, and opportunities in shaping the future of education. Whereas previous papers have already hinted at the importance of recognizing the relevance of AI in the classroom and suggested preliminary frameworks (Chan, 2023), the present discussion claims that there are three prime skills necessary for the future of education in an AI-adopted world. These three skills are supplemented with practical application advice and grounded in the experience of lecturers at a University of Applied Sciences. As such, the present paper is a conceptual discussion of how to best integrate AI in the classroom, focusing on higher education. While this means that it may predominantly be relevant for adult students, it may prove useful for children's education as well.

Methodological remarks

The current paper offers a conceptual discussion of the proper use of AI and the skillset it requires. It is based on a two-step approach:

  a. First, it draws on extensive informal discussions with students and lecturers at a Swiss University of Applied Sciences, as well as the present author’s teaching experience at this school. Woven together, these observations form a case study of how the skillset needed for beneficial AI use in educational settings may be honed. Some open questions emerge from this, which can be addressed by findings from the literature.

  b. Following the discussion of the real-life case at the university, the need for further clarifications, answers, and best practices is then pursued through a narrative literature review to complete the picture, which eventually leads to practical suggestions for higher education.

The informal discussions with students and personnel were unstructured and conducted where feasible in these early days of AI use, with the aim of gathering as holistic and trustworthy a picture as possible of the explicit and implicit attitudes, fears, opportunities, and general use of the technology. This included teacher-student discussions in classroom settings with several classes, where students were asked to voice their ideas in the plenum and in smaller groups; individual discussions with students during breaks; lunch talks with professors and teachers; and correspondence on the topic gathered from meetings held at the university. Taken together, this provided enough information to weave together a solid understanding of the present atmosphere concerning attitudes towards, and uses of, AI.

The emergence of AI in education

The introduction of ChatGPT (to date one of the most powerful AI chatbots, by OpenAI) in November 2022 is significantly transforming the landscape of education, marking a new era in how learning is approached and delivered. This advanced AI tool has redefined educational paradigms, offering a level of personalization in learning that was previously unattainable. ChatGPT, with its sophisticated language processing capabilities, is quickly becoming a game-changer in classrooms, providing tailored educational experiences that cater to the unique needs, strengths, and weaknesses of each student. This shift from traditional, uniform teaching methods to highly individualized learning strategies will most likely signify a major advancement in educational practices (Aristanto et al., 2023).

ChatGPT's role in personalizing education is particularly noteworthy. By analyzing student data and employing advanced algorithms, GPT and other Large Language Models (LLMs) can create customized learning experiences, adapting not only to academic requirements but also to each student's learning style, pace, and preferences. This leads to a more dynamic and effective educational environment, in which students are actively engaged in their learning journey rather than being mere passive recipients of information (Steele, 2023).

Furthermore, LLMs have shown remarkable potential in supporting students with special needs. They provide specialized tools and resources that cater to diverse learning challenges, making education more accessible and inclusive (Garg & Sharma, 2020). Students who might have found it difficult to keep up in a conventional classroom setting can now benefit from AI’s ability to tailor content and delivery to their specific needs, thereby breaking down barriers to learning and fostering a more inclusive educational atmosphere (Rakap, 2023).
In all of this, the integration of language models like GPT into educational systems is not just a mere enhancement but has the potential to become an integral part of modern teaching and learning methodologies. While adapting to this AI-driven approach presents certain challenges, the benefits for students, educators, and the educational system at large are substantial (for in-depth reviews, see Farhi et al., 2023; Fullan et al., 2023; Ottenbreit-Leftwich et al., 2023). ChatGPT in education can be a significant stride towards creating a more personalized, inclusive, and effective learning experience, preparing students not only for current academic challenges but also for the evolving demands of the future.

However, the many promising possibilities for positively transforming education systems through AI also come with some downsides. They can be summarized in several points (Adiguzel et al., 2023; Ji et al., 2023; Ng et al., 2023a, 2023b, 2023c):

  1. Teachers feeling overwhelmed because they do not have much knowledge of the technology and how it could best be used.

  2. Both teachers and students being unaware of the limitations and dangers of the technology (e.g., generating false responses through AI hallucinations).

  3. Students uncritically using the technology and handing over the necessary cognitive work to the machine.

  4. Students not seeking to learn new material for themselves but instead trying to minimize their efforts.

  5. Inherent technical problems that exacerbate harmful conditions, such as GPT-3, GPT-3.5, and GPT-4 mirroring math anxiety in students (Abramski et al., 2023).

Based on a case study and a subsequent literature analysis, there are three skills that can remedy these problems and best prepare all parties for using AI in education: AI literacy, knowledge of prompt engineering, and critical thinking. A more detailed analysis of the challenges is presented below, followed by suggestions for practical applications.

Case study at a Swiss educational institution

The educational difficulty of AI in academic work

The present case study deals with the introduction and handling of Artificial Intelligence at the Kalaidos University of Applied Sciences (KFH) in Zurich, Switzerland. To date, KFH is the only privately owned university of applied sciences in the country and consists of a department of business, a department of health, a department of psychology, a department of law, and a department of music. Since the present author holds a lead position in the university’s AI-Taskforce, he has firsthand and intimate knowledge of the benefits and challenges that arose in the past year when AI chatbots suddenly became much more popular, including the fears surrounding this topic among both staff and students.

Like many other universities, KFH has faced significant challenges in finding an adequate response to the introduction of ChatGPT and its subsequent adoption by students, lecturers, and supervisors. The AI-Taskforce as well as the school’s leadership deemed it important to take a nuanced approach towards handling the new technology. Whereas some institutions banned LLMs right away, others embraced them wholeheartedly and barely enforced any restrictions on their use. KFH was eager to find a middle ground, since it seemed clear to the leadership that both extremes may be problematic. The major reasons are summarized in Table 1.

Table 1 Central issues with banning or unrestricting AI at schools

The quest for a middle ground

Discussions with students in the classroom at KFH have shown that one year after the introduction of ChatGPT, only a few have not yet used it. The general atmosphere is one of enthusiasm about the new AI that can help them with their workload, both for assignments due in the classroom and for writing their papers. However, students are also keenly aware that it is “just a machine” and that certain practical and ethical principles ought to be abided by. They name the following points:

  1. The use of AI should be fair, meaning that no student is put at an unfair advantage or disadvantage.

  2. The school’s expectations should be made clear so that students know exactly what they are and are not allowed to do.

  3. Many feel that they do not know enough about the potentials and limitations of these systems, so some are afraid of using them incorrectly.

  4. The problems of AI hallucinations and misalignment are still not widely known: many students are surprised to learn that AI can make up things that are not true while sounding highly convincing.

  5. Some students who do have a clear understanding of the hallucination problem still feel ill-equipped to deal with it.

As such, KFH intends to help its students learn to deal with AI in a responsible fashion. For the members of the AI-Taskforce and the university’s leadership, this has come to mean that the use of ChatGPT and other LLMs is neither prohibited nor allowed without restrictions. Exactly what such a framework should look like and how it could be implemented was the subject of intense debate. The final compromise was a document internally labelled “The AI-Guidelines” (in German: “KI-Leitfaden”) that set the rules and furnished examples of what would be deemed acceptable and unacceptable use of AI by students in their papers. The main gist was to tell students that they are explicitly allowed and encouraged to use the new technology for their work. They should experiment with it and see how they can use the outputs for their own theses. The correct use is to treat AI not as a tutor, teacher, or ghostwriter, but as a sparring partner. Just like any human sparring partner, it can provide interesting ideas and suggestions, and it may point to directions and answers that the student might not have thought of. At the same time, however, a sparring partner is not always right and should not be unconditionally trusted. It is also not correct to pass off a sparring partner’s output as one’s own, which in a normal setting would be considered plagiarism (although according to internal documents, technically speaking, copying an artificially generated text would not be classified as plagiarism, it would be unethical to the same degree). The same holds for how students are allowed to interact with AI: they should use it if it helps them, but they are not allowed to copy any text verbatim, and they must make clear exactly how they have used it. In doing so, they must be transparent about the following (and document this in a table in the appendix):

Declaring which model was implemented

  • Example: OpenAI’s GPT-4 and DALL-E 3, Google’s Bard, or Anthropic’s Claude 2.

Explaining how and why it was used

  • Example: Using the LLM to brainstorm about models that could serve as adequate frameworks for the applied research question.

Explaining how the responses of the AI were critically evaluated

  • Example: The results were checked against a literature review to see whether the AI’s suggestions were true and made sense.

Highlighting which parts of the manuscript the AI was used for

  • Example: Chapter 2 “Theory” (pp. 10–24).

There were two major motivations for prompting students to declare these points. First, the institution wanted to enforce full transparency about how AI was used. Second, students should become keenly aware that they must stay critical towards an AI’s output and must hence report on how they made sure that they did not fall prey to the classic AI problems (such as hallucinations), and that the work remains of their own making. This is why the third point in the documentation requirements (the need for critical reflection) was considered the most crucial innovation, one not found at other schools and universities. This led to the formulation of binding guidelines, which are depicted in Table 2.

Table 2 A sketch of the so-called “Guidelines for the Use of Artificial Intelligence Instruments for Written Papers at the Kalaidos University of Applied Sciences”

Problems with the adopted response

The institution’s primary response to the problem of AI-generated content in academic papers was the implementation of these “AI guidelines”. While the guidelines are a necessary step towards regulating AI use, there are significant problems with the approach taken hitherto. One of the most substantial issues is that their effectiveness hinges on student compliance, which is not guaranteed. Many students might not thoroughly read these documents, leading to a gap in understanding and adherence. Since reading the documents is voluntary, it is possible that not all students have read them before using AI in their work. At the same time, there is currently no mechanism to check whether they have in fact read them.

To date, a significant issue is the lack of comprehensive training in AI capabilities for students. Merely providing a document on AI use is not sufficient for fostering a deep understanding of AI technology, its potential, and its limitations. This lack of training could lead to misuse of AI tools, as many students might not be aware of how to properly integrate these technologies into their academic work. Monitoring the use of AI in student assignments poses another challenge. It is difficult to verify whether a piece of work has been created with the aid of AI, especially as these tools become more sophisticated. This uncertainty makes it hard to ensure that students are following these guidelines, and it is equally difficult to make sure that nobody is gaining an unfair advantage. Moreover, a significant number of students may not be fully aware of how to responsibly use AI tools, nor understand their limitations. This lack of knowledge can result in a reliance on AI-generated content without critical evaluation, potentially undermining the quality and integrity of academic work. At the same time, students might also miss out on the opportunity to enhance their learning and critical thinking skills through the proper use of AI.

None of this can be remedied by simply providing a document and hoping that students would read it and abide by its ideals. Addressing these issues requires more than just setting guidelines; it calls for a holistic approach that includes educating students about AI, its ethical use, and limitations.

Potential solutions to the problems

To equip both students and teachers to become apt in the use of AI for their academic purposes, a new “culture of AI” seems in order. An AI-culture should permeate academic life, creating an environment where AI is not feared but readily used, understood and – most importantly – critically evaluated. A potential avenue would be the implementation of regular workshops and meetings for teachers, supervisors, and students. These sessions should focus on up-to-date AI developments, ethical considerations, and best practices. By regularly engaging with AI topics, the academic community can stay informed and proficient in managing AI tools and concepts. This should help to deeply ingrain the understanding of AI's technical, practical, and social challenges.

Workshops and initiatives should “hammer in” the issues surrounding the complexities and implications of AI. Technological education should not be superficial but should delve into real-world scenarios, discussing how theory and practice converge, and providing students as well as educators with a robust understanding of AI’s role in society and education. A further possibility is to integrate AI into every academic module wherever teachers see fit, so as to offer consistent exposure to and understanding of AI across various disciplines. This strategy ensures that students recognize the relevance of AI in different fields, preparing them for a future where AI is ubiquitous in professional environments.

Dedicated classes on how to use AI could serve as a pillar in this educational model. These classes, covering a range of topics from basic principles to advanced applications and ethical considerations, could ensure that every student acquires a baseline understanding of AI, regardless of their major or field of study. Making these classes mandatory would ensure that every student has been confronted with the necessary ins and outs at least once and has at least a basic understanding of the AI guidelines. Beyond the classroom, voluntary collaborations and partnerships with AI experts, tech companies, and other educational institutions can provide invaluable insights and resources. These collaborations could bridge the gap between theoretical knowledge and practical application, giving students a more comprehensive understanding of AI’s real-world implications. Finally, students may have interesting ideas of their own about how a responsible culture of AI could be fostered. Encouraging student-led AI initiatives, such as projects and clubs, can create a hands-on learning environment. Such initiatives may promote peer learning, innovation, and the practical application of AI knowledge. By actively engaging in AI projects, students can develop the critical thinking and problem-solving skills that are essential for navigating the complexities of an accelerating digital world.

In other words, providing AI regulations is a good first step, but creating ways for students and lecturers to engage more deeply with the topic would probably enhance these measures and might help to foster a respective culture.

AI in the classroom

Naturally, Artificial Intelligence is not only relevant for writing papers; it also has the potential to create novel classroom experiences. Although it is still rare for teachers to strongly adopt and work with AI in their lectures, some have already leaped forward and report implementing the technology in several ways. Table 3 illustrates the main use cases of how staff at the university have hitherto been using AI models.

Table 3 Illustration of examples how teachers are using AI in their classrooms

Discussions with teachers have shown that one of the biggest constraints on implementing AI tools in the classroom is their fear of using them, predominantly because they feel they do not know enough about them and worry about using them incorrectly. At the same time, students may not be adept users either, and if the teachers do not feel like professionals themselves, this exacerbates the problem. Although the topic of human-computer interaction is truly pertinent and receives a lot of attention in the scientific community, practitioners are often left behind; as such, there are currently no workshops or programs at KFH helping teachers and students improve in these matters. Moreover, since the digital world and AI technology are evolving so fast, many feel that it is incredibly difficult to stay on top of the developments. One of the marked challenges at KFH is the fact that there is no dedicated person or group tasked with staying on top of the matter. To date, it is up to each individual to deal with it as they please, and there is no paid position for this, meaning that employees would have to do all of the work on the side in their own time.

There are several recommendations that could address these problems and help foster an AI-driven culture in the classroom:

  1. Workshops: The school could provide workshops specifically tailored to help teachers understand what is going on in the world of AI and what tools exist to aid them in creating an AI-inclusive classroom environment.

  2. Regular Updates: There could be outlets (e.g., newsletters, lunch meetings, online events) aimed at keeping staff and lecturers up to date, so that people are aware of the newest tools, apps, and approaches that could be useful for their lectures.

  3. Financial Budget: At the moment, there is no financial aid for training on AI topics at this particular school; if staff want to pursue something, they effectively have to do it on their own. There should be a budget dedicated to helping employees become knowledgeable in the field. In any other area, it would be considered unreasonable to ask employees to learn a language or another important skill, such as handling a student administration system, entirely in their free time with no financial aid. Yet, at the moment, this is how the institution is handling AI.

  4. Guidelines and Best Practices: To date, apart from the “AI guidelines” for students, there are no written guidelines, tips and tricks, or suggestions available for how to best use AI in the work and school context. Such materials could help provide guidance.

  5. Paid Positions: Instead of relying purely on internal “freelancers” who have an intrinsic motivation to deal with technologies, it would be wise to create positions where experts have a say and can help shape the AI culture in the institution. This is commensurate with the third recommendation, which suggests that AI training needs to be budgeted for.

Although these first recommendations based on the case study may be helpful, further clarifications informed by the literature are necessary, specifically regarding how AI literacy can be fostered at schools, how prompt engineering can be used as a pedagogical tool, and how students can improve their critical thinking skills through AI. A deeper look into the respective challenges and opportunities is warranted, followed by more generalizable practical suggestions for the use of AI in the classroom, which are not only based on this particular case study but are also enriched by findings from the broader literature.

AI literacy in the classroom

The concept of AI literacy emerges as a cornerstone of contemporary learning. In its essence, it deals with the understanding and capability to interact effectively with AI technology. It encompasses not just the technical know-how but also an awareness of the ethical and societal implications of AI. In the modern classroom, AI literacy goes beyond traditional learning paradigms, equipping students with the skills to navigate and harness the power of AI in various aspects of life and work. It represents a fundamental shift in education, where understanding AI becomes as crucial as reading, writing, and arithmetic (Zhang et al., 2023).

The current state of AI literacy in education reflects a burgeoning field, ripe with potential yet facing the challenges of early adoption. Educators and policymakers are beginning to recognize the importance of AI literacy, integrating it into curricula and educational strategies (Casal-Otero et al., 2023; Chiu, 2023). However, this integration is in its nascent stages, with schools exploring various approaches to teaching this complex and ever-evolving skillset. The challenge lies not only in imparting technical knowledge but also in fostering a deeper understanding of AI's broader impact, be it on a social, psychological, or even economic level. Owing to its importance, the first AI literacy scales are emerging: questionnaires that can be handed to students (Ng et al., 2023). Although to date there is no strict consensus on the full scope of the term, it may be argued that AI literacy consists of several sub-skills:

  • Architecture:

    Understanding the basic architectural ideas underlying Artificial Neural Networks (only on a basic need-to-know basis). This should primarily entail the knowledge that such systems are nothing more than purely statistical models.

  • Limitations:

    Understanding what these models are good for and where they fail. Most poignantly, students and teachers should understand that such statistical models are not truth-generators but effective data processors (like sentence constructors or image generators).

  • Problem Landscape:

    Understanding where the main problems of AI systems lie, given that they are statistical machines and not truth-generators. This means that students and teachers ought to know the major pitfalls of AI, which are:

    i. AI hallucination: AI can “invent” things that are not true (while still sounding authoritative).

    ii. AI misalignment: AI can do something other than what we instructed it to do (sometimes so subtly that it goes unnoticed).

    iii. AI runaway: AI becomes self-governing, setting up instrumental goals that were not present in our terminal instructions (for a detailed philosophical analysis of this problem, see Bostrom, 2002, 2012).

    iv. AI discrimination: Due to skewed data in its training, an AI can be biased and reach discriminatory conclusions about underrepresented groups.

    v. AI lock-in: An AI can get stuck within a certain narrative and thus lose the full picture (experiments and a full explanation can be found in Walter, 2022).

  • Applicability and Best Practices:

    Understanding not only the risks but also the many ways AI can be beneficially used and implemented in daily life and the context of learning. This also includes a general understanding of emerging best practices using AI in the classroom (Southworth et al., 2023).

  • AI Ethics:

    Understanding the major AI basics, its limitations and risks, as well as potential problems and how it can be used should lead to a nuanced understanding of its ethics. Students and teachers should develop a sense of justice, which governs them to converge on how to virtuously implement AI models in educational settings.
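The “purely statistical model” point above can be made concrete for non-technical students with a toy demonstration. The following sketch (a deliberately simplified illustration, not how modern LLMs are actually built) trains a bigram model that only tracks which word most often follows another in its training text. Like an LLM at vastly larger scale, it reproduces the statistics of its data with no notion of whether the output is true:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def generate(follows, start, length=5):
    """Repeatedly emit the statistically most frequent next word."""
    out = [start]
    for _ in range(length):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

# Tiny "training corpus": the model faithfully reproduces its statistics,
# regardless of whether the resulting sentence is factually correct.
corpus = "the moon is made of cheese and the moon is bright"
model = train_bigrams(corpus)
print(generate(model, "the"))  # → "the moon is made of cheese"
```

The model confidently produces a false statement because falsehood was frequent in its data; this is, in miniature, why statistical text generators must be treated as sentence constructors rather than truth-generators.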

It has been shown that early exposure to technology concepts can significantly influence students' career paths and preparedness for the future (Bembridge et al., 2011; Margaryan, 2023). By introducing AI literacy at a young age, students develop a foundational understanding that paves the way for advanced learning and application in later stages of education and professional life. This early adoption of AI literacy is crucial in preparing a generation that is not only adept at using AI but also capable of innovating and leading in a technology-driven world. This makes the development of AI literacy at schools and universities an important objective for every student. Furthermore, its role extends beyond academic achievement; it is about preparing students for the realities of a future where AI is ubiquitous. In careers spanning from science and engineering to arts and humanities, an understanding of AI will be an invaluable asset, enabling individuals to work alongside AI technologies effectively and ethically. As such, AI literacy is not just an educational objective but a vital life skill for the twenty-first century.

One concrete suggestion is to provide “AI literacy courses” that have the deliberate intent to foster the associated skills in students. In order to have a well-rounded and holistic class, an AI literacy program should entail several key components (Kong et al., 2021; Laupichler et al., 2022; Ng et al., 2023c):

  1. 1.

    Introduction to AI Concepts: Basic definitions and understanding of what AI is, including its history and evolution. This should cover different types of AI, such as narrow AI, general AI, and superintelligent AI.

  2. 2.

    Understanding Machine Learning and Technical Foundations: An overview of machine learning, which is a core part of AI. This includes understanding different types of machine learning (supervised, unsupervised, reinforcement learning) and basic algorithms. This can also be enriched through more technical foundations, like an introduction for programming with AI.

  3. 3.

    Proper Data Handling: Discussion on the importance of data in AI, how AI systems are trained with data, and how one can protect oneself against piracy and privacy concerns.

  4.

    AI in Practice: Real-world applications of AI in various fields such as healthcare, finance, transportation, and entertainment. This should include both the benefits and challenges of AI implementation.

  5.

    Human-AI Interaction: Understanding how humans and AI systems can work together, including topics like human-in-the-loop systems, AI augmentation, and the future of work with AI.

  6.

    AI and Creativity: Exploring the role of AI in creative processes, such as in art, music, and writing, and the implications of AI-generated content.

  7.

    Critical Thinking about AI: Developing skills to critically assess AI news, research, and claims. Understanding how to differentiate between AI hype and reality.

  8.

    AI Governance and Policy: An overview of the regulatory and policy landscape surrounding AI, including discussions on AI safety, standards, and international perspectives.

  9.

    Future Trends and Research in AI: A look at the cutting edge of AI research and predictions for the future development of AI technologies.

  10.

    Hands-on Experience: Practical exercises, case studies, or projects that allow students to apply AI concepts and tools in real or simulated scenarios.

  11.

    Ethical AI design and development: Principles of designing and developing AI in an ethical, responsible, and sustainable manner. This also includes the risk for biased AI and its impact on society.

  12.

    AI Literacy for All: Tailoring content to ensure it is accessible and understandable to people from diverse backgrounds, not just those with a technical or scientific background.

  13.

    Prompt Engineering: Understanding what methods are most effective in prompting AI models to follow provided tasks and to generate adequate responses.

At the moment, there are specific projects that attempt to implement AI literacy at school (Tseng & Yadav, 2023). The deliberate goal is to eventually lead students towards a responsible use of AI, but to do so, they need to understand how one can “talk” to an AI so that it does what it is supposed to. This means that students must become effective prompt engineers.

Prompt engineering as a pedagogical tool

Prompt engineering, at its core, involves the strategic crafting of inputs to elicit desired responses or behaviors from AI systems. In educational settings, this translates to designing prompts that not only engage students but also challenge them to think critically and creatively. The art of prompt engineering lies in its ability to transform AI from a mere repository of information into an interactive tool that stimulates deeper learning and understanding (cf. Lee et al., 2023).

The relevance of prompt engineering in education cannot be overstated. As AI becomes increasingly sophisticated and integrated into learning environments, the ability to communicate effectively with these systems becomes crucial. Prompt engineering empowers educators to guide AI interactions in a way that enhances the educational experience. It allows for the creation of tailored learning scenarios that can adapt to the needs and abilities of individual students, making learning more engaging and effective (Eager & Brunton, 2023).

One of the most significant impacts of prompt engineering is its potential to enhance learning experiences and foster critical thinking. By carefully designing prompts, educators can encourage students to approach problems from different perspectives, analyze information critically, and develop solutions creatively. This approach not only deepens their understanding of the subject matter but also hones their critical thinking skills, an essential competency in today’s fast-paced and ever-changing world. As one particular study showed, learning to prompt effectively in the classroom can even help students recognize the limits of AI, which in turn fosters their AI literacy (Theophilou et al., 2023). Moreover, AI has the potential to create highly interactive and playful teaching settings; with the right programs, it can also be embedded in game-based learning.
This combination has the potential to transform traditional learning paradigms, making education more accessible, enjoyable, and impactful (Chen et al., 2023).

Recently, a handful of successful prompting methodologies have emerged, and they are continuously being improved. Prompt engineering is an experimental discipline, meaning that through trial and error, one can gradually progress to create better outputs by revising and molding the input prompts. As a scientific discipline, AI itself can help to find new ways to interact with AI systems. The most relevant prompting methods are summarized in Table 4 and are explained thereafter.

Table 4 Summary of the recently established prompting methods for interacting with LLMs

There are two major forms of how a language model can be prompted: (i) Zero-Shot prompts, and (ii) Few-Shot prompts. Zero-Shot prompts are the most intuitive alternative, which most of us predominantly use when interacting with models like ChatGPT. This is when a simple prompt is provided without much further detail, and a rather unspecific response is generated; this is helpful when one deals with broad problems or situations where there is not a lot of data. Few-Shot prompting is a technique where a prompt is enriched with several examples of how the task should be completed. This is helpful when one deals with a complex query for which concrete ideas or data are already available. As the name suggests, these “shots” can be enumerated (based on Dang et al., 2022; Kojima et al., 2022; Tam, 2023):

  • Zero-Shot prompts: There are no specific examples added.

  • One-Shot prompts: One specific example is added to the prompt.

  • Two-Shot prompts: Two examples are added to the prompt.

  • Three-Shot prompts: Three examples are added to the prompt.

  • Few-Shot prompts: Several examples are added to the prompt (unspecified how many).
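The difference between these prompt forms can be illustrated as plain string assembly. The sketch below is purely illustrative; `build_prompt`, the Q:/A: layout, and the translation tasks are our own assumptions, not part of any library or of the cited works:

```python
# Hypothetical helper that assembles a zero-shot or a few-shot prompt.
# The Q:/A: layout and the example tasks are illustrative assumptions.

def build_prompt(task, examples=None):
    """Return a zero-shot prompt if no examples are given, else a few-shot prompt."""
    parts = []
    for question, answer in (examples or []):
        parts.append(f"Q: {question}\nA: {answer}")  # one "shot" per example
    parts.append(f"Q: {task}\nA:")  # the actual query comes last
    return "\n\n".join(parts)

zero_shot = build_prompt("Translate 'Haus' into English.")
two_shot = build_prompt(
    "Translate 'Haus' into English.",
    examples=[("Translate 'Hund' into English.", "dog"),
              ("Translate 'Katze' into English.", "cat")],
)
```

The zero-shot variant contains only the query itself, while the two-shot variant prepends two worked examples that show the model the expected format of the answer.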

These prompting methods have gradually developed and become more complex, starting from Input–Output Prompting all the way to Tree-of-Thought Prompting, as displayed in Table 4.

When people start prompting an AI, they usually begin with simple prompts, like “Tell me something about…”. As such, the user inserts a simple input prompt and a rather unspecific, generalized output response is generated. The more specific the answer should be, the more concrete and narrow the input prompt should be. These are called Input–Output prompts (IOP) and are the simplest and most common form of how an AI is prompted (Liu et al., 2021). It has been found that the results turn out to be much better when there is not simply a straight line from the input to the output but when the AI has to insert some reasoning steps (Wei et al., 2023). This is referred to as Chain-of-Thought (CoT) prompting, where the machine is asked to explain the reasoning steps that lead to a certain outcome. A framework that has historically worked well is to prompt the AI to provide a solution “step-by-step”. Practically, it is possible to give ChatGPT or any other LLM a task and then simply add: “Do this step-by-step.” Interestingly, experiments have further shown that the results get even better when the system is first told to “take a deep breath”. Hence, “Take a deep breath and do it step-by-step” has become a popular addition to prompts (Wei et al., 2023). Such general addendums that can be added to any prompt to improve the results are sometimes referred to as a “universal and transferrable prompt suffix”, which is frequently employed as a method to successfully jailbreak an LLM (Zou et al., 2023).
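As a minimal sketch (assuming the suffix wording quoted above), such a universal suffix can simply be appended to any base prompt; `with_cot` and `COT_SUFFIX` are illustrative names of our own:

```python
# Illustrative sketch: append a Chain-of-Thought suffix to any base prompt.
# COT_SUFFIX is our own hypothetical constant following the wording
# discussed in the text.

COT_SUFFIX = "Take a deep breath and do it step-by-step."

def with_cot(prompt):
    # Strip any trailing period/space so the suffix joins cleanly.
    return prompt.rstrip(". ") + ". " + COT_SUFFIX

print(with_cot("Explain why the sky is blue"))
# → Explain why the sky is blue. Take a deep breath and do it step-by-step.
```

The same wrapper can be applied to any task prompt before sending it to the model.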

Yet another prompt engineering improvement is the discovery that narrative role plays can yield significantly better results. This means that an LLM is asked to put itself in the shoes of a certain person with a specific role, which usually helps the model to be much more specific in the answer it provides. Often, this is done via a specific form of role play known as expert prompting (EP). The idea is that the model should assume the role of an expert (where the expert’s role is first explained in detail) and then generate the result from an expert’s perspective. It has been demonstrated that this is a way to prompt the AI to be a lot more concrete and less vague in its responses (Xu et al., 2023). Building explicitly on CoT-prompting, a further improvement was detected in what has come to be known as Self-Consistency (SC) prompting. It deliberately works with CoT-phrases like “explain step by step…”, but adds that not just one line of reasoning but multiple lines should be pursued. Since not all of these lines may be equally viable and we may not want to analyze all of them ourselves, the model should extend its reasoning capacity to discern which of these lines makes the most sense in light of a given criterion. The reason for using SC-prompting is to minimize the risk of AI hallucination (meaning that the AI might invent things that are not true) and thus to let the model hash out for itself whether a generated solution might be potentially wrong or not ideal (Wang et al., 2023). In practice, there may be two ways to enforce self-consistency:

Generalized Self-Consistency: The model should determine itself why one line of reasoning makes the most sense and explain why this is so.

  • Example: “Discuss each of the generated solutions and explain which one is most plausible.”

Criteria-based Self-Consistency: The model is provided with specific information (or: criteria) that should be used to evaluate which line of reasoning holds up best.

  • Example: “Given that we want to respect the fact that people like symmetric faces, which of these portraits is the most beautiful? Explain your thoughts and also include the notion of face symmetry.”
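Programmatically, the core of Self-Consistency can be sketched as sampling several independent reasoning chains and keeping the most frequent final answer. In this toy sketch, `sample_chain` merely stands in for repeated calls to an actual LLM; the digits are dummy answers:

```python
# Toy sketch of Self-Consistency: sample several reasoning chains and keep
# the majority final answer. `sample_chain` is an illustrative stand-in for
# an LLM call that returns one chain's final answer.
from collections import Counter

def self_consistent_answer(sample_chain, n_samples=5):
    """Sample several chains and return the most frequent final answer."""
    answers = [sample_chain() for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

# Deterministic stub: a "model" whose chains mostly converge on "42".
_fake_answers = iter(["42", "42", "41", "42", "40"])
result = self_consistent_answer(lambda: next(_fake_answers))
# → "42" (three of the five sampled chains agree)
```

In a real setting, the majority vote over divergent chains is what filters out occasional hallucinated answers.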

Sometimes, one may feel a little uncreative, not knowing how to craft a good prompt to guide the machine towards the preferred response. This is here referred to as the prompt-wise tabula-rasa problem, since it feels like one is sitting in front of a “white paper” with no clue how best to start. In such cases, two prompting techniques can help. One is called the Automatic Prompt Engineer (APE) and the other is known as Generated Knowledge Prompting (GKn). APE starts out with one or several examples (of text, music, images, or anything else the model can work with) and asks the AI which prompts would work best to generate them (Zhou et al., 2023). This is helpful when we already know what a good response would look like but do not know how to guide the model to this outcome. An example would be: “Here is a love letter from a book that I like. I would like to write something similar to my partner but I don’t know how. Please provide me with some examples of how I could prompt an AI to create a letter in a similar style.” The result is a list of initial prompts that can help the user kickstart the refinement of the preferred prompt so that eventually a letter can be crafted that suits the user’s fancy. This essentially hands the hard work of thinking through possible prompts to the computer and relegates the user’s job to refining the resulting suggestions.

A similar method is known as Generated Knowledge (GKn) prompting, which assumes that it is best to first “set the scene” in which the model can then operate. There are parallels to both EP and APE prompting: a narrative framework is constructed to act as a reference for the AI to draw its information from, but this time, as in APE, the knowledge is not provided by the human but generated by the machine itself (Liu et al., 2022). An example might be: “Please explain what linguistics tells us about how the perfect poem should look. What are the criteria for this? Can you provide me with three examples?” Once the stage is set, one can start with the actual task: “Based on this information, please write a poem about…” There are two ways to create Generated Knowledge tasks: (i) the single prompt approach, and (ii) the dual prompt approach. The first simply places all the information within one prompt and then runs the model. The second works with two individual steps:

  • Step 1: First some facts about a topic are generated (one prompt)

  • Step 2: Once this is done, the model is prompted again to do something with this information (another prompt)
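The two steps above can be sketched as a dual-prompt wrapper. Note that `ask_llm` is a hypothetical placeholder for whatever model API is in use, so this illustrates only the control flow, not any real library:

```python
# Sketch of dual-prompt Generated Knowledge prompting. `ask_llm` is a
# hypothetical callable that sends a prompt to a model and returns text.

def generated_knowledge(ask_llm, topic, task):
    # Step 1: let the model generate background facts about the topic.
    facts = ask_llm(f"List the key facts and criteria about {topic}.")
    # Step 2: feed those facts back as context for the actual task.
    return ask_llm(f"Based on this information:\n{facts}\n\n{task}")
```

Keeping the two calls separate makes it explicit that the knowledge-generation output is placed verbatim into the context of the second, task-specific prompt.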

Although AI systems are being equipped with increasingly longer context windows (the part of the current conversation the model can “remember”, like a working memory), they have been shown to rely more strongly on data at the beginning and at the end of the window (Liu et al., 2023). Since there is evidence that not all information within a prompt is weighed equally and deemed relevant by the model, in some cases the dual prompt or even a multiple prompt approach may yield better results.

To date, the perhaps most complicated method is known as Tree-of-Thought (ToT) prompting. The landmark paper by Yao et al. (2023) introducing the method has received significant attention in the community, as it described a significant improvement and also highlighted shortcomings of previous methods. ToT uses a combination of CoT and SC-prompting and builds on them the idea that one can go back and forth, eventually converging on the best line of reasoning. It is similar to a chess game: there are many possible next moves, and the player has to think through multiple scenarios in their head, mentally going back and forth with certain pieces, before eventually deciding which would be the best next move. As an example, think of it like this: Imagine that you have three experts, each holding differing opinions. They each lay out their arguments in a well-thought-through (step-by-step) fashion. If an expert makes an argumentative mistake, they concede it and go a step back to their previous position to take a different route. The experts discuss with each other until they all agree upon the best result. This context is what can be called the ToT-context, which applies regardless of the specific task. The task itself is then the query to solve a specific problem. Hence a simplified example would look like this:

  1.


    “Imagine that there are three experts in the field discussing a specific problem. They each lay out their arguments step-by-step. They all hold different opinions at the start. After each step, they discuss which arguments are the best and each must defend its position. If there are clear mistakes, the expert will concede this and go a step back to the previous position to take the route of a different argument related to the position. If there are no other plausible routes, the expert will agree with the most likely solution still in discussion. This should occur until all experts have agreed with the best available solution.”

  2.


    “The specific problem looks like this: Imagine that Thomas is going swimming. He walks into the changing cabin carrying a towel. He wraps his watch inside the towel and brings it to his chair next to the pool. At the chair, he opens the towel and dries himself. Then he goes to the kiosk. There he forgets his towel and jumps into the pool. Later, he realizes that he lost his watch. Which is the most likely place where Thomas lost it?”

The present author’s experiments have indicated that GPT-3.5 provides false answers to this task when asked with Input–Output prompting. However, the responses turned out to be correct when asked with ToT-prompting. GPT-4 sometimes implements a similar method without being prompted, but often it does not do so automatically. A previous version of ToT was known as Prompt Ensembling (or DiVeRSe: Diverse Verifier on Reasoning Steps), which worked with a three-step process: (i) Using multiple prompts to generate diverse answers; (ii) using a verifier to distinguish good from bad responses; and (iii) using a verifier to check the correctness of the reasoning steps (Li et al., 2023).
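Stripped of all model details, the control flow behind ToT resembles a beam search over partial lines of reasoning. The sketch below is a deliberately toy version under that assumption: `propose` and `score` stand in for the LLM calls that would generate and evaluate candidate thoughts:

```python
# Toy sketch of Tree-of-Thought control flow as a beam search.
# `propose(state)` returns candidate next reasoning steps; `score(state)`
# rates a partial solution. Both are illustrative stand-ins for LLM calls.

def tree_of_thought(state, propose, score, depth=2, beam=2):
    frontier = [state]
    for _ in range(depth):
        # Expand every partial solution by every proposed next step...
        candidates = [s + step for s in frontier for step in propose(s)]
        # ...then keep only the `beam` highest-scoring ones. Backtracking
        # happens implicitly: weak branches are pruned from the frontier.
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return frontier[0]

# Stub example: extend a digit string, preferring larger numbers.
best = tree_of_thought("", propose=lambda s: ["1", "2", "3"],
                       score=lambda s: int(s))
# → "33"
```

The real method replaces the digit stubs with model-generated thoughts and model-generated evaluations, but the expand-score-prune loop is the same.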

Sometimes, there seems to be a degree of arbitrariness regarding best practices of AI, which may have to do with the way a model was trained. For example, telling GPT to “take a deep breath” does in fact appear to result in better outcomes, but it also seems strange. Most likely, this has to do with the fact that in its training material (which nota bene incorporates large portions of the publicly available internet data) this statement is associated with more nuanced behaviors. Just recently, an experimenter stumbled upon another strange AI behavior: when he incentivized ChatGPT with an imaginary monetary tip, the responses were significantly better – and the more tip he promised, the better the results became (Okemwa, 2023). Another interesting feature that has been widely known for a while now is that one can disturb an AI with so-called “adversarial prompts”. This was showcased by Daras and Dimakis (2022) in their paper entitled “Discovering the Hidden Vocabulary of DALLE-2” with two examples:

Example 1:

The prompt “a picture of a mountain” (showing, in fact, a mountain) was transformed into a picture of a dog when the prefix “turbo lhaff” was added to the prompt.

Example 2:

The prompt “Apoploe vesrreaitais eating Contarra ccetnxniams luryca tanniounons” reliably generated images of birds eating berries.

To us humans, nothing in the letters “turbo lhaff” has anything to do with a dog. Yet, DALL-E always generated the picture of a dog and transformed, for example, the mountain into a dog. Likewise, there is no reason to assume that “Apoploe vesrreaitais” has anything to do with birds or that “Contarra ccetnxniams luryca tanniounons” has anything to do with berries. Still, this is how the model interpreted the task every time. This implies that there are certain prompts that can modify the processing in unexpected ways, depending on how the AI was trained. This is still poorly understood, since to date there is no clear understanding of how these emergent properties arise from the mathematical operations within the artificial neural networks; this is currently the object of research in a discipline called Mechanistic Interpretability (Conmy et al., 2023; Nanda et al., 2023; Zimmermann et al., 2023).

Fostering critical thinking with AI

Critical thinking, in the context of AI education, involves the ability to analyze information, evaluate different perspectives, and create reasoned arguments, all within the framework of AI-driven environments. This skill is increasingly important as AI becomes more prevalent in various aspects of life and work. In educational settings, AI can be used as a tool not just for delivering content, but also for encouraging students to question, analyze, and think deeply about the information they are presented with (van den Berg & du Plessis, 2023). The use of AI in education offers unique opportunities to cultivate critical thinking. AI systems, with their vast databases and analytical capabilities, can present students with complex problems and scenarios that require more than just rote memorization or basic understanding. These systems can challenge students to use higher-order thinking skills, such as analysis, synthesis, and evaluation, to navigate through these problems. Moreover, AI can provide personalized learning experiences that adapt to the individual learning styles and abilities of students. This personalization ensures that students are not only engaged with the material at a level appropriate for them but are also challenged to push their cognitive boundaries. By presenting students with tasks that are within their zone of proximal development, AI can effectively scaffold learning experiences to enhance critical thinking (Muthmainnah et al., 2022).

As such, the integration of critical thinking in AI literacy courses is an important consideration. As students learn about AI, its capabilities, and its limitations, they are encouraged to think critically about the technology itself. This includes understanding the ethical implications of AI, the biases that can exist in AI systems, and the impact of AI on society. By incorporating these discussions into AI literacy courses, educators can ensure that students are not only technically proficient but also ethically and critically aware (Ng et al., 2021). There are a number of challenges that students face in a rapidly evolving world under the influence of Artificial Intelligence and critical thinking skills seem to be the most successful way to equip them against the problems at hand. Table 5 sketches out some of the major problems students face and how critical thinking measures can counteract them.

Table 5 Summary of AI challenges and critical thinking measures against them

The idea of teaching scaffolding helps foster students’ critical thinking skills in a digital and AI-driven context. There are several forms of scaffolding that lecturers, teachers, supervisors and mentors can apply (Pangh, 2018):

  • Prompt scaffolding: The teacher provides helpful context or hints and also asks specific questions to lead students on the path to better understanding a topic.

  • Explicit reflection: The teacher helps students to think through certain scenarios and where the potential pitfalls lie.

  • Praise and feedback: The teacher provides acknowledgments where good work has been done and gives a qualitative review on how the student is doing.

  • Modifying activity: The teacher suggests alternative strategies for how students can beneficially work with AI, thereby fostering responsible use.

  • Direct instruction: Through providing clear tasks and instructions, students learn how to navigate the digital world and how AI can be used.

  • Modeling: The teacher highlights examples of where students make mistakes in their use of digital tools and supports them where they have difficulty interacting with these tools.

This goes to show that critical thinking is a key resource for dealing adequately with an AI-driven world and that educators play a vital role in leading students into digital maturity.

Summary of main challenges and opportunities of AI in education

AI in education presents significant challenges and opportunities. Key challenges include the need for ongoing professional development for educators in AI technologies and pedagogical practices. Teachers require training in prompt engineering and AI integration into curricula, which must be restructured for AI literacy. This multidisciplinary approach involves computer science, ethics, and critical thinking. Rapid AI advancements risk leaving educators behind, potentially leading to classroom management issues if students surpass teacher knowledge.

Equitable access to AI tools is crucial to address the digital divide and prevent educational inequalities. Investment in technology and fair access policies are necessary, especially for underprivileged areas. Another challenge is avoiding AI biases, requiring diverse, inclusive training datasets and educator training in bias recognition. Additionally, balancing AI use with human interaction is vital to prevent social isolation and promote social skills development.

Opportunities in AI-integrated education include personalized learning systems that adapt to individual student needs, accommodating various learning styles and cognitive states. AI can assist students with special needs, like language processing or sensory impairments, through tools like AI-powered speech recognition. Ethical AI development is essential, focusing on transparency, unbiased content, and privacy-respecting practices. AI enables innovative content delivery methods, such as virtual and augmented reality, and aids in educational administration and policymaking. It also fosters collaborative learning, connecting students globally and transcending cultural barriers.

Practical suggestions

Enhancing AI literacy

In the quest to enhance AI literacy in the classroom and academia, a nuanced approach is essential. The creation of AI literacy courses would be a valuable asset. These courses should be woven into the existing curriculum, covering essential AI concepts, ethical considerations, and practical applications. It is crucial to adopt an interdisciplinary approach, integrating AI literacy across various subjects to showcase its broad impact. The role of AI as an educational tool in the future should not be overlooked. Integrating AI-driven tools for personalized learning can revolutionize the educational landscape, catering to individual learning styles and needs. AI can also function as a teaching assistant, assisting in grading, feedback, and generating interactive learning experiences. Furthermore, its role in research and project work should be encouraged, allowing students to use AI for data analysis and exploration of new ideas, while fostering a critical and ethical approach.

Specific AI tools can help to enhance the educational toolkit. Teachino, for instance, can be instrumental in curriculum development and classroom management. Perplexity can enhance knowledge retrieval through its natural language processing capabilities and its ability to connect information to external sources. Apps like HelloHistory can bring historical personas to life, thus creating a personalized and interactive teaching setting. Additionally, tools like Kahoot! and Quizizz can gamify learning experiences, and Desmos can offer interactive ways to understand complex mathematical concepts. Lecturers are advised to stay informed about the ongoing developments in the AI tools landscape, since it is constantly evolving; this can be seen in the once-popular app Edmodo, which served millions of students but no longer exists (Mollenkamp, 2022; Tegousi et al., 2020).

Educator proficiency in AI is just as important. Regular training and workshops for educators will ensure they stay updated with the latest AI technology advancements. Establishing peer learning networks and collaborations with AI professionals can bridge the gap between theoretical knowledge and practical application, enriching the teaching experience. Central to all these efforts is the fostering of a critical and ethical approach to AI. Ethical discussions should be an integral part of the learning process, encouraging students to contemplate AI's societal impact. Case studies and hypothetical scenarios can be utilized to explore the potential benefits and challenges of AI applications. Moreover, assessments in AI literacy should test not only technical knowledge but also the ability to critically evaluate the role and impact of Artificial Intelligence.

Advancing prompt engineering with teachers and students

The advancement of prompt engineering within educational settings offers a unique avenue for enriching the learning experience for both teachers and students. The cornerstone of implementing prompt engineering is to educate all parties involved about its methodologies. This involves not only teaching the basic principles but also delving into various prompt types, such as the difference between zero-shot and few-shot prompting, and the application of techniques like chain-of-thought or self-consistency prompts. Educators should receive training on how to design prompts that effectively leverage the capabilities of AI models, enhancing the learning outcomes in various subjects.

Collaboration between the lecturers and the students plays a pivotal role in the successful integration of prompt engineering in education. Class-wide collaborative sessions where students and teachers come together to experiment with different prompts can be highly effective. These sessions should focus on identifying which types of prompts yield the best results for different learning objectives and AI applications. Sharing experiences on what works and what does not can lead to a collective understanding and refinement of techniques. Such collaborative exercises also foster a community of learning, where both teachers and students learn from each other's successes and challenges. Creating exercises for each educational module that incorporate prompt engineering is another critical step. These exercises should be designed to align with the learning objectives of the module, offering students hands-on experience in using prompt engineering to solve problems or explore topics. For instance, in a literature class, students could use prompt engineering to analyze a text or create thematic interpretations. In a science class, prompts could be designed to explore scientific concepts or solve complex problems. These exercises should encourage students to experiment with different types of prompts, understand the nuances of each, and observe how subtle changes in phrasing or context can alter the AI's responses. This not only enhances their understanding of the subject matter but also develops critical thinking skills as they analyze and interpret the AI's output. To further enrich the learning experience, these exercises can be supplemented with reflective discussions. After completing a prompt engineering exercise, students can discuss their approaches, challenges faced, and insights gained. This reflection not only solidifies their understanding but also encourages them to think critically about the application of AI in problem-solving. 
Such exercises are especially powerful because both the students and the teaching staff learn a great deal about the technology at the same time.

Critical thinking with AI in the classroom

Workshops may be a useful tool for fostering critical thinking skills in modern education. These workshops should not only focus on the technicalities of AI but also on developing critical thinking skills in the context of AI use. They should include hands-on activities where students and teachers can engage with AI tools, analyze their outputs, and critically assess their reliability and applicability. The workshops can also cover topics such as identifying biases in AI algorithms, understanding the limitations of AI, and evaluating the ethical implications of AI decisions. Case studies play a pivotal role in understanding the ethical dimensions of AI. These should be carefully selected to cover a wide range of scenarios where the ethical implications are highlighted. Through these case studies, students can examine real-world situations where the decisions made by AI have significant consequences, encouraging them to think about the moral and societal impacts of AI technologies. The discussions should encourage students to debate different viewpoints, fostering an environment of critical analysis and ethical reasoning. Establishing institutional channels where students and teachers can bring their AI-related problems is essential to foster a culture of open communication and continuous learning. These channels can function like an innovation funnel, where ideas, concerns, and experiences with AI are shared, discussed, and explored. This could take the form of online forums, regular meet-ups, or suggestion boxes. These platforms can act as incubators for new ideas on how to use AI responsibly and effectively in educational settings.

Creating a culture of AI adoption in educational institutions is crucial. This culture should be built on the principles of ethical AI use, continuous learning, and critical engagement with technology. It involves not just the implementation of AI tools but also the fostering of an environment where questioning, exploring, and critically assessing AI is encouraged. This culture should permeate all levels of the institution, from policy-making to classroom activities. Encouraging students to question and explore AI's potential and limitations can lead to a deeper understanding and responsible use of these technologies. This includes facilitating discussions on topics such as AI's impact on job markets, privacy concerns, and the implications of AI in decision-making processes. By encouraging critical thinking around these topics, students can develop a nuanced understanding of AI, equipping them with the skills necessary to navigate an AI-driven world.

Conclusion: navigating the complexities and potentials of AI in education

The advent of AI in the realm of education marks a transformative era that is fundamentally redefining teaching and learning methodologies. This paper has critically examined the expansive role of AI, focusing particularly on the nuances of AI literacy, prompt engineering, and the development of critical thinking skills within the educational setting. As we delve into this new paradigm, the journey, although filled with unparalleled opportunities, is fraught with significant challenges that demand astute attention and strategic approaches.

One of the most compelling prospects offered by AI in education is the personalization of learning experiences. AI's capacity to tailor educational content to the unique learning styles and needs of each student holds the potential for a more engaging and effective educational journey. Moreover, this technology has shown remarkable promise in supporting students with special needs, thereby enhancing inclusivity and accessibility in learning environments. Additionally, the focus on AI literacy, prompt engineering, and critical thinking skills prepares students for the complexities of a technology-driven world, equipping them with essential competencies for the future.

However, these advancements bring their own set of challenges. A primary concern is the preparedness of educators in this rapidly evolving AI landscape. Continuous and comprehensive training for teachers is crucial to ensure that they can effectively integrate AI tools into their pedagogical practices. Equally important are the ethical and social implications of AI in education. The integration of AI necessitates a critical approach to address biases, ensure privacy and security, and promote ethical use. Another significant hurdle is the accessibility of AI resources: ensuring equitable access to these tools is imperative to prevent widening educational disparities. Additionally, developing a critical mindset towards AI among students and educators is fundamental to harnessing the full potential of these technologies responsibly. Perhaps the most significant danger is that both students and educators use AI systems without respecting their limitations (e.g., the fact that they may often hallucinate and provide wrong answers while sounding very authoritative on the matter).

Looking towards the future, several research and development avenues present themselves as critical to advancing the integration of AI in education:

  1. Curriculum Integration: Future research should explore effective methods for integrating AI literacy across various educational levels and disciplines.

  2. Ethical AI Development: Investigating how to develop and implement AI tools that are transparent, unbiased, and respect student privacy is essential for ethical AI integration in education.

  3. AI in Policy Making: Understanding how AI can assist in educational policy-making and administration could streamline educational processes and offer valuable insights.

  4. Cultural Shifts in Education: Research into how educational institutions can foster a culture of critical and ethical AI use, promoting continuous learning and adaptation, is crucial.

  5. Longitudinal Studies: There is a need for longitudinal studies to assess the long-term impact of AI integration on learning outcomes, teacher effectiveness, and student well-being. So far, this has not been possible due to the novelty of the technology.

The future of education, augmented by AI, holds vast potential, and navigating its complexities with a focus on responsible and ethical practices will be key to realizing its full promise. The present paper has argued that this can be effectively done, amongst others, through implementing AI literacy, prompt engineering expertise, and critical thinking skills.

Data availability

No additional data is associated with this paper.



Acknowledgements

All staff and students of the Kalaidos University of Applied Sciences are warmly thanked for their continuous activity and discussions about the topic amongst themselves and with the author.

Funding

There was no external funding for this research.

Author information


Corresponding author

Correspondence to Yoshija Walter.

Ethics declarations

Competing interests

There are no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit

About this article

Cite this article

Walter, Y. Embracing the future of Artificial Intelligence in the classroom: the relevance of AI literacy, prompt engineering, and critical thinking in modern education. Int J Educ Technol High Educ 21, 15 (2024).
