Research article · Open access

First-year students' AI-competence as a predictor for intended and de facto use of AI-tools for supporting learning processes in higher education

Abstract

The influence of Artificial Intelligence on higher education is increasing. As important drivers for student retention and learning success, generative AI-tools such as translators, paraphrasers and, most recently, chatbots can support students in their learning processes. The perceptions and expectations of first-year students related to AI-tools have not yet been researched in depth, nor have the requirements and skills necessary for the purposeful use of AI-tools. This research examines the relationship between first-year students' knowledge, skills, and attitudes and their use of AI-tools for their learning processes. Analysing the data of 634 first-year students revealed that attitude towards AI significantly explains the intended use of AI-tools. Additionally, the perceived benefits of AI-technology are predictors of students' perception of AI-robots as cooperation partners for humans. Educators in higher education must facilitate students' AI-competencies and integrate AI-tools into instructional designs. As a result, students' learning processes will be improved.

Introduction

AI-robots are agents programmed to fulfill tasks traditionally done by humans (Dang & Liu, 2022). The number of interactions between humans and AI-robots is increasing, which is a strong indicator of the integration of AI-technology into the lives of humans (Kim et al., 2022). A popular example is the deployment of chatbots on websites. These AI-robots can guide users and respond to basic user requests (Larasati et al., 2022). The technology behind semi-automated and fully automated human-like task fulfillment is based on AI-methods and AI-algorithms (Gkinko & Elbanna, 2023). These AI-methods and -algorithms form the main programming characteristics of AI-robots (Woschank et al., 2020). These features lead to an increasing similarity in the performance of humans and AI-robots (Byrd et al., 2021). Additionally, the appearance and behavior of AI-robots are becoming more human-like (Hildt, 2021). While most machines are easily distinguishable from humans, AI-robots might be hard to identify (Desaire et al., 2023), and the ability to identify AI-robots is one of the many challenges accompanying these new technologies. As a result, humans even begin to attribute human-like understanding and mental capacities to AI-robots (Roesler et al., 2021).

Accordingly, new and changing demands on humans' digital competencies arise from the various applications of AI-robots in all sectors of human life (Seufert & Tarantini, 2022). One of these fields is higher education, which is strongly affected by the introduction of AI-technology and AI-robots (Ouyang et al., 2022; Popenici & Kerr, 2017). Future applications of AI-technology can be found at all levels of higher education (Ocaña-Fernández et al., 2019). On the student level, virtual AI teaching assistants (Kim et al., 2020; Liu et al., 2022) and intelligent tutoring systems (Azevedo et al., 2022; Latham, 2022) have the capability to guide individual learner paths (Brusilovsky, 2023; Rahayu et al., 2023). Educators might implement automated grading and assessment tools (Heil & Ifenthaler, 2023; Celik et al., 2022) or create educational content with generative AI (Bozkurt & Sharma, 2023; Kaplan-Rakowski et al., 2023). The administration of higher education institutions has to adapt its policies to the new technology (Chan, 2023), while incorporating learning analytics tools to improve study conditions, reduce drop-out rates, and adapt study programs (Aldowah et al., 2019; Ifenthaler & Yau, 2020; Ouyang et al., 2023; Tsai et al., 2020). These developments are embedded into national policy-making processes, such as the creation of ethics guidelines (Jobin et al., 2019) and competence frameworks (Vuorikari et al., 2022) for AI-technology.

According to recent studies, first-year students enter institutions of higher learning with various perceptions and expectations about university life, for instance, in terms of social aspects, learning experiences, and academic support (Houser, 2004). While students’ generic academic skills appear to be well-established for coping with higher education requirements, their competencies related to AI seem to be limited (Ng et al., 2023).

As of now, there are no conceptual frameworks that cover the use of human-like AI-technology focusing on first-year students within the context of higher education. Thus, this study targets this research gap. For this purpose, seven functionalities of AI-tools have been conceptualized for their application in the context of higher education. This conceptualization is a helpful differentiation for analyzing the intent and frequency of use, as well as possible indicators that might affect them. As a result, implications for the further implementation of AI-tools in higher education learning processes will be derived.

Background

First-year students

First-year students' perceptions and expectations and how they cope with academic requirements in higher education have been identified as important factors for learning success and student retention (Mah & Ifenthaler, 2018; Tinto, 1994; Yorke & Longden, 2008). Several studies identified a mismatch between first-year students' perceptions and academic reality (Smith & Wertlieb, 2005). Furthermore, research indicates that many first-year students do not know what is expected at university and are often academically unprepared (Mah & Ifenthaler, 2017; McCarthy & Kuh, 2006). Students' preparedness is particularly relevant concerning generic skills such as academic competencies, which they should possess when entering university (Barrie, 2007). Numerous aspects, including sociodemographic features, study choices, cognitive ability, motivation, personal circumstances, and academic and social integration, have been linked to first-year students' learning success and retention in higher education (Bean & Eaton, 2020; Sanavi & Matt, 2022). Mah and Ifenthaler (2017) identified five academic competencies for successful degree completion: time management, learning skills, self-monitoring, technology proficiency, and research skills. Accordingly, coping with academic requirements is an important driver of student retention in higher education (Thomas, 2002). Moreover, students' perceptions of their first year can affect student success (Crisp et al., 2009).

More recently, it has been argued that competencies related to AI are an important driver for student retention and learning success (Bates et al., 2020; Mah, 2016; Ng et al., 2023). Nonetheless, first-year students’ perceptions, expectations, and academic competencies for coping with academic requirements related to AI-tools have not yet been researched in-depth.

Conceptualization of AI-tools in higher education

Dang and Liu (2022) propose a differentiation of AI-robots, which is also used in this study. They categorize AI-robots into "mindful" tools (AI-robots with increasingly human characteristics) and "mindless" tools (AI-robots with machine characteristics). The so-called mindful AI-robots can perform more complex tasks, react to the prompts of the users in a more meaningful way, and are designed to act and look like humans. Mindless AI-robots, on the other hand, perform less complex tasks and appear more like machines. In the following, a short overview of AI-tools is provided, including their main functionality and examples of practical use in higher education learning processes:

Mindless AI-robots

1) Translation text generators: These tools use written text as input and translate the text into a different language. Translation text generators can help to quickly translate text into the language a student is most familiar with or to translate into a language that is required by the assignment. Many study programs require students to hand in (some) papers in a language different from the study program’s language (Galante, 2020). Two of the most prominent translation text generators are Google Translate and DeepL (Martín-Martín et al., 2021).

2) Summarizing/rephrasing text generators: These tools use written text as input and can change the structure of the text. On the one hand, they are used to extract critical information, keywords, or main concepts out of structured text, reducing the complexity of the input text. In this way, they help the user focus on the input text's most important aspects, allowing them to get a basic understanding of complex frameworks. Summarizing text, such as research literature or lecture slides, is an important learning strategy in the context of higher education (Mitsea & Drigas, 2019). On the other hand, these text generators can rephrase text input, an important task when writing research papers: In most cases, written research assignments include some theoretical chapter based on existing research literature. Students must rephrase and restructure existing research literature to show their understanding of concepts and theories (Aksnes et al., 2019). Quillbot is an example of such a rephrasing tool (Fitria, 2021).

3) Writing assistants: Writing assistants can enhance the quality of written text. These tools automatically check for grammar and spelling mistakes, while the text is being created. Furthermore, these tools can give recommendations to the writer to improve the language used: they can provide suggestions for alternative formulations to avoid colloquial language and unnecessary iterations. Writing assistants are usually a part of word processors (e.g., Microsoft Word), but standalone programs or extensions such as Grammarly also exist (Koltovskaia, 2020).

4) Text generators: These tools can automatically generate written text. Text generators take short prompts as input and produce text based on this input. The output text is mainly used for blog entries, text-based social media posts, or Twitter messages. They can be differentiated from chatbots in that they cannot produce more complex pieces of text. WriteSonic is an example of such a text generator (Almaraz-López et al., 2023).

Mindful AI-robots

5) Chatbots: Chatbots are applications that simulate human interactions (Chong et al., 2021). In a business context, they are generally used to answer customer questions automatically. In education, chatbots help to guide learners through online environments or administrative processes. With the release of ChatGPT, a new kind of chatbot was introduced. These chatbots can produce various output formats, including working algorithms, presentations, or pictures, based on prompts that are very similar to human interactions (Almaraz-López et al., 2023; Fauzi et al., 2023; Fuchs, 2023). Students can use chatbots to automatically produce content that is traditionally created by students as part of instructional designs, especially for final assessments.

6) Virtual avatars: Virtual avatars are digital representations of living beings. They can be used in online classroom settings to represent teachers and learners alike. In these classroom settings, virtual representations, such as Synthesia, have been shown to improve students’ learning performance, compared to classes without virtual representation (Herbert & Dołżycka, 2022).

7) Social-humanoid robots: These tools not only simulate human behavior and perform human tasks; in many cases, social-humanoid robots are also built to approximate human complexity, featuring hands, legs, and faces (van Pinxteren et al., 2019). They can perform human-like mimicry to varying degrees. Currently, social-humanoid robots are used as servers in restaurants and are being tested in medical and educational institutions (Henschel et al., 2021).

AI-competencies and AI-ethics

The European DigComp Framework 2.2 is a comprehensive framework that organizes the different components of digital competencies deemed essential for digitally competent citizens (Vuorikari et al., 2022). Within this framework, AI literacy can be found in three dimensions: knowledge, proficiency, and attitudes. Basic ideas about the functionality and application areas of AI technology are allocated to the knowledge dimension. This dimension also holds theoretical knowledge about AI laws and regulations, such as the European data protection regulation. The ability of a person to take advantage of AI and use it to improve various aspects of their life can be found in the proficiency dimension. Successfully deploying AI technology to solve problems requires the capability to choose adequate tools and subsequently control these chosen tools. Competent citizens must be able to form an opinion on AI technology's benefits, risks, and disadvantages, which allows them to participate in political and social decision-making processes. Through a meta-analysis of guidelines, Jobin et al. (2019) identify eleven ethical principles which must be considered when working with AI, such as transparency, justice, fairness, and trust. Hands-on examples are the guidelines by Diakopoulos et al. (2016) as well as Floridi et al. (2018). The attitude dimension holds these competencies. As with many technological advancements, higher education will be one of the main drivers for facilitating digital AI-competencies (Cabero-Almenara et al., 2023; Ehlers & Kellermann, 2019).

Furthermore, AI technology will change the various learning processes within higher education (Kim et al., 2022). This includes the perspective of educators (Kim et al., 2022), learners (Zawacki-Richter et al., 2019), and administration alike (Leoste et al., 2021). Although research indicates these impacts, research on AI-robots in higher education is scarce, mainly because higher education institutions rarely use the different applications broadly (Kim et al., 2022; Lim et al., 2023).

The functionalities of the different tools offer students various potential applications for learning processes. Following the Unified Theory of Acceptance and Use of Technology (UTAUT), the intent to use new digital tools as well as the actual usage of technology might be influenced by the expectation of performance, the expectation of effort, social influence, and facilitating conditions (Venkatesh et al., 2003). Strzelecki (2023) states that the assumptions made by UTAUT also hold for AI-tools, more specifically ChatGPT, although he could not identify a significant effect from facilitating conditions. In accordance with the DigComp 2.2 framework, this study focuses on students’ attitudes, proficiency, and knowledge regarding AI-technology as additional constructs influencing the intent to use and actual usage of AI-tools.

Furthermore, the study builds on the considerations by Dang and Liu (2022) and examines which constructs influence students' perception of AI-technology as competition and as a cooperation partner for humans. Research in the field of AI uncovers a range of possible outcomes of increasing AI integration into human society (Einola & Khoreva, 2023). Some argue that AI technology will compete with humans in the workplace, leading to massive job losses (Zanzotto, 2019) and the deskilling of human workers (Li et al., 2023). Others see AI as a potential cooperation partner for humans, automating processes (Bhargava et al., 2021; Joksimovic et al., 2023) or relieving humans from physical and psychological stress (Raisch & Krakowski, 2021).

Hypotheses

This research project aims to better understand first-year students’ perceptions as well as the intended and de facto use of AI-tools. While AI-competencies are understood as an essential driver for learning success and student retention (Ng et al., 2023), the following hypotheses emerge from the research gaps identified for the context of higher education:

Hypothesis 1

The underlying constructs of AI-competencies (skills, attitude, knowledge) have a positive effect on the intention to use AI-robots, while the intention to use AI-robots has a positive effect on the actual use of AI-robots.

Hypothesis 2a

Students’ AI-competencies and the perceived benefits of AI-technology are predictors for students’ perception of AI-robots as cooperation partners for humans.

Hypothesis 2b

Students’ AI-competencies and the perceived risks of AI-technology are predictors for students’ perception of AI-robots as competition for humans.

Method

Data collection and participants

An online questionnaire was designed to collect data from first-year students at a German and a Swiss university. Potential participants were invited to take part in the survey via an e-mail sent through the universities' e-mail systems. In total, N = 638 first-year students participated in the survey. On average, they were 20.62 years old, with a standard deviation of 2.25 years. Of the N = 638 students, N = 309 identified as male, N = 322 as female, and N = 7 as non-binary. Among the mindless tools, the lowest average use was found for paraphrasing and summarizing tools (M = 1.13, SD = 1.51). The use of online writing assistants was slightly higher (M = 1.94, SD = 1.76), and the highest average usage was found for online translation tools (M = 3.53, SD = 1.18). Overall, the average use of the mindless robots was relatively low (M = 2.2, SD = 1.05). The willingness to use the robots ranged from the lowest for virtual avatars (M = 2.23, SD = 1.13) to the highest for online translation tools (M = 3.16, SD = 1.17).

Instrument

The online questionnaire consists of three parts. The first part comprises questions on knowledge, skills, and attitudes regarding AI-technology (Vuorikari et al., 2022). The different AI-robots are presented in the second part of the questionnaire. For each tool, current and intended usage was gathered, following the Unified Theory of Acceptance and Use of Technology (UTAUT) (Venkatesh et al., 2003). The items were formulated to match the different tools with tasks that are relevant for students, such as writing assignments or preparing for exams. In addition, ethical considerations for each tool were prompted (Vuorikari et al., 2022). Participants rated their actual use of the robots on a 6-point Likert scale and their willingness to use them on a 5-point Likert scale. The third part of the instrument comprises items that collect demographic data. The instrument can be found in Additional file 1.

Analysis

A path analysis was conducted based on the factors of AI-competence taken from the DigComp 2.2 framework (skills, attitude, knowledge), combined with the UTAUT model's assumption that the intention to use technology influences its actual use. A visualization of the model can be found in Fig. 1. The path analysis was conducted in RStudio using the package lavaan (Rosseel, 2012).

Fig. 1 Path analysis—skills, attitudes, and knowledge as predictors for intended and de facto use of AI-tools
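To make the analysis transparent, a minimal lavaan sketch of this path model is shown below. The construct scores (skills, attitude, knowledge, intention, actual_use) and the data frame survey_data are hypothetical placeholder names; the model specification actually used for the study may differ.

```r
# Minimal sketch of the path model (hypothetical variable names).
library(lavaan)

model <- '
  # DigComp 2.2 constructs predict the intention to use AI-tools
  intention  ~ skills + attitude + knowledge
  # UTAUT assumption: intention predicts actual use
  actual_use ~ intention
'

fit <- sem(model, data = survey_data)

# Standardized path coefficients plus chi-square, CFI, TLI, and RMSEA
summary(fit, fit.measures = TRUE, standardized = TRUE)
```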

Multiple linear regression analyses were conducted in RStudio to answer Hypotheses 2a and 2b.
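A corresponding sketch for the two regression models is given below; again, the variable names are hypothetical placeholders for the scale scores described above, not the authors' actual code.

```r
# Hypothesis 2a: AI-competence and perceived benefits as predictors of
# the rating of AI as a cooperation partner (hypothetical variable names).
model_cooperation <- lm(cooperation ~ ai_competence + perceived_benefits,
                        data = survey_data)
summary(model_cooperation)  # coefficients, t-values, R-squared, F-statistic

# Hypothesis 2b: AI-competence and perceived risks as predictors of
# the rating of AI as competition for humans.
model_competition <- lm(competition ~ ai_competence + perceived_risks,
                        data = survey_data)
summary(model_competition)
```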

Results

Hypothesis 1: the influence of skills, attitude, and knowledge on the intended use of AI-tools

The model has a relatively good fit, with a non-significant chi-square, χ²(3, N = 638) = 7.3, p = 0.06, and a Comparative Fit Index (CFI) of 0.96, above the respective cut-off value of 0.95. The Tucker-Lewis Index (TLI) of 0.91 is slightly below 0.95. The RMSEA of 0.05 is below the cut-off of 0.08.

The results indicate a significant positive influence of attitude (β = 0.26, p < 0.01) and a significant negative influence of skills (β = −0.10, p = 0.02) on the intention to use the tools. Knowledge seems to have no significant impact (β = −0.06, p = 0.19). Furthermore, the intention to use the AI-tools significantly predicts their actual use (β = 0.33, p < 0.01, R² = 0.11). The path analysis is shown in Fig. 1.
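For readers who wish to trace these figures, the reported quantities can be read from a fitted lavaan object such as the one sketched in the Method section; the function calls below are standard lavaan functions, while fit itself remains a placeholder name.

```r
# Fit indices reported above (chi-square, CFI, TLI, RMSEA)
fitMeasures(fit, c("chisq", "df", "pvalue", "cfi", "tli", "rmsea"))

# Standardized path coefficients (beta) with standard errors and p-values
standardizedSolution(fit)

# R-squared of the endogenous variables (intention and actual use)
inspect(fit, "rsquare")
```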

Hypothesis 2a: perceived benefits as indicators for AI as cooperation partner

A multiple linear regression was conducted to analyze the factors influencing students' rating of AI as a cooperation partner for humans, with AI-competence and the perceived benefits of AI included as predictors. Together, the two factors explain 15.41% of the variance in the rating of AI as a cooperation possibility for humans, F(2, 635) = 57.84, p < 0.01. Both AI-competence, β = 0.22, t(637) = 5.9, p < 0.01, and perceived benefits, β = 0.27, t(637) = 7.2, p < 0.01, are significant predictors.

Hypothesis 2b: perceived risks as indicators for AI as competition

A multiple linear regression was conducted to analyze the factors influencing students' rating of AI as a competitor for humans. Perceived risks and AI-competence together explain 2.26% of the variance in the dependent variable, F(2, 635) = 7.33, p < 0.01. While AI-competence is a significant predictor, β = 0.09, t(637) = 10.2, p < 0.01, perceived risk is not, β = 0.03, t(637) = 1.64, p = 0.10.

Discussion

Findings

The analyzed data provide insights into the actual use and implementation of AI-tools in students' learning processes during their entry phase. So far, the participants favor mindless AI-tools over mindful tools. These mindless AI-tools provide useful functionalities for tasks that can be considered typical for higher education programs, such as written papers, presentations, or reports (Flores et al., 2020; Medland, 2016). These functionalities include translations (Einola & Khoreva, 2023) or summaries (Fitria, 2021). The analysis shows that the intention to use these tools is affected by students' perceived skills, knowledge, and attitudes (Venkatesh et al., 2003). A positive attitude, which includes a general interest in and openness towards AI technology as well as a strong interest in critically discussing it, has a positive effect on the intended use of AI-tools. Students' curiosity about the new technology leads to hands-on experimentation and might give them a better understanding of what AI-tools can offer in practice, including a reflection on the challenges and opportunities of AI-technology. The findings of the path analysis indicate that proficiency in controlling the tools does not have a positive effect on the intended use. This result can be explained by the aforementioned importance of attitude towards AI-technology (Almaraz-López et al., 2023; Vuorikari et al., 2022). Students' curiosity about the new technology might outweigh their perceived need for a distinct AI proficiency.

Additionally, many AI-tools are easily accessible and give the impression of being easy to use. The same argument holds for the construct of knowledge: students' intention to use AI-tools for learning processes appears to be independent of their theoretical knowledge of the systems' internal functionality. While this knowledge might help students to better understand the results they receive from AI-tools or improve their ability to formulate adequate prompts (Zamfirescu-Pereira et al., 2023), its absence does not present itself as a barrier to the intended use.

Implications

These findings have important implications for the further implementation of AI-tools in higher education learning processes (Heil & Ifenthaler, 2023; Celik et al., 2022; Kaplan-Rakowski et al., 2023; Latham, 2022; Liu et al., 2022; Ocaña-Fernández et al., 2019). At first glance, using AI-tools does not require prior practical or theoretical training from students. At the same time, students might not be able to fully apprehend the possibilities of AI-tools or effectively use them to improve their learning processes (Alamri et al., 2021; Børte et al., 2023). Educators should, therefore, integrate these tools into their instructional design practices and pair them with additional practices to facilitate students' AI-competencies (Lindfors et al., 2021; Sailer et al., 2021; Zhang et al., 2023). As a result, students will be able to use AI-tools to improve their learning processes, while simultaneously being able to critically reflect on the input, output, and influence of the respective AI-tools.

The results of Hypotheses 2a and 2b show a significant effect of AI competence and the perceived benefits of AI-tools on the expected cooperation potential of AI technology (Bhargava et al., 2021; Raisch & Krakowski, 2021). Instructional designers and other stakeholders in higher education need to provide best-practice examples of how AI-tools can be used to positively influence learning processes if they want to facilitate the usage of the respective tools.

Limitations and outlook

ChatGPT was not yet openly accessible when the data for this survey were collected. The overall usage of AI-tools has likely increased since ChatGPT was introduced to a broader user base (Strzelecki, 2023). The presence of ChatGPT in media and scientific discussions might also have led students to look into other AI-tools, such as DeepL (Einola & Khoreva, 2023) or Quillbot (Fitria, 2021). The composition of the student sample also limits the study's results: while the Swiss university is more open towards the usage of AI technology, policymakers at German universities tend to be more restrictive towards the use of AI (von der Heyde et al., 2023). To overcome this sampling limitation, future studies will include students from a broader range of academic years, improving the generalizability of the results.

The present discussion about ChatGPT, and the influence of AI-tools on higher education in general, underlines the need to educate learners about AI and to foster their AI-competencies (Almaraz-López et al., 2023; Chong et al., 2021; Fauzi et al., 2023). A second study is currently being conducted to analyze how the introduction of ChatGPT to the public sphere has changed students' attitudes toward AI and their intended and actual use of AI-tools. It can be assumed that this powerful tool leads to an increasing awareness of AI, as well as broad usage across different study programs and for various tasks within higher education programs. Further studies should include additional research approaches, such as think-aloud studies or interviews with students, to collect additional data about students' experiences and usage of AI-tools. These approaches give insights into the teaching strategies that might help students develop AI-competencies and improve their learning outcomes through AI-tools. An example of such a strategy is a class that teaches students to write scientific texts with the support of ChatGPT. A comprehensive understanding of the necessary competencies and pedagogical concepts is the foundation for holistic AI literacy programs. These programs need to be accessible to all students and flexible enough to accommodate different levels of prior knowledge and learning preferences. Another important task for ongoing research projects is the analysis of the relationship between AI-competencies, pedagogical concepts, and students' learning outcomes, especially regarding the different tools which might be used in the future. Additionally, longitudinal studies might be best suited to gather detailed data throughout AI-supported learning processes.

Conclusion

The increasing capabilities of AI-tools offer a wide range of possible applications in higher education institutions. Once the gap between theoretical potential and applicable solutions is closed, multiple stakeholders, such as administrators, educators, and students, will be able to benefit from individualized learning paths, automated feedback, or data-based decision-making processes. Lately, an increasing amount of research has been published to close this gap. The introduction of ChatGPT to the general public has fueled the discussion about AI technology, especially in the field of higher education. One of the challenges inherent in the implementation of AI into learning processes is the facilitation of students' AI-competencies. Students need the practical skills, theoretical knowledge, and reflective attitudes to unlock the potential of AI-technology for their learning processes. Educators and higher education institutions have the responsibility to create safe learning environments which foster points of contact with AI as well as possibilities to actively engage with it. These learning environments must provide students with access to relevant AI-tools and must be founded on holistic legal frameworks and regulations.

Availability of data and materials

The data supporting this study's findings are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.


Acknowledgements

Not applicable.

Funding

Not applicable.

Author information


Contributions

All authors participated in planning the study, designing the data collection tools, collecting and analyzing data for the study. The first author (corresponding author) led the writing up process, with contributions from the second and third authors. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Jan Delcker.

Ethics declarations

Competing interests

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

AI-Competence Instrument.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Delcker, J., Heil, J., Ifenthaler, D. et al. First-year students AI-competence as a predictor for intended and de facto use of AI-tools for supporting learning processes in higher education. Int J Educ Technol High Educ 21, 18 (2024). https://doi.org/10.1186/s41239-024-00452-7

