- Research article
- Open access
Acceptance of artificial intelligence among pre-service teachers: a multigroup analysis
International Journal of Educational Technology in Higher Education volume 20, Article number: 49 (2023)
Abstract
Over the past few years, there has been a significant increase in the use of artificial intelligence (AI)-based applications in education. Because pre-service teachers’ attitudes towards educational technology that utilizes AI can affect the learning outcomes of their future students, it is essential to know more about pre-service teachers’ acceptance of AI. The aims of this study are (1) to discover which factors determine pre-service teachers’ intentions to use AI-based educational applications and (2) to determine whether gender differences exist within the determinants that affect those behavioral intentions. A sample of 452 pre-service teachers (325 female) participated in a survey at one German university. Based on a prominent technology acceptance model, structural equation modeling, measurement invariance testing, and multigroup analysis were carried out. The results demonstrated that eight out of nine hypotheses were supported; perceived ease of use (β = 0.297***) and perceived usefulness (β = 0.501***) were identified as the primary factors predicting pre-service teachers’ intention to use AI. Furthermore, the latent mean difference results indicated that two constructs, AI anxiety (z = − 3.217**) and perceived enjoyment (z = 2.556*), differed significantly by gender. In addition, it is noteworthy that the paths from AI anxiety to perceived ease of use (p = 0.018*) and from perceived ease of use to perceived usefulness (p = 0.002**) are moderated by gender. This study confirms the determinants, based on the Technology Acceptance Model 3, that influence German pre-service teachers’ behavioral intention to use AI-based applications in education. Furthermore, the results demonstrate how essential it is to address gender-specific aspects in teacher education, given the generally high percentage of female pre-service teachers. This study contributes to the state of the art in AI-powered education and teacher education.
Introduction
Technology powered by artificial intelligence (AI) is becoming increasingly significant in our daily lives, subtly altering our ways of thinking, behaving, and interacting with one another (Chen et al., 2020a). Rapid growth in the use of AI technologies has been observed in the field of education, where they are radically changing the nature of classroom instruction (Zhang & Aslan, 2021). For example, the emergence of ChatGPT has already generated significant interest and intense debate within the education field. Specifically, ChatGPT and similar AI technologies have the potential to significantly impact the education sector by providing personalized learning experiences for students and automating administrative tasks for educators. Simultaneously, AI technologies played an essential part in altering education during the COVID-19 pandemic by gathering and analyzing student data for adaptive learning (Lee & Han, 2021). As a result, there has been a recent uptick in the number of studies looking into the possible benefits of AI in K-12 and higher education (Chen et al., 2020b). While some recognize the potential of AI in education and see it as a way to make education fairer and more equitable for all students, others are skeptical and reject it because they fear it will replace teachers and increase unemployment (Reiss, 2021). Therefore, examining and understanding the acceptance of AI is vital, particularly for pre-service teachers.
Preparing pre-service teachers for AI-powered education is challenging (Pedró et al., 2019). There is therefore a substantial demand for research on AI acceptance in teacher education and on the factors that govern its use; yet such research is sparse. Current research on technology acceptance in the educational context is based on the Technology Acceptance Model (TAM) (Granić & Marangunić, 2019; Scherer & Teo, 2019; Tarraga-Minguez et al., 2021). However, empirical research on AI acceptance in teacher education has predominantly focused on in-service teachers, with only a few studies focusing on pre-service teachers. For instance, Wang et al. (2021) researched the intention of in-service teachers in higher education to utilize AI tools. Their study was based on the TAM to predict teachers’ intention to implement AI tools through anxiety, self-efficacy, attitude toward AI, perceived ease of use, and perceived usefulness. Choi et al. (2022) conducted a study focusing on the perceived trust of in-service teachers using AI educational tools with the TAM. Their research concluded that the ease of use of the AI tool was the most significant factor in determining whether teachers would accept AI. For pre-service teachers, Sánchez-Prieto et al. (2019) proposed a TAM-based technology adoption model to investigate the factors involved in implementing AI-driven assessment. However, there is no further empirical study on pre-service teachers’ acceptance of AI. A further important consideration is the differential acceptance of AI technology by gender, because a high percentage of the students who will become teachers are female. Previous research on pre-service teachers’ acceptance of technology has found that gender was a moderator in some cases, but the results have been inconsistent (Papadakis, 2018; Teo, 2010; Teo et al., 2015).
Despite some theoretical and empirical studies into teachers’ attitudes towards AI, it remains unclear how pre-service teachers think about such AI-powered tools. Furthermore, because of the high percentage of female students in teacher education, the impact of gender differences should be considered. Therefore, pre-service teachers’ perspectives on AI-driven technology for teaching and learning, the underlying factors of their intention to utilize it, and the role of gender need to be understood to facilitate the incorporation of AI technologies into future education. To that end, a questionnaire was developed and administered to pre-service teachers at one German university. The questionnaire was based on TAM3 (Venkatesh & Bala, 2008) because this theoretical model can adequately test how technology and AI are accepted in various settings (Sánchez-Prieto et al., 2019; Scherer & Teo, 2019).
Consequently, this study aims to address the mentioned research gap by using the TAM3 to understand pre-service teachers’ AI acceptance and the impact of gender. To achieve this objective, the study first employs a research model based on TAM3 and uses structural equation modeling (SEM) to measure pre-service teachers’ acceptance of AI and its relevant determinants. Second, the moderating effect of gender in the model is examined by testing measurement invariance and performing multigroup analysis (MGA). Finally, the study’s main findings and future directions are presented and discussed.
Literature review
TAM in teacher education
The TAM has been widely adapted and applied to investigate how users feel about and react to various kinds of technology. Teachers’ technology acceptance has been investigated as a multi-faceted phenomenon with both exogenous and endogenous influences (Scherer & Teo, 2019). The TAM therefore provides an appropriate approach to explaining pre-service teachers’ attitudes towards technology use; specifically, it is helpful in describing pre-service teachers’ utilization of various technologies in different contexts. It is thus necessary first to understand the development of the TAM and its application in teacher education, especially for pre-service teachers. The TAM was developed to characterize users’ intentions towards adopting technology by drawing on the Theory of Reasoned Action (TRA; Ajzen & Fishbein, 1980) and the Theory of Planned Behavior (TPB; Ajzen, 1985). The TAM, built on the TRA, can be used to predict the adoption of any specific technology (Davis, 1989). In follow-up empirical studies, more and more external variables were added to the model as increasing criticism of the TAM contributed to its development, for example, in the Technology Acceptance Model 2 (TAM2; Venkatesh & Davis, 2000), the Unified Theory of Acceptance and Use of Technology (UTAUT; Venkatesh et al., 2003), and the Technology Acceptance Model 3 (TAM3; Venkatesh & Bala, 2008). Despite the wide variety of TAM versions, users’ intention can be explained by three core factors: perceived ease of use, perceived usefulness, and attitude towards using (e.g., Davis, 1989; Marangunić & Granić, 2015).
Over the past few years, many empirical studies have been conducted with pre-service teachers to better understand the factors that shape their perspectives on technology in different settings. In teacher education, TAM-based studies using these different model versions have been applied to a broad range of educational tools (Chen & Tseng, 2012; Koutromanos et al., 2015; Luan & Teo, 2009; Mac Callum et al., 2014), across cultural contexts (Teo et al., 2009), and with a focus on gender (Emin & Sami, 2016; Shashaani, 1993; Teo et al., 2015). Although the number of empirical TAM-based studies is high, the findings are inconsistent due to differences in sample sizes, study designs, and application settings (Scherer & Teo, 2019). In particular, studies differ both in the path coefficients within the TAM and in the effects that moderating variables have on the model.
The path coefficients for the TAM were predominantly consistent, although there were a few discordant paths within the pre-service teacher samples. Firstly, a considerable number of studies revealed that perceived usefulness was a stronger predictor of pre-service teachers’ eventual use of technology than perceived ease of use (Baydas & Goktas, 2017; Koutromanos et al., 2015; Teo, 2012; Teo et al., 2009; Wong, 2015). For example, using structural equation modeling, Teo et al. (2009) predicted technology acceptance among pre-service teachers in Singapore. Their findings revealed that perceived usefulness was the most vital determinant of behavioral intention. Similarly, research conducted in Hong Kong with pre-service mathematics teachers revealed that perceived usefulness was more influential than perceived ease of use in the TAM (Wong, 2015). One explanation may be that students would abandon a particular technology without direct personal benefit (Kennedy, 2002). Therefore, more emphasis should be placed on the usefulness of technology to encourage pre-service teachers to use it in their future teaching. Nevertheless, studies on whether pre-service teachers’ anxiety affects technology acceptance have been contradictory. For instance, a study on the acceptance of mobile learning, in which 316 pre-service teachers completed questionnaires, revealed that overall anxiety did not affect perceived ease of use (Islamoglu et al., 2021); moreover, no gender-specific effect of anxiety on perceived ease of use was found. Another study, on the acceptance of gamification tools, indicated that computer anxiety did not affect perceived enjoyment (Turan et al., 2022). However, research on using humanoid social robots in the classroom has revealed that pre-service teachers’ anxiety about the technology affects their potential acceptance (Istenic et al., 2021). As AI-based tools in teacher education are another form of technology, it is essential to know more about the effects of anxiety, but there is currently a lack of research.
Moderating variables are a critical research theme in studying technology acceptance among pre-service teachers. According to Sun and Zhang (2006), three categories of factors moderate the acceptance of technology: (1) organizational factors, (2) technological factors, and (3) individual factors. The organizational factor refers, for example, to whether participants volunteered to participate in the study or were required to use a particular technology to accomplish a task. The technological factor revolves around the technology itself, such as the type of application, the complexity of the technology, the purpose of using it, and so on. Individual factors include, for example, the participant’s age, gender, cultural background, and experience with the technology. Much research has shown that these factors moderate relationships in the TAM (Scherer & Teo, 2019). For teacher education, individual factors, especially gender, are of interest and relevance. Gender differences have been of significant concern in technology acceptance, yet some related TAM research has been contradictory. Despite the traditional perception that gender impacts technology adoption, some studies found no gender difference in pre-service teachers’ acceptance of technology (Papadakis, 2018; Teo, 2010; Teo et al., 2015).
Challenges of integrating AI technologies in teacher education
There has been an increase in the usage of big data and AI-powered products in education during recent years. AI is embedded in many educational technology tools to provide learning analytics, recommendations, and diagnostics in various ways and for various purposes. There is no denying that teachers are the backbone of the classroom and the driving force behind the next stage of AI growth in teaching. The challenges related to AI application in education may be classified into three categories: technology, teachers and students, and social ethics (Zhai et al., 2021). As mentioned, preparing teachers for AI-powered education is a significant challenge in integrating AI into the future classroom. Given the inevitable prevalence of AI in education, (pre-service) teachers’ perspectives on using AI in education are highly significant.
Even though some AI technologies have been introduced in K-12 and higher education, there is a lack of research on teachers’ attitudes towards them. However, many studies have been conducted on technology acceptance in the last 20 years. Many found that technology had not been entirely accepted because many teachers still had negative attitudes towards it and were reluctant to use it (Istenic et al., 2021; Kaban & Boy Ergul, 2020). The reasons preventing their acceptance include teachers’ anxiety about interacting with new technology (Zimmerman, 2006), their comfort zone, and their preference for using the same materials and didactics (Tallvid, 2016). These reasons may also hinder (pre-service) teachers from using AI technologies. Therefore, reducing pre-service teachers’ anxiety and establishing trust in AI is one of the challenges in teacher education. Furthermore, the media shapes teachers’ perception that AI will replace human positions, while they have little specific knowledge of how AI can contribute to teaching and learning (Luckin et al., 2016). This implies, in part, that pre-service teachers lack relevant knowledge and skills about AI. Therefore, another challenge is that teacher training programs should cultivate new competencies in the context of AI for both in-service and pre-service teachers. According to Luckin et al. (2016), future teachers need the following skills to face the development of AI in education: a clear understanding of how AI systems facilitate learning, research and data analytical skills, and new teamwork and management skills.
Research questions and hypotheses
Numerous TAM-based studies have measured pre-service teachers’ acceptance of educational technology. However, few studies have attempted to use the model to measure pre-service teachers’ acceptance of AI in education. The proposed research model of our study is based on the TAM3 (Venkatesh & Bala, 2008). It includes the factors AI Self-Efficacy (AISE), Perceived Enjoyment (PE), AI Anxiety (AIA), Perceived Ease of Use (PEOU), Perceived Usefulness (PU), Job Relevance (JR), Subjective Norm (SN), and Behavioral Intention (BI). Our study has two main research questions: (1) which factors determine pre-service teachers’ AI acceptance? and (2) what role does gender play in AI acceptance?
- RQ1: To what extent does pre-service teachers’ AI acceptance support the hypothesized relationships in the proposed research model?
According to the literature on technology acceptance, the PEOU of AI systems is influenced by three constructs: self-efficacy, perceived enjoyment, and anxiety. PEOU refers to the individual’s perception of how easy it is to use the new technology (Venkatesh & Davis, 2000). Previous research shows that self-efficacy (Alharbi & Drew, 2018) and perceived enjoyment (Teo & Noyes, 2011) are positively associated with PEOU. Anxiety, however, has a negative effect on PEOU in some studies (Istenic et al., 2021) and no effect in others (Islamoglu et al., 2021). Based on the previous studies, the following three hypotheses were tested:
- Hypothesis 1. AI self-efficacy is positively associated with perceived ease of use.
- Hypothesis 2. Perceived enjoyment is positively associated with perceived ease of use.
- Hypothesis 3. AI anxiety is negatively associated with perceived ease of use.
PU, the degree to which a person believes that using a technology will enhance their performance, is another key driver of acceptance (Lee et al., 2005). According to previous research, perceived ease of use (Teo et al., 2015), self-efficacy (Wong, 2015), and job relevance (Siyam, 2019) positively influence perceived usefulness. Therefore, the following hypotheses were tested:
- Hypothesis 4. Perceived ease of use is positively associated with perceived usefulness.
- Hypothesis 5. Job relevance is positively associated with perceived usefulness.
- Hypothesis 6. Subjective norm is positively associated with perceived usefulness.
According to Scherer and Teo’s (2019) meta-analysis, PU and PEOU are two significant predictors of behavioral intention, and more than 80% of previous studies indicate that PU has the stronger impact on BI. In addition, related research has revealed that subjective norm has a substantial total effect on pre-service teachers’ BI (Ursavaş et al., 2019). This is reflected in the following hypotheses:
- Hypothesis 7. Subjective norm is positively associated with behavioral intention.
- Hypothesis 8. Perceived usefulness is positively associated with behavioral intention.
- Hypothesis 9. Perceived ease of use is positively associated with behavioral intention.
The second main research question concerns the moderating effect of gender, as there is a lack of research on gender differences in AI acceptance among pre-service teachers. First, RQ 2.1 asks whether the instrument’s items are measurement invariant across gender, an essential prerequisite for the following two research questions. Second, RQ 2.2 considers gender differences in the latent means of the constructs. Third, RQ 2.3 concerns the potential moderating effect of gender on the relationships among the variables in the study model.
- RQ 2.1: To what extent do pre-service teachers respond differently concerning gender when measuring their acceptance of AI?
- RQ 2.2: What are the gender differences in the latent mean of each construct?
- RQ 2.3: Do gender differences have a moderating effect on the relationships of variables in the proposed research model?
Methods
Procedure
The study was administered in the winter semester of 2021/2022 through an online questionnaire at one German university. The participating pre-service teachers were all enrolled in teacher education programs and were invited to respond to the survey via Unipark Questback EFS (https://ww3.unipark.de/) in a school education course. The University Data Protection Office approved the research protocol, and all data were collected anonymously. Out of 712 potential respondents, complete data sets were received from 452 participants (63.48% of the responses).
Participants
Participants in this study were pre-service teachers from different teacher education programs, namely primary school education (n = 260), lower school education (n = 44), secondary school education (n = 63), and grammar school education (n = 85) at one German university. Among the participants, 71.90% were female students, and the mean age of all students was 21.31 years (SD = 3.89). The majority were from the first semester (M = 1.83, SD = 1.55). Table 1 shows the student profiles.
Instrument
The survey was presented in German and consisted of two sections. The first section captured demographic information, i.e., gender, age, major, semester, etc. The second section contained a standardized instrument with PU (four items, α = 0.88), PEOU (four items, α = 0.76), AISE (four items, α = 0.87), AIA (three items, α = 0.91), PE (three items, α = 0.86), SN (two items, α = 0.90), JR (three items, α = 0.90), and BI (two items, α = 0.66), rated on a five-point Likert scale from 1 (strongly disagree) to 5 (strongly agree). The items measured pre-service teachers’ AI acceptance and were based on the Technology Acceptance Model 3 (Venkatesh & Bala, 2008) and an adapted German version (Stephan, 2021).
Data analysis
Data analysis was performed using the open-source statistical software R, version 4.1.2 (R Core Team, 2021). Because the amount of missing data was small, questionnaires with missing values were excluded. Descriptive statistical analyses were based on demographic variables, e.g., participants’ age, gender, semester, and so on. To compare differences between male and female pre-service teachers across variables, t-tests were conducted.
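As an illustration of this step, the following minimal R sketch shows how such gender comparisons could be run; the data frame `dat`, its `gender` column, and the composite score names are hypothetical placeholders rather than the study’s actual variable names.

```r
# Minimal sketch (hypothetical variable names): compare male and female
# pre-service teachers on each composite score with Welch t-tests.
composites <- c("PU", "PEOU", "AISE", "AIA", "PE", "SN", "JR", "BI")

t_results <- lapply(composites, function(v) {
  t.test(dat[[v]] ~ dat$gender)   # Welch's t-test (unequal variances) by default
})
names(t_results) <- composites
t_results$AIA   # e.g., inspect the AI anxiety comparison
```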
To examine validity and reliability and to test the overall research model, SEM analysis was performed using the sem function of the lavaan package (Rosseel, 2012). The research model was estimated using maximum likelihood. We first analyzed the measurement model to demonstrate its internal consistency, reliability, convergent validity, and discriminant validity. Next, the structural model was estimated to test the hypotheses among the constructs. Model fit was evaluated using the chi-square test, the Comparative Fit Index (CFI), the Tucker-Lewis Index (TLI), the Root Mean Square Error of Approximation (RMSEA), and the Standardized Root Mean Square Residual (SRMR). X2/df was used since X2 is highly sensitive to sample size; according to Carmines and McIver (1981), values not greater than 3.00 are acceptable. According to Hair et al. (2019), values of CFI and TLI above 0.90 indicate a good fit, and RMSEA values of 0.08 or less indicate an excellent fit. Meanwhile, SRMR values of 0.08 or below reflect a good fit.
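A minimal lavaan sketch of this workflow is shown below. The item names (e.g., PU1–PU4, BI1–BI2) and the data frame `dat` are hypothetical placeholders that follow the item counts reported above; the syntax actually used in the study may differ.

```r
library(lavaan)

# Measurement model (hypothetical item names matching the reported item counts)
cfa_model <- '
  PU   =~ PU1 + PU2 + PU3 + PU4
  PEOU =~ PEOU1 + PEOU2 + PEOU3 + PEOU4
  AISE =~ AISE1 + AISE2 + AISE3 + AISE4
  AIA  =~ AIA1 + AIA2 + AIA3
  PE   =~ PE1 + PE2 + PE3
  SN   =~ SN1 + SN2
  JR   =~ JR1 + JR2 + JR3
  BI   =~ BI1 + BI2
'
cfa_fit <- cfa(cfa_model, data = dat, estimator = "ML")

# Structural model: the nine hypothesized regressions (H1-H9)
sem_model <- paste(cfa_model, '
  PEOU ~ AISE + PE + AIA
  PU   ~ PEOU + JR + SN
  BI   ~ SN + PU + PEOU
')
sem_fit <- sem(sem_model, data = dat, estimator = "ML")

# Fit indices of the kind reported in the paper
fitMeasures(sem_fit, c("chisq", "df", "cfi", "tli", "rmsea", "srmr"))
```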
Next, measurement invariance testing was carried out in R via the semTools package (Jorgensen et al., 2021). The imbalance between the male and female groups necessitated the subsampling method proposed by Yoon and Lai (2018); therefore, the configural, metric, scalar, and strict models were all estimated on subsamples. Fit statistics such as chi-square, CFI, RMSEA, and SRMR were calculated for the subsamples. Since X2 difference tests are too sensitive to sample size, this study used ΔCFI to evaluate two nested models. A ΔCFI value higher than 0.01 indicates a significant drop in fit between the two models (Cheung & Rensvold, 2002).
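Continuing the hypothetical sketch above, the nested invariance models could be specified as follows. For brevity, the sketch draws a single balanced subsample rather than the 125 subsamples used in the study, and it uses lavaan’s group.equal argument directly (the study relied on semTools); `cfa_model`, `dat`, and the "female"/"male" coding are the assumed names introduced earlier.

```r
set.seed(1)  # illustration only: one balanced subsample instead of 125
n_male     <- sum(dat$gender == "male")
female_sub <- dat[sample(which(dat$gender == "female"), n_male), ]
dat_bal    <- rbind(female_sub, dat[dat$gender == "male", ])

# Increasingly restrictive invariance models across gender
fit_config <- cfa(cfa_model, data = dat_bal, group = "gender")
fit_metric <- cfa(cfa_model, data = dat_bal, group = "gender",
                  group.equal = "loadings")
fit_scalar <- cfa(cfa_model, data = dat_bal, group = "gender",
                  group.equal = c("loadings", "intercepts"))
fit_strict <- cfa(cfa_model, data = dat_bal, group = "gender",
                  group.equal = c("loadings", "intercepts", "residuals"))

# Compare the nested models via delta CFI (> .01 signals a meaningful drop in fit)
sapply(list(configural = fit_config, metric = fit_metric,
            scalar = fit_scalar, strict = fit_strict),
       fitMeasures, fit.measures = c("cfi", "rmsea", "srmr"))

# Partial strict invariance: free the residual variance of one item (here AIA3)
fit_partial <- cfa(cfa_model, data = dat_bal, group = "gender",
                   group.equal = c("loadings", "intercepts", "residuals"),
                   group.partial = "AIA3 ~~ AIA3")
```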
Gender differences in variables are typically investigated using t-tests or ANOVA on composite scores. Latent mean analysis is an alternative that compares groups on the latent factors underlying the constructs (Vandenberg & Lance, 2000). To estimate latent mean differences, one group serves as the reference group and its latent means are fixed to zero; this study used the female pre-service teachers as the reference group (coded as 0). It is important to note that assessing latent mean differences requires scalar invariance (Vandenberg & Lance, 2000). The present analysis was based on the partial strict invariance model.
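A sketch of this step, reusing the hypothetical `fit_partial` model from above, is given below. In lavaan, constraining the intercepts fixes the latent means of the first (reference) group to zero and frees them in the other group, so the freed estimates and their z-values express the group differences; which group serves as the reference depends on the ordering in the data.

```r
# Latent means under the partial strict invariance model fitted above;
# the reference group's latent means are fixed to zero by default.
pe <- parameterEstimates(fit_partial)
latents <- c("PU", "PEOU", "AISE", "AIA", "PE", "SN", "JR", "BI")

# Freed latent means of the second group express the between-group differences
subset(pe, op == "~1" & lhs %in% latents & group == 2,
       select = c(lhs, est, z, pvalue))
```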
Finally, the potential moderating effect of gender on the hypothesized model was examined using multigroup SEM. First, the unconstrained model was specified, in which all parameters were freely estimated. Second, the fully constrained model was specified, in which all regression path coefficients were constrained to be equal across groups. Third, a chi-square difference test was performed between the two models. Finally, to test differences in the regression coefficients of the individual paths between the two groups, a multigroup analysis was applied to all nine paths.
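A sketch of this moderation test, again using the hypothetical names introduced above, might look as follows; lavTestLRT() provides the chi-square difference tests.

```r
# Unconstrained vs. fully constrained multigroup structural models
fit_free  <- sem(sem_model, data = dat_bal, group = "gender")
fit_equal <- sem(sem_model, data = dat_bal, group = "gender",
                 group.equal = "regressions")

# Omnibus chi-square difference test across all nine paths
lavTestLRT(fit_free, fit_equal)

# Path-by-path test: free a single path (here AIA -> PEOU) while keeping the
# remaining regressions equal, and compare against the fully constrained model
fit_one <- sem(sem_model, data = dat_bal, group = "gender",
               group.equal = "regressions",
               group.partial = "PEOU ~ AIA")
lavTestLRT(fit_one, fit_equal)
```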
Results
Descriptive statistics
Table 2 shows the means and standard deviations of the eight composite variables for the entire sample and for the female and male subsamples. Nearly all mean scores for the entire sample were higher than the mid-point (ranging from a low of 2.18 to a high of 3.51); only AIA and SN were below 2.50, at 2.18 and 2.39, respectively. Similarly, the standard deviations of all variables were less than 1.00, with SN (1.02) as the only exception. The male sample showed higher means on PU, PEOU, AISE, PE, and BI than the female sample. Furthermore, the differences between the female and male samples were statistically significant for AIA (t (243) = − 3.64, p < 0.001) and PE (t (278) = 3.41, p < 0.001).
Result of the measurement model
The goodness-of-fit indices for the confirmatory factor analysis (CFA) of the full measurement model were acceptable: X2 = 515.368, df = 247, X2/df = 2.09, CFI = 0.961, TLI = 0.952, RMSEA = 0.049 [90% CI: 0.043–0.055], and SRMR = 0.045. For convergent validity, Cronbach’s alpha, composite reliability (CR), and average variance extracted (AVE) are reported in Table 3. According to Hair et al. (2019), factor loadings above 0.50 are recommended; the factor loadings for all items were between 0.605 and 0.936, meeting this threshold. Further, Cronbach’s alpha and CR were calculated to measure the reliability of the latent variables. Following Nunnally and Bernstein (1994), a CR value above 0.70 is considered adequate. Finally, AVE is used to assess convergent validity, and as a rule of thumb, a value greater than 0.50 is considered acceptable (Teo & Noyes, 2014). As Table 3 illustrates, the CR and AVE values of almost all constructs reached these thresholds; the exceptions were the AVE for PEOU (0.45) and the CR for BI (0.66), which did not meet the relevant minimum criteria.
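For illustration, CR and AVE can be computed from the standardized loadings of the hypothetical `cfa_fit` object sketched earlier; the formulas below follow the usual composite reliability and AVE definitions (semTools also offers packaged reliability functions).

```r
# Composite reliability and average variance extracted per factor,
# computed from the standardized loading matrix of the CFA model
lambda <- lavInspect(cfa_fit, "std")$lambda

cr_ave <- apply(lambda, 2, function(l) {
  l   <- l[l != 0]                              # items loading on this factor
  cr  <- sum(l)^2 / (sum(l)^2 + sum(1 - l^2))   # composite reliability
  ave <- mean(l^2)                              # average variance extracted
  c(CR = cr, AVE = ave)
})
round(t(cr_ave), 2)   # compare with thresholds of .70 (CR) and .50 (AVE)
```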
The correlations between the constructs and the square roots of the AVEs are shown in Table 4 to verify the discriminant validity of the latent variables. According to Fornell et al. (1982), the diagonal elements of the matrix should be larger than the off-diagonal elements in the corresponding rows and columns. The data revealed that the square root of the AVE of each construct was higher than its correlations with the other variables.
Result of the structural model
The hypothesized relationships of the proposed research model were tested using structural equation modeling (see Fig. 1, Research model). Firstly, the goodness-of-fit indices for the SEM indicated an acceptable fit: X2 = 641.127, df = 256, X2/df = 2.50, CFI = 0.943, TLI = 0.934, RMSEA = 0.058 [90% CI: 0.052–0.063], and SRMR = 0.058. Next, Table 5 reports the standardized path coefficients for the proposed research model. Eight out of the nine hypotheses tested were confirmed. The eight supported hypotheses have path coefficients ranging from 0.171 to 0.518; the smallest path coefficient was from SN to BI (z-value = 3.172), and the largest was from PE to PEOU (z-value = 8.554). Furthermore, the path from AIA to PEOU was not statistically significant (path coefficient = − 0.037, z-value = − 0.731); therefore, H3 was rejected. Finally, the R-squared values indicated that SN, PU, and PEOU explained 65% of the variance in BI; SN, JR, and PEOU explained 50% of the variance in PU; and AISE and PE explained 60% of the variance in PEOU.
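For reference, standardized coefficients and R-squared values of this kind can be extracted from the hypothetical `sem_fit` object of the earlier sketch as follows.

```r
# Standardized path coefficients (regressions only) with z-values and p-values
std <- standardizedSolution(sem_fit)
subset(std, op == "~", select = c(lhs, rhs, est.std, z, pvalue))

# Proportion of explained variance for the endogenous constructs
lavInspect(sem_fit, "rsquare")
```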
Result of measurement invariance and latent mean differences
Multigroup invariance analyses were performed in R. All estimations used maximum likelihood and were based on the covariance matrix. As Brown (2006) recommended, before performing the invariance tests, the initial measurement model was split into two datasets, in this study one for male students and one for female students. Because of the uneven sizes of the male and female samples, subsampling was employed (Yoon & Lai, 2018): R was used to automatically draw 125 subsamples from the female cohort, yielding 125 equally sized samples of male and female students. Several hierarchically nested models were estimated for measurement invariance: configural, metric, scalar, and strict invariance. The results of all models are reported in Table 6. The first model tested was the configural model, known as the baseline model. The results of this least restrictive model indicate that the same items measured the same constructs for both male and female students. Next, metric invariance was tested by constraining the factor loadings to be equal. As reported, metric invariance was confirmed between the male and female samples, which means the factor loadings are equivalent across the two groups.
For the scalar model, the factor loadings and intercepts were constrained to be equal across the two groups; the comparison between the metric and scalar models was acceptable. To test the strict model, the factor loadings, intercepts, and residual variances were constrained to be equal across the female and male samples. The X2 difference test comparing M4 (strict invariance) with M3 (scalar invariance) was, however, unacceptable (ΔCFI = 0.011). Therefore, Byrne’s (2016) strategy was used to identify the noninvariant parameter. Finally, the partial strict invariance model (with the residual of AIA3 freed) and M3 were compared using the X2 difference test, indicating no statistically significant difference between them.
Based on partial strict invariance across the gender groups, latent mean comparisons could be made between them. In this study, the female pre-service teachers served as the reference group, and their factor means were constrained to zero. Table 7 shows the results of the latent mean differences. Of the eight constructs, the gender differences for AIA (z = − 3.217, p < 0.01) and PE (z = 2.556, p < 0.05) were statistically significant. More specifically, the female group demonstrated higher mean values in AIA, SN, and JR.
Results of moderating effects in the structural equation model
To explore the moderating effect of gender, we employed multigroup SEM analysis (male group: 123; female group: 123). The chi-square difference between the unconstrained and fully constrained models is shown in Table 8, indicating that the two groups differed (ΔX2 = 21.625; Δdf = 9; p = 0.01).
Furthermore, the multigroup SEM analysis determined which relationships differed significantly between male and female students. Based on Table 9, hypotheses H10c and H10d were supported, and the other hypotheses were rejected. Thus, the paths from AIA to PEOU and from PEOU to PU were moderated by gender, while the other paths were not. AIA had a statistically significant effect on PEOU in the female group (standardized estimate = − 0.119*) but not in the male group (standardized estimate = 0.254). Moreover, the effect of PEOU on PU was stronger among women (standardized estimate = 0.530***) than among men (standardized estimate = 0.283**).
Discussion
This study had two aims: first, to examine which determinants contribute to pre-service teachers’ AI acceptance; second, to understand whether gender differences significantly affect AI acceptance. To accomplish these objectives, this study validated the relevant influential factors based on TAM3 using SEM with nine hypotheses. In addition, after establishing measurement invariance, latent mean differences and a multigroup analysis were examined to test the moderation effect of gender in the study model. This section discusses the results and their potential implications, limitations, and future research directions.
Pre-service teachers’ AI acceptance
This study investigated the factors influencing pre-service teachers’ acceptance of AI. Only one of the nine hypotheses in the proposed research model (AIA → PEOU) was rejected. We have shown that AISE and PE influence the PEOU of pre-service teachers. Further, SN, JR, and PEOU affected PU. Next, SN, PU, and PEOU significantly affected the resulting BI. Finally, AIA was the only variable shown to have no significant effect, and thus no indirect effect on pre-service teachers’ behavioral intention towards AI technology.
The current study has confirmed that PEOU and PU are highly significant factors that influence the acceptance of AI among pre-service teachers, consistent with previous research (Granić & Marangunić, 2019; Scherer & Teo, 2019). PEOU reflects individuals’ beliefs about the ease and convenience of using a particular technology, which is a crucial determinant of technology acceptance. The TAM3 and the Diffusion of Innovation Theory (DOI) (Rogers et al., 2014) both support the relationship between PEOU and final AI acceptance. PEOU reduces the perceived complexity of new technology, which can accelerate its adoption rate. Furthermore, PEOU enhances PU, which refers to individuals' beliefs that the technology will improve their performance and help them achieve their goals. In other words, pre-service teachers are more likely to adopt AI-based educational technology to do their work if it is easy to use, which consequently increases the perceived usefulness of the technology.
Furthermore, Venkatesh and Davis (2000) discovered that PU has a stronger influence on user intention than PEOU. Our study supports this finding in the context of AI. Students are primarily concerned with the potential of technology to improve learning outcomes and instructional effectiveness and achieve educational goals. If an AI product is perceived as highly useful and has a tangible impact on educational outcomes, it is more likely to be adopted, despite requiring some effort to learn how to use it. For example, Hu (2021) investigated students’ perceptions of the learning analytics dashboard in an AI-supported intelligent learning environment and found that PU had a more significant impact on students’ intention to use the platform than PEOU. However, it is essential to note that PEOU continues to be a crucial factor in accepting AI technologies, and these two factors frequently interact in complex ways.
In addition, our study found that AI anxiety does not indirectly affect pre-service teachers’ behavioral intentions towards AI. This is consistent with the results of Ayanwale et al. (2022) in a sample of in-service teachers. AI anxiety here refers to the fear and trepidation expressed by pre-service teachers about out-of-control AI (Johnson & Verdicchio, 2017). Given the previously mentioned TRA, AIA can be considered a belief that serves as a precursor to behavioral intention. In TAM research, computer anxiety can be essential to adopting technology-enabled tools (Abu-Al-Aish & Love, 2013), and previous studies have indicated that anxiety is negatively related to perceived ease of use (Liaw & Huang, 2015; Nikou & Economides, 2017). The difference between computer anxiety and AI anxiety might explain this contradiction (Li & Huang, 2020). Unlike computers, (1) AI can make autonomous decisions (Beer et al., 2014); (2) AI takes a variety of virtual forms (Castelo et al., 2019); and (3) AI can provide personalized services, such as chatbots. These differences appear to lead to variations between AI anxiety and computer anxiety. Li and Huang (2020) identified eight types of anxiety that may contribute to AI anxiety: privacy violation anxiety, bias behavior anxiety, job replacement anxiety, learning anxiety, existential risk anxiety, ethics violation anxiety, artificial consciousness anxiety, and lack of transparency anxiety. One potential reason pre-service teachers prioritize PU and PEOU over AI anxiety is their lack of awareness or background knowledge to understand the complexity of AI technology. In addition, as student teachers, they may not have had enough exposure to AI-based educational products, and their familiarity with AI may be limited. In such cases, they are more likely to focus on the tangible benefits that AI technology can offer in education. Furthermore, they may emphasize the ease of use of these products because they may lack the technical knowledge or experience to navigate complex AI systems. Overall, for pre-service teachers, factors such as PEOU and PU seem to overshadow AI anxiety when confronted with AI-based products. Although AI anxiety is a genuine concern in developing and deploying AI products, pre-service teachers may prioritize other factors due to their limited exposure to and awareness of AI technology. What is more, as they gain more experience and understanding of AI in education, their perceptions and attitudes towards AI may evolve. Therefore, teacher education programs must provide pre-service teachers with sufficient exposure to and training on the use and potential benefits of AI-based educational products. This can help them develop a deeper understanding of the technology and enable them to make informed decisions about its adoption and use in the classroom.
Besides, the study produced several further noteworthy findings. Support was found for hypothesis 1, which predicted that AI self-efficacy is positively associated with PEOU, and for hypothesis 2, with a significant path from perceived enjoyment to PEOU. For PEOU, the two external variables AI self-efficacy and perceived enjoyment thus play a determining role, consistent with previous technology acceptance studies (Al Kurdi et al., 2020; Holden & Rada, 2011). The PEOU of student teachers can be influenced by their AI self-efficacy and perceived enjoyment, which can impact their confidence and motivation to utilize AI-based educational techniques. Student teachers with a high level of self-efficacy in their ability to use AI technology tend to possess a greater sense of confidence and competence, thereby reducing their perception of the challenges posed by such technology. Meanwhile, student teachers who perceive AI-based educational applications as enjoyable tend to be more motivated to learn about and use them, increasing their likelihood of adoption and their perception of ease of use. Furthermore, the data supported hypotheses 5 and 6, which indicated positive associations of job relevance and subjective norm with PU. These results are consistent with previous technology acceptance research (Schepers & Wetzels, 2007; Mazman Akar et al., 2019; Zarafshani et al., 2020). Subjective norms play a crucial role in shaping the attitudes and behaviors of pre-service teachers towards AI. Pre-service teachers in teacher education programs are likely influenced by their friends, instructors, and supervisors. When pre-service teachers perceive that their social referents hold positive attitudes toward AI, they are likelier to internalize these attitudes and adopt similar behaviors. This phenomenon can be explained by social cognitive theory (Bandura, 2002), which suggests that individuals’ perceptions of social influence can impact their self-efficacy, perceived expectations, and resistance to change. Thus, understanding the role of subjective norms in shaping pre-service teachers’ attitudes towards AI is crucial for promoting the successful adoption and integration of AI in education. Furthermore, job relevance is a significant factor that affects pre-service teachers’ acceptance and adoption of AI. Pre-service teachers are more likely to use AI if they perceive it as contributing to the quality of their instruction or their learning. This perception can be influenced by AI features such as automated feedback and personalized learning materials, ultimately increasing pre-service teacher satisfaction.
Moderation differences in the acceptance of AI by gender
Measurement invariance is a prerequisite for all the following examinations, and the results indicated that the instrument was consistent across the gender groups. The progressively more restrictive invariance tests indicated that the factor structure was stable across the male and female cohorts (configural invariance). The findings also supported the equivalence of the factor loadings (metric invariance) and of the item intercepts (scalar invariance) across gender. Finally, the residual variances of the items were partially supported (partial strict invariance). More specifically, gender group consistency was supported for all indicators of the TAM3 constructs. Our results are consistent with Teo et al. (2015), who also studied measurement invariance of pre-service teachers’ technology acceptance across gender.
Regarding latent mean differences, our study found significant differences in AI anxiety (AIA) and perceived enjoyment (PE) between male and female pre-service teachers. These findings align with the results of our prior t-tests. Specifically, female pre-service teachers scored higher than male pre-service teachers on AI anxiety, indicating that they may be more apprehensive about utilizing AI-based applications in their teaching and learning processes. Our study also found that gender moderated the pathway AIA → PEOU (H10c). However, to our knowledge, no literature currently examines gender differences in AI anxiety in the context of teacher education. Our study suggests that gender may play a complex role in determining an individual’s experience of AI anxiety, and both cognitive and non-cognitive factors can explain this phenomenon. Considering cognitive factors, one possible explanation for this gender gap is that female students may have less previous experience of and exposure to STEM fields than males (González-Pérez et al., 2020). This lack of exposure may lead to a lack of confidence in their ability to use and understand AI-based applications, which may lead to anxiety and fear. Another possible cognitive explanation is that female students may be more risk-averse and cautious in their decisions than male students. This is a well-documented phenomenon, with studies showing that female students tend to be more risk-averse than male students in various fields, such as financial investment (Charness & Gneezy, 2012). This risk aversion may lead to greater anxiety and apprehension when using new and unfamiliar technologies, such as AI-based applications. Another important non-cognitive factor that may contribute to the gender gap in AI anxiety in teacher education is socialization and gender stereotyping. Research has consistently demonstrated that gender stereotypes can significantly influence how female and male students perceive their abilities and interests in different fields (Eaton et al., 2020; Luo et al., 2021; Moè & Hirnstein, 2021), which is also supported by social role theory (Eagly & Wood, 2012). This socialization may lead female pre-service teachers to be more reflective and to make more careful decisions about the potential risks and uncertainties of new technologies such as AI. For example, Dai et al. (2020) showed that male students in STEM education were more confident than female students in their readiness for AI. This may be related to the traditional cultural belief that males are better suited for AI than females. Moreover, these stereotypes can play a significant role in creating anxiety and self-doubt among female students, and it is precisely this stereotype that affects the self-efficacy of female pre-service teachers, making them believe they are under-prepared to confront AI in education. These findings provide insights for educators and technology developers to better understand the potential barriers female pre-service teachers may face in adopting and utilizing AI-based technologies and the need to address these barriers through targeted interventions and support mechanisms.
Implications, limitations, and future direction
The findings of this study have several implications for practice. First, educational institutions and developers should prioritize usefulness and ease of use when selecting or developing AI-based applications for pre-service teachers. Regarding usefulness, AI provides various functionalities by training on students’ learning data, so collecting as much data as possible is inevitable; however, ethics and equity in the use of these data require researchers’ attention. A user-friendly interface and gamification are two essential aspects of ease of use. An AI tool with an intuitive interface helps minimize the time and effort teachers and students need to learn and manage the product, allowing them to focus more on teaching and learning. At the same time, by incorporating game-like features such as rewards, badges, and leaderboards, these products can help increase student motivation and engagement. Alternatively, chatbots can be created to support more personalized student interactions; such tools can provide personalized support and guidance to students while reducing teacher workload. In addition, given the gender differences found, teacher educators and educational institutions should encourage and support female pre-service teachers and reduce their fear of AI, for instance through mentoring programs that help female pre-service teachers learn more about the fundamentals of AI.
There are some limitations to this study that have to be addressed. Firstly, the instrument was adapted from TAM3 because there is no suitable scale for measuring the AI acceptance of pre-service teachers in German. Despite being pre-tested for good validity and reliability, some scale constructs, such as AI anxiety, may not be measured accurately (Wang & Wang, 2022). Secondly, we did not employ a specific AI-based educational tool in this study; the survey was based only on the students’ previous experience of utilizing AI applications. Finally, all the participants were from one university, and most of them were first-year students, so the participants’ background may bias the results of the study.
For future work, an important direction involves the creation of new AI acceptance scales. Notably, within the AI domain, additional factors gain prominence, including trust in AI systems, interpretability, transparency, and other relevant factors. Furthermore, it is important to consider additional moderating variables in future research. Currently, the focus has primarily been on gender as a moderating factor; from a technical standpoint, however, it is crucial to investigate how different functionalities assigned to AI systems, such as feedback provision, chatbots, or recommendation systems, may moderate AI acceptance. Lastly, an important avenue for future research involves conducting pre- and post-test comparison studies utilizing AI-based applications within the pre-service teacher population. This approach enables a more comprehensive exploration of the factors that exert a substantial influence on pre-service teachers’ acceptance of AI.
Conclusion
As the use of AI in education continues to increase, it is crucial that pre-service teachers, who will be at the forefront of education in the future, acquire knowledge of this technology. Our study contributes to understanding pre-service teachers’ acceptance of AI technology in education by utilizing the Technology Acceptance Model 3 (TAM3). Specifically, we aimed to identify the factors determining pre-service teachers’ acceptance of AI technology and to examine potential gender differences in the research model. Our findings suggest that perceived usefulness and perceived ease of use were the most significant factors affecting pre-service teachers’ intentions to use AI technology, with perceived usefulness having a more substantial impact than perceived ease of use. Previous studies on technology acceptance have also shown that perceived usefulness has a more significant effect (Koutromanos et al., 2015; Teo, 2012; Teo et al., 2009; Wong, 2015). It is important to note, however, that our study did not involve the direct use of AI-based educational tools; for pre-service teachers’ AI acceptance, the anticipated usefulness of AI technology therefore appears to matter more than its anticipated ease of use. It is essential to carefully consider students’ needs and preferences when developing AI-based educational tools to ensure they are perceived as valuable, and collecting feedback from students is also essential in this regard.
Our study also found that gender moderates pre-service teachers’ potential acceptance of AI technology. Female pre-service teachers are more likely to experience anxiety about AI-based educational tools than male pre-service teachers, leading to differences in their perceived ease of use and usefulness. This is also a long-standing problem in STEM education (Lunardon et al., 2022; Pelch, 2018). Additionally, we found that female pre-service teachers are more likely than male pre-service teachers to be externally influenced in their intention to use AI technology in education. The reasons behind this phenomenon are both cognitive and non-cognitive, and especially on the non-cognitive side there is still relatively limited exploration. Addressing female pre-service teachers’ anxiety about AI technology in education requires a multi-faceted approach. On the one hand, universities should provide education and training on AI for female pre-service teachers to help them better understand how AI works. On the other hand, educators should raise the visibility of female role models in AI to create a more inclusive and supportive learning environment.
The exponential growth of AI technology in recent years has made its application and development in education an inevitable, long-term trend. However, implementing AI in education comes with challenges and risks. One such challenge is the weak link between theoretical and pedagogical perspectives and the practical application of AI in the classroom; additionally, ethical and educational approaches to AI are still in the exploratory stage (Zawacki-Richter et al., 2019). Pre-service teachers are the future of education, and they play a critical role in shaping the adoption of AI technologies in the classroom. Their acceptance of AI will greatly influence its future development and application in education. Given the challenges posed by AI in education, pre-service teachers are well-positioned to address these issues and explore effective ways to integrate these tools in the classroom.
Availability of data and materials
The data supporting the findings of this study are available from the corresponding author upon request.
References
Abu-Al-Aish, A., & Love, S. (2013). Factors influencing students’ acceptance of m-learning: An investigation in higher education. The International Review of Research in Open and Distributed Learning. https://doi.org/10.19173/irrodl.v14i5.1631
Ajzen, I. (1985). From intentions to actions: A theory of planned behavior. In Action control (pp. 11–39). Springer. https://doi.org/10.1007/978-3-642-69746-3_24
Ajzen, I., & Fishbein, M. (1980). Understanding attitudes and predicting social behavior. Prentice-Hall.
Al Kurdi, B., Alshurideh, M., & Salloum, S. A. (2020). Investigating a theoretical framework for e-learning technology acceptance. International Journal of Electrical and Computer Engineering IJECE, 10(6), 6484–6496. https://doi.org/10.11591/ijece.v10i6.pp6484-6496
Alharbi, S., & Drew, S. (2018). The role of self-efficacy in technology acceptance. In Proceedings of the Future Technologies Conference (pp. 1142–1150). Springer. https://doi.org/10.1007/978-3-030-02686-8_85
Ayanwale, M. A., Sanusi, I. T., Adelana, O. P., Aruleba, K. D., & Oyelere, S. S. (2022). Teachers’ readiness and intention to teach artificial intelligence in schools. Computers and Education Artificial Intelligence, 3, 100099. https://doi.org/10.1016/j.caeai.2022.100099
Bandura, A. (2002). Social cognitive theory in cultural context. Applied Psychology, 51(2), 269–290. https://doi.org/10.1111/1464-0597.00092
Baydas, O., & Goktas, Y. (2017). A model for pre-service teachers’ intentions to use ICT in future lessons. Interactive Learning Environments, 25(7), 930–945. https://doi.org/10.1080/10494820.2016.1232277
Beer, J. M., Fisk, A. D., & Rogers, W. A. (2014). Toward a framework for levels of robot autonomy in human-robot interaction. Journal of Human-Robot Interaction, 3(2), 74–99. https://doi.org/10.5898/JHRI.3.2.Beer
Brown, T. (2006). Confirmatory factor analysis for applied research. The Guilford Press.
Byrne, B. M. (2016). Structural equation modeling with AMOS: Basic concepts, applications, and programming (3rd ed.). Routledge.
Carmines, E. G., & McIver, J. P. (1981). Analyzing models with unobserved variables. In G. W. Bohrnstedt & E. F. Borgatta (Eds.), Social measurement: Current issues. Sage.
Castelo, N., Schmitt, B., & Sarvary, M. (2019). Human or Robot? Consumer responses to radical cognitive enhancement products. Journal of the Association for Consumer Research, 4(3), 217–230. https://doi.org/10.1086/703462
Charness, G., & Gneezy, U. (2012). Strong evidence for gender differences in risk taking. Journal of Economic Behavior & Organization, 83(1), 50–58. https://doi.org/10.1016/j.jebo.2011.06.007
Chen, H.-R., & Tseng, H.-F. (2012). Factors that influence acceptance of web-based e-learning systems for the in-service education of junior high school teachers in Taiwan. Evaluation and Program Planning, 35(3), 398–406. https://doi.org/10.1016/j.evalprogplan.2011.11.007
Chen, N.-S., Yin, C., Isaias, P., & Psotka, J. (2020a). Educational big data: Extracting meaning from data for smart education. Interactive Learning Environments, 28(2), 142–147. https://doi.org/10.1080/10494820.2019.1635395
Chen, X., Xie, H., Zou, Di., & Hwang, G.-J. (2020b). Application and theory gaps during the rise of Artificial Intelligence in Education. Computers and Education: Artificial Intelligence, 1, 100002. https://doi.org/10.1016/j.caeai.2020.100002
Cheung, G. W., & Rensvold, R. B. (2002). Evaluating goodness-of-fit indexes for testing measurement invariance. Structural Equation Modeling, 9(2), 233–255. https://doi.org/10.1207/S15328007SEM0902_5
Choi, S., Jang, Y., & Kim, H. (2022). Influence of pedagogical beliefs and perceived trust on teachers’ acceptance of educational artificial intelligence tools. International Journal of Human Computer Interaction. https://doi.org/10.1080/10447318.2022.2049145
Dai, Y., Chai, C. S., Lin, P. Y., Jong, M. S. Y., Guo, Y., & Qin, J. (2020). Promoting students’ well-being by developing their readiness for the artificial intelligence age. Sustainability, 12(16), 6597. https://doi.org/10.3390/su12166597
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. https://doi.org/10.2307/249008
Eagly, A. H., & Wood, W. (2012). Social role theory. In Handbook of theories of social psychology (Vol. 2).
Eaton, A. A., Saunders, J. F., Jacobson, R. K., & West, K. (2020). How gender and race stereotypes impact the advancement of scholars in STEM: Professors’ biased evaluations of physics and biology post-doctoral candidates. Sex Roles, 82, 127–141. https://doi.org/10.1007/s11199-019-01052-w
Emin, I., & Sami, S. (2016). The use of cartoons in elementary classrooms: An analysis of teachers’ behavioral intention in terms of gender. Educational Research and Reviews, 11(8), 508–516. https://doi.org/10.5897/ERR2015.2119
Fornell, C., Tellis, G. J., & Zinkhan, G. M. (1982). Validity assessment: A structural equations approach using partial least squares. In Proceedings of the American marketing association educators' conference (Vol. 48, pp. 405–409).
González-Pérez, S., Mateos de Cabo, R., & Sáinz, M. (2020). Girls in STEM: Is it a female role-model thing? Frontiers in Psychology, 11, 2204. https://doi.org/10.3389/fpsyg.2020.02204
Granić, A., & Marangunić, N. (2019). Technology acceptance model in educational context: A systematic literature review. British Journal of Educational Technology, 50(5), 2572–2593. https://doi.org/10.1111/bjet.12864
Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2019). Multivariate data analysis (8th ed.). Cengage Learning.
Holden, H., & Rada, R. (2011). Understanding the influence of perceived usability and technology self-efficacy on teachers’ technology acceptance. Journal of Research on Technology in Education, 43(4), 343–367. https://doi.org/10.1080/15391523.2011.10782576
Hu, Y. H. (2021). Effects and acceptance of precision education in an AI-supported smart learning environment. Education and Information Technologies, 27(2), 2013–2037. https://doi.org/10.1007/s10639-021-10664-3
Islamoglu, H., Kabakci Yurdakul, I., & Ursavas, O. F. (2021). Pre-service teachers’ acceptance of mobile-technology-supported learning activities. Educational Technology Research and Development, 69(2), 1025–1054. https://doi.org/10.1007/s11423-021-09973-8
Istenic, A., Bratko, I., & Rosanda, V. (2021). Are pre-service teachers disinclined to utilize embodied humanoid social robots in the classroom? British Journal of Educational Technology, 52(6), 2340–2358. https://doi.org/10.1111/bjet.13144
Johnson, D. G., & Verdicchio, M. (2017). AI Anxiety. Journal of the Association for Information Science and Technology, 68(9), 2267–2270. https://doi.org/10.1002/asi.23867
Jorgensen, T. D., Pornprasertmanit, S., Schoemann, A. M., & Rosseel, Y. (2021). semTools: Useful tools for structural equation modeling. R package version 0.5–5. Retrieved June 8, 2022 from https://CRAN.R-project.org/package=semTools
Kaban, A. L., & Boy Ergul, I. (2020). Teachers’ attitudes towards the use of tablets in six EFL classrooms. In L. Tomei & E. Podovšovnik (Eds.), Examining the roles of teachers and students in mastering new technologies (Advances in Educational Technologies and Instructional Design) (pp. 284–298). IGI Global. https://doi.org/10.4018/978-1-7998-2104-5.ch015
Kennedy, P. (2002). Learning cultures and learning styles: Myth-understandings about adult (Hong Kong) Chinese learners. International Journal of Lifelong Education, 21(5), 430–445. https://doi.org/10.1080/02601370210156745
Koutromanos, G., Styliaras, G., & Christodoulou, S. (2015). Student and in-service teachers’ acceptance of spatial hypermedia in their teaching: The case of HyperSea. Education and Information Technologies, 20(3), 559–578. https://doi.org/10.1007/s10639-013-9302-8
Lee, J., & Han, S. H. (2021). The Future of Service Post-COVID-19 Pandemic, Volume 1. Springer. https://doi.org/10.1007/978-981-33-4126-5
Lee, M. K., Cheung, C. M., & Chen, Z. (2005). Acceptance of Internet-based learning medium: The role of extrinsic and intrinsic motivation. Information & Management, 42(8), 1095–1104. https://doi.org/10.1016/j.im.2003.10.007
Li, J., & Huang, J.-S. (2020). Dimensions of artificial intelligence anxiety based on the integrated fear acquisition theory. Technology in Society, 63, 101410. https://doi.org/10.1016/j.techsoc.2020.101410
Liaw, S.-S., & Huang, H.-M. (2015). How factors of personal attitudes and learning environments affect gender difference toward mobile learning acceptance. The International Review of Research in Open and Distributed Learning. https://doi.org/10.19173/irrodl.v16i4.2355
Luan, W. S., & Teo, T. (2009). Investigating the technology acceptance among student teachers in Malaysia: An application of the Technology Acceptance Model (TAM). The Asia-Pacific Education Researcher. https://doi.org/10.3860/taper.v18i2.1327
Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence unleashed: An argument for AI in education. Retrieved June 8, 2022 from http://discovery.ucl.ac.uk/1475756/
Lunardon, M., Cerni, T., & Rumiati, R. I. (2022). Numeracy gender gap in STEM higher education: The role of neuroticism and math anxiety. Frontiers in Psychology. https://doi.org/10.3389/fpsyg.2022.856405
Luo, T., So, W. W. M., Wan, Z. H., & Li, W. C. (2021). STEM stereotypes predict students’ STEM career interest via self-efficacy and outcome expectations. International Journal of STEM Education, 8, 1–13. https://doi.org/10.1186/s40594-021-00295-y
Mac Callum, K., Jeffrey, L., & Kinshuk (2014). Factors impacting teachers’ adoption of mobile learning. Journal of Information Technology Education: Research, 13, 141–162.
Marangunić, N., & Granić, A. (2015). Technology acceptance model: A literature review from 1986 to 2013. Universal Access in the Information Society, 14(1), 81–95. https://doi.org/10.1007/s10209-014-0348-1
Mazman Akar, S. G. (2019). Does it matter being innovative: Teachers’ technology acceptance. Education and Information Technologies, 24(6), 3415–3432. https://doi.org/10.1007/s10639-019-09933-z
Moè, A., Hausmann, M., & Hirnstein, M. (2021). Gender stereotypes and incremental beliefs in STEM and non-STEM students in three countries: Relationships with performance in cognitive tasks. Psychological Research Psychologische Forschung, 85(2), 554–567. https://doi.org/10.1007/s00426-019-01285-0
Nikou, S. A., & Economides, A. A. (2017). Mobile-based assessment: Investigating the factors that influence behavioral intention to use. Computers & Education, 109, 56–73. https://doi.org/10.1016/j.compedu.2017.02.005
Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). McGraw-Hill.
Papadakis, S. (2018). Evaluating pre-service teachers’ acceptance of mobile devices with regards to their age and gender: A case study in Greece. International Journal of Mobile Learning and Organisation, 12(4), 336–352. https://doi.org/10.1504/IJMLO.2018.095130
Pedró, F., Subosa, M., Rivas, A., & Valverde, P. (2019). Artificial intelligence in education: Challenges and opportunities for sustainable development. Paris: UNESCO. Retrieved June 8, 2022 from https://unesdoc.unesco.org/ark:/48223/pf0000366994?locale=es
Pelch, M. (2018). Gendered differences in academic emotions and their implications for student success in STEM. International Journal of STEM Education, 5(1), 1–15. https://doi.org/10.1186/s40594-018-0130-7
R Core Team. (2021). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. Retrieved June 8, 2022 from https://www.R-project.org/
Reiss, M. J. (2021). The use of AI in education: Practicalities and ethical considerations. London Review of Education. https://doi.org/10.14324/LRE.19.1.05
Rogers, E. M., Singhal, A., & Quinlan, M. M. (2014). Diffusion of innovations. In An integrated approach to communication theory and research (pp. 432–448). Routledge. https://doi.org/10.4324/9780203887011
Rosseel, Y. (2012). lavaan: An R package for structural equation modeling. Journal of Statistical Software, 48(2), 1–36.
Sánchez-Prieto, J. C., Cruz-Benito, J., Therón, R., & García-Peñalvo, F. J. (2019). How to measure teachers’ acceptance of AI-driven assessment in eLearning. In M. Á. C. González, F. J. R. Sedano, C. F. Llamas, & F. J. García-Peñalvo (Eds.), Proceedings of the Seventh International Conference on Technological Ecosystems for Enhancing Multiculturality (pp. 181–186). ACM. https://doi.org/10.1145/3362789.3362918
Schepers, J., & Wetzels, M. (2007). A meta-analysis of the technology acceptance model: Investigating subjective norm and moderation effects. Information & Management, 44(1), 90–103. https://doi.org/10.1016/j.im.2006.10.007
Scherer, R., & Teo, T. (2019). Unpacking teachers’ intentions to integrate technology: A meta-analysis. Educational Research Review, 27, 90–109. https://doi.org/10.1016/j.edurev.2019.03.001
Shashaani, L. (1993). Gender-based differences in attitudes toward computers. Computers & Education, 20(2), 169–181. https://doi.org/10.1016/0360-1315(93)90085-W
Siyam, N. (2019). Factors impacting special education teachers’ acceptance and actual use of technology. Education and Information Technologies, 24(3), 2035–2057. https://doi.org/10.1007/s10639-018-09859-y
Stephan, M. (2021). Online- und Präsenzlehre aus Sicht von Lehramtsstudierenden. Eine Mixed Methods Studie zu emotionalen und motivationalen Effekten [Online and face-to-face teaching from the perspective of student teachers: A mixed methods study on achievement emotions and motivation]. Friedrich-Alexander-Universität Erlangen-Nürnberg (Germany). Retrieved June 8, 2022 from https://opus4.kobv.de/opus4-fau/frontdoor/index/index/docId/16551
Sun, H., & Zhang, P. (2006). The role of moderating factors in user technology acceptance. International Journal of Human-Computer Studies, 64, 53–78. https://doi.org/10.1016/j.ijhcs.2005.04.013
Tallvid, M. (2016). Understanding teachers’ reluctance to the pedagogical use of ICT in the 1:1 classroom. Education and Information Technologies, 21(3), 503–519. https://doi.org/10.1007/s10639-014-9335-7
Tarraga-Minguez, R., Suarez-Guerrero, C., & Sanz-Cervera, P. (2021). Digital teaching competence evaluation of pre-service teachers in Spain: A review study. IEEE Revista Iberoamericana De Tecnologias Del Aprendizaje, 16(1), 70–76. https://doi.org/10.1109/RITA.2021.3052848
Teo, T. (2010). Measuring the effect of gender on computer attitudes among pre-service teachers. Campus-Wide Information Systems, 27(4), 227–239. https://doi.org/10.1108/10650741011073770
Teo, T. (2012). Examining the intention to use technology among pre-service teachers: An integration of the Technology Acceptance Model and Theory of Planned Behavior. Interactive Learning Environments, 20(1), 3–18. https://doi.org/10.1080/10494821003714632
Teo, T., Fan, X., & Du, J. (2015). Technology acceptance among pre-service teachers: Does gender matter? Australasian Journal of Educational Technology. https://doi.org/10.14742/ajet.1672
Teo, T., Lee, C. B., Chai, C. S., & Wong, S. L. (2009). Assessing the intention to use technology among pre-service teachers in Singapore and Malaysia: A multigroup invariance analysis of the Technology Acceptance Model (TAM). Computers & Education, 53(3), 1000–1009. https://doi.org/10.1016/j.compedu.2009.05.017
Teo, T., & Noyes, J. (2011). An assessment of the influence of perceived enjoyment and attitude on the intention to use technology among pre-service teachers: A structural equation modeling approach. Computers & Education, 57(2), 1645–1653. https://doi.org/10.1016/j.compedu.2011.03.002
Teo, T., & Noyes, J. (2014). Explaining the intention to use technology among pre-service teachers: A multigroup analysis of the Unified Theory of Acceptance and Use of Technology. Interactive Learning Environments, 22(1), 51–66. https://doi.org/10.1080/10494820.2011.641674
Turan, Z., Küçük, S., & Karabey, S. (2022). Investigating pre-service teachers’ behavioral intentions to use web 2.0 Gamification tools. Participatory Educational Research, 9(4), 172–189. https://doi.org/10.17275/per.22.85.9.4
Ursavaş, Ö. F., Yalçın, Y., & Bakır, E. (2019). The effect of subjective norms on pre-service and in-service teachers’ behavioural intentions to use technology: A multigroup multimodel study. British Journal of Educational Technology, 50(5), 2501–2519. https://doi.org/10.1111/bjet.12834
Vandenberg, R. J., & Lance, C. E. (2000). A review and synthesis of the measurement invariance literature: Suggestions, practices, and recommendations for organizational research. Organizational Research Methods, 3(1), 4–70. https://doi.org/10.1177/109442810031002
Venkatesh, V., & Bala, H. (2008). Technology acceptance model 3 and a research agenda on interventions. Decision Sciences, 39(2), 273–315. https://doi.org/10.1111/j.1540-5915.2008.00192.x
Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, 46(2), 186–204. https://doi.org/10.1287/mnsc.46.2.186.11926
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478. https://doi.org/10.2307/30036540
Wang, Y., Liu, C., & Tu, Y.-F. (2021). Factors affecting the adoption of AI-based applications in higher education: An analysis of teachers’ perspectives using structural equation modeling. Educational Technology & Society, 24(3), 116–129. Retrieved June 8, 2023 from https://www.jstor.org/stable/27032860
Wang, Y.-Y., & Wang, Y.-S. (2022). Development and validation of an artificial intelligence anxiety scale: An initial application in predicting motivated learning behavior. Interactive Learning Environments, 30(4), 619–634. https://doi.org/10.1080/10494820.2019.1674887
Wong, G. K. W. (2015). Understanding technology acceptance in pre-service teachers of primary mathematics in Hong Kong. Australasian Journal of Educational Technology. https://doi.org/10.14742/ajet.1890
Yoon, M., & Lai, M. H. (2018). Testing factorial invariance with unbalanced samples. Structural Equation Modeling: A Multidisciplinary Journal, 25, 201–213. https://doi.org/10.1080/10705511.2017.1387859
Zarafshani, K., Solaymani, A., D’Itri, M., Helms, M. M., & Sanjabi, S. (2020). Evaluating technology acceptance in agricultural education in Iran: A study of vocational agriculture teachers. Social Sciences & Humanities Open, 2(1), 100041. https://doi.org/10.1016/j.ssaho.2020.100041
Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education–Where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 1–27. https://doi.org/10.1186/s41239-019-0171-0
Zhai, X., Chu, X., Chai, C. S., Jong, M. S. Y., Istenic, A., Spector, M., Liu, J.-B., Yuan, J., & Li, Y. (2021). A review of Artificial Intelligence (AI) in Education from 2010 to 2020. Complexity, 2021, 1–18. https://doi.org/10.1155/2021/8812542
Zhang, K., & Aslan, A. B. (2021). AI technologies for education: Recent research & future directions. Computers and Education: Artificial Intelligence, 2, 100025. https://doi.org/10.1016/j.caeai.2021.100025
Zimmerman, J. (2006). Why some teachers resist change and what principals can do about it. NASSP Bulletin, 90(3), 238–249. https://doi.org/10.1177/0192636506291521
Acknowledgements
The authors sincerely thank Farrukh Kamran for his diligent proofreading of this paper.
Funding
This work was supported by the German Federal Ministry of Education and Research [Grant number 16DHB4019, obtained by Prof. Dr. Michaela Gläser-Zikuda]. The authors gratefully acknowledge this support.
Author information
Contributions
MGZ, CZ, and FH conceptualized the study; FH, JS, and LP carried out the data collection. CZ analyzed the data and wrote the manuscript draft. MGZ and FH supervised the study. All authors revised the draft and contributed to the final manuscript.
Ethics declarations
Competing interests
The authors report no potential competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Zhang, C., Schießl, J., Plößl, L. et al. Acceptance of artificial intelligence among pre-service teachers: a multigroup analysis. Int J Educ Technol High Educ 20, 49 (2023). https://doi.org/10.1186/s41239-023-00420-7