ChatGPT awareness, acceptance, and adoption in higher education: the role of trust as a cornerstone
International Journal of Educational Technology in Higher Education volume 21, Article number: 46 (2024)
Abstract
As technology continues to advance, the integration of generative artificial intelligence tools into various sectors, including education, has gained momentum. ChatGPT, a widely recognized language model created by OpenAI, has become particularly important in education. This study investigates the awareness, acceptance, and adoption of ChatGPT in higher education institutions across China. It applies the partial least squares structural equation modeling (PLS-SEM) method to data collected from 320 Chinese university students. The study’s conceptual framework integrates key determinants from the Technology Acceptance Model (TAM) and extends it by incorporating perceived intelligence as a critical factor in the adoption process. The findings reveal that ChatGPT awareness significantly influences the intention to adopt ChatGPT. Perceived ease of use, usefulness, and intelligence significantly mediate the association between ChatGPT awareness and ChatGPT adoption intention. Additionally, perceived trust significantly moderates the relationship between ChatGPT awareness and perceived ease of use, usefulness, and intelligence. Moving forward, assessments must promote the safe use of ChatGPT in order to preserve students’ critical thinking skills and inventiveness in their written work. Educators will therefore be crucial in ensuring that artificial intelligence tools are used ethically and appropriately in assessments by providing clear guidelines and instructions.
Introduction
Artificial intelligence (AI) has witnessed remarkable advancements in recent years, fostering innovation across various sectors. In particular, the integration of AI-powered conversational agents such as ChatGPT has garnered substantial attention for its potential to enhance educational experiences (Bernabei et al., 2023). The higher education landscape in China is no exception to this transformative wave, as institutions seek to harness advanced technologies to adapt to the evolving needs of students and educators. As ChatGPT, a language model developed by OpenAI, emerges as a prominent player in the conversational generative AI landscape, understanding the awareness and adoption of such generative AI technologies in the Chinese education sector becomes imperative (Ma & Huo, 2023). The rapid evolution of AI technologies demands a nuanced examination of their integration in educational settings, where the implications for teaching, learning, and institutional practices are profound. With growing interest in AI-based tools, it is essential to understand the intricate dynamics that influence students’ awareness, acceptance, and adoption of ChatGPT in their education (Maheshwari, 2023).
ChatGPT’s potential to provide personalized assistance can contribute to a more inclusive educational environment. However, students’ awareness of the availability and benefits of ChatGPT is essential for ensuring widespread accessibility and learning opportunities (Lai et al., 2023). Students’ awareness and adoption of ChatGPT may also challenge traditional learning methods. Educators must adapt to the evolving educational landscape, balancing AI integration with maintaining the essential human touch in education. Past studies (Rahman, 2023; Saif et al., 2023) have explored ChatGPT’s capacity to generate human-like responses and adapt to individual learning styles, which has sparked interest in its potential to cater to diverse educational needs. With the integration of AI technologies, ethical considerations become paramount. Awareness of ChatGPT’s limitations and potential biases is crucial for responsible adoption. Understanding these aspects can influence students’ trust in the technology and their willingness to incorporate it into their learning processes (Sohail et al., 2023).
The TAM proposed by Davis et al. (1989) suggests that perceived ease of use (PE) and perceived usefulness (PU) are fundamental factors influencing users’ acceptance of technology. PE refers to the degree to which users believe that interacting with ChatGPT is effortless, while PU pertains to the perceived benefits and value derived from its utilization. Previous research (Sarraf et al., 2023) on various technologies, including chatbots and virtual assistants, has consistently demonstrated the significance of these two factors in shaping users’ behavioral intentions. Trust is essential in technology adoption, particularly when individuals interact with technologies such as ChatGPT (Maheshwari, 2023). Prior research (Hyun Baek & Kim, 2023) describes trust as a party’s willingness to be exposed to another party’s actions in the expectation that the latter will carry out a task that is important to the former. Users’ trust in ChatGPT may be influenced by factors such as reliability, security, and the transparency of its operations. ChatGPT adoption underscores the importance of trust as a significant predictor of users’ intentions to use and rely on generative AI technology. The perceived intelligence of ChatGPT is another crucial dimension that can influence user adoption. Past research on the adoption of intelligent systems and AI applications has emphasized the impact of perceived intelligence on user attitudes and behaviors (Ali et al., 2023). Perceived intelligence captures users’ perceptions of ChatGPT’s cognitive abilities and responsiveness; users form impressions of the system’s ability to comprehend and respond intelligently to their queries (Maheshwari, 2023).
Adopting innovative technologies in higher education is pivotal for fostering enhanced learning experiences, promoting academic engagement, and preparing students for the demands of a rapidly evolving workforce (Oviedo-Trespalacios et al., 2023). As institutions in China navigate the complexities of integrating AI-driven tools into their educational frameworks, understanding the factors influencing ChatGPT adoption becomes crucial. This study examines the interplay between awareness, acceptance, and adoption of ChatGPT in China’s higher education context. The TAM provides a theoretical framework for comprehending user behaviors towards new technologies. In this context, perceived ease of use (PE), perceived usefulness (PU), and perceived intelligence (PI) are examined as potential mediators of the relationship between ChatGPT awareness and ChatGPT adoption. The study addresses three research questions: first, does ChatGPT awareness influence ChatGPT adoption intention? Second, do perceived ease of use, usefulness, and intelligence mediate the relationship between ChatGPT awareness and ChatGPT adoption intention? Third, does perceived trust moderate the relationship between ChatGPT awareness and perceived ease of use (PE), usefulness (PU), and intelligence (PI)?
Studies on students’ intentions to use generative AI platforms for learning are available (Bilquise et al., 2023; Hyun Baek & Kim, 2023; Menon & Shilpa, 2023; Niu & Mvondo, 2024). However, a noticeable research gap exists regarding students’ awareness of ChatGPT, particularly in relation to adoption intentions in the education sector. Past studies have focused on various adoption factors, but none has examined awareness as an antecedent of ChatGPT use. Notably, an empirical study conducted in Poland (Strzelecki, 2023) stands out because it investigates students’ acceptance and real-world use of ChatGPT; most other research on ChatGPT has been limited to editorials, reviews, comments, and general discussions. To fill this gap in the ChatGPT literature, the present study focuses on the awareness, acceptance, and adoption of ChatGPT among Chinese university students. Understanding how students use ChatGPT is an essential first step for educators in creating effective teaching, evaluation, and support initiatives and in raising student awareness of the potential for plagiarism. With this knowledge, educators will be able to discuss ChatGPT with students in an appropriate manner and recognize unethical activity that AI has made possible. To the best of our knowledge, no empirical research on this particular topic has been conducted in China so far; thus, this study is significant as one of the first empirical investigations in this region. The main objective of this study is therefore to examine how ChatGPT awareness influences ChatGPT adoption intention, whether perceived ease of use, usefulness, and intelligence mediate the association between ChatGPT awareness and adoption intention, and whether perceived trust moderates the relationship between ChatGPT awareness and perceived ease of use, usefulness, and intelligence.
The rest of this study is organized as follows: Sect. 2 presents the literature review, develops the hypotheses, and proposes the conceptual framework that drives this investigation. Section 3 outlines the study’s methodology. Section 4 presents the study results. Section 5 discusses the findings. Finally, Sect. 6 presents the study’s conclusion, theoretical and practical implications, and limitations.
Theoretical underpinning and hypotheses development
Technology acceptance model (TAM)
The TAM is a widely used theoretical framework for understanding and predicting users’ acceptance and use of technology (Davis, 1989). It posits that perceived ease of use (PE) and perceived usefulness (PU) are the core components that influence users’ intentions to use a technology, which in turn affect its actual usage. Perceived ease of use refers to the degree to which users believe that using a particular technology would be free of effort; for ChatGPT, this might involve how easily educators and students can interact with the tool and integrate it into their educational activities (Hyun Baek & Kim, 2023). Perceived usefulness reflects the degree to which users believe that a technology would enhance their performance; in the case of ChatGPT, it could relate to how beneficial users perceive the AI model to be in supporting various educational tasks (Niu & Mvondo, 2024). Perceived intelligence is not a direct component of TAM but can be considered an additional factor in this context; it refers to users’ perceptions of the intelligence or capabilities of ChatGPT (Ali et al., 2023).
This study proposes that the TAM components PE and PU, together with PI, mediate the relationship between awareness, acceptance, and adoption. In other words, users’ awareness and acceptance may influence these factors, which in turn affect their adoption of ChatGPT in higher education. Perceived trust is proposed as a moderating factor that may influence the strength and direction of the relationships between awareness, acceptance, adoption, and the mediating factors; higher levels of trust might enhance the positive effects of PE, PU, and PI on the adoption of ChatGPT (Maheshwari, 2023). This study uses the TAM framework to understand the factors influencing the awareness, acceptance, and adoption of ChatGPT in higher education in China, with the goal of providing insights into the dynamics of technology acceptance in this specific context. TAM is used as the supporting theory, following evidence from past studies (Hyun Baek & Kim, 2023; Lai et al., 2023; Saif et al., 2023).
ChatGPT awareness and ChatGPT adoption intention
Awareness serves as a critical precursor to technology adoption. Users must be aware of a technology’s existence, functionalities, and potential benefits before considering its adoption. In the context of ChatGPT, awareness encompasses understanding its capabilities, applications, and limitations (Eppler et al., 2023). Awareness can be increased through educational initiatives such as tutorials, demonstrations, or informational campaigns; when users are informed about the potential uses and advantages of ChatGPT, they may develop a more positive attitude towards the technology, fostering a higher intention to adopt it. Past studies (Kamarudin et al., 2022; Lee et al., 2022) have shown that higher levels of awareness are positively correlated with increased adoption intention. The advanced capabilities of ChatGPT to generate coherent responses and adapt to various conversational styles contribute significantly to heightened awareness (Oviedo-Trespalacios et al., 2023). Users are more likely to express interest in adopting ChatGPT when they know its potential to enhance communication, streamline tasks, and improve user experience. Social influence, including recommendations from peers, colleagues, or online communities, also plays a pivotal role in shaping awareness. Positive reviews, testimonials, and success stories shared by early adopters contribute to a broader understanding of ChatGPT’s benefits, fostering a more favorable environment for adoption (Ayinde et al., 2023). The relationship between awareness and adoption intention is well documented in technology research: users are more likely to adopt a technology if they are aware of its existence, capabilities, and potential benefits. In the case of ChatGPT, awareness may include understanding its applications, features, and how it can be integrated into daily tasks (Ali et al., 2023). In light of these findings, the following hypothesis is presented:
H1
ChatGPT awareness positively impacts ChatGPT adoption intention.
Mediating role of perceived ease of use
User awareness is a crucial factor influencing how individuals interact with and perceive ease of use of ChatGPT. When users clearly understand the model’s capabilities and limitations, their expectations are aligned with reality. This awareness can enhance the overall user experience and provide more seamless interaction (Al-Abdullatif, 2023). However, the level of awareness varies among users. Some users may deeply understand ChatGPT technology and recognize its strengths and potential pitfalls. Others may have limited knowledge and approach ChatGPT with different expectations. This divergence in awareness can lead to diverse perceptions of ease of use. ChatGPT’s ease of use is intricately linked to the alignment between user expectations and ChatGPT’s actual capabilities. Users who are well-informed about the model’s limitations are more likely to approach the interaction with realistic expectations, fostering a smoother and more satisfactory experience (Bilquise et al., 2023).
The PE factor is often associated with the user interface and overall user experience. If users find ChatGPT intuitive, user-friendly, and responsive, they are more likely to perceive its ease of use positively. A well-designed interface, clear instructions, and an intuitive interaction flow provide a seamless user experience (Jo, 2023). This positive experience fosters a sense of comfort and confidence, positively influencing users’ intentions to adopt ChatGPT. Moreover, the efficiency and effectiveness of ChatGPT in delivering meaningful and relevant responses contribute to the perceived ease of use. If users consistently receive accurate and helpful information through interactions with ChatGPT, they are more likely to perceive the system as easy to use. The system’s ability to comprehend user inputs, generate coherent responses, and adapt to various contexts enhances its usability and positively influences adoption intention (de Andrés-Sánchez & Gené-Albesa, 2023).
Building awareness is the first step in any technology adoption process. Users must be informed about the existence and potential benefits of ChatGPT; without sufficient awareness, potential users may not even consider adopting this technology (Maheshwari, 2023). A past study (Bernabei et al., 2023) has shown that higher awareness is generally associated with increased intention to adopt new technologies such as ChatGPT. PE is a critical factor influencing users’ decisions to adopt ChatGPT: it refers to the degree to which individuals believe that using the technology will be free of effort. ChatGPT users are likely to form perceptions about how easy it is to interact with and integrate into their existing workflows. Perceived ease of use highlights the importance of user-friendly interfaces, clear documentation, and accessible support systems in promoting ChatGPT adoption (Ali et al., 2023). Therefore, this study examines perceived ease of use as a mediator between ChatGPT awareness and adoption intention. Considering these concepts, the following hypotheses are developed for this study:
H2a
ChatGPT awareness significantly influences perceived ease of use.
H2b
Perceived ease of use significantly influences ChatGPT adoption intention.
H2c
Perceived ease of use significantly mediates the relationship between ChatGPT awareness and ChatGPT adoption intention.
Mediating role of perceived usefulness
As users become increasingly aware of the capabilities of ChatGPT, their perception of usefulness plays a pivotal role in shaping the success and acceptance of this technology. Users perceive the utility of ChatGPT in various contexts, which lays the groundwork for understanding the interplay between awareness and perceived usefulness (Niu & Mvondo, 2024). As awareness of ChatGPT increases, users develop expectations based on their understanding of the model’s capabilities. This study explores how these expectations, whether accurate or not, influence users’ judgments of usefulness. Factors such as transparency and disclosure of potential biases are crucial in shaping user awareness (Doris M. & Brennan, 2018). A past study examined the correlation between ChatGPT awareness, perceived usefulness, and overall user satisfaction (Ayinde et al., 2023). Analyzing user experiences across different applications, industries, and cultural contexts provides insights into the nuanced relationship between awareness and satisfaction. The ethical implications of ChatGPT awareness are addressed, focusing on issues such as user trust, privacy concerns, and the responsible deployment of AI. Understanding the ethical dimensions of ChatGPT integration is essential for mitigating potential negative consequences on perceived usefulness (Lai et al., 2023).
Users are more likely to perceive ChatGPT as useful if they see its relevance to their tasks or job responsibilities. Positive experiences with ChatGPT, such as accurate responses and efficient task completion, contribute to users’ perception of its usefulness. The ease with which users can interact with ChatGPT also shapes this perception, as a user-friendly interface enhances the overall experience (Buabeng-Andoh et al., 2019). This study underscores the significance of perceived usefulness in shaping users’ intentions to adopt ChatGPT, as a past study (Bilquise et al., 2023) addressed emerging challenges and refined the understanding of perceived usefulness in the evolving landscape of conversational AI adoption. Users may perceive ChatGPT as useful for problem-solving and information retrieval, as it can generate clear and contextually appropriate responses based on the input provided. The convenience of interacting with a chatbot for quick information or assistance might enhance its perceived usefulness, and users may find it accessible and easy to use, especially for routine tasks. ChatGPT’s ability to perform various tasks, from answering questions to generating creative content, can further contribute to its perceived usefulness (Strzelecki, 2023). Because perceived usefulness shapes users’ perceptions in this way, developers can enhance the design and implementation of ChatGPT to maximize its acceptance and utility in various domains.
Awareness acts as a catalyst, prompting individuals to explore the perceived benefits and functionalities of ChatGPT. This highlights the importance of awareness campaigns and educational initiatives in promoting the adoption of AI-driven technologies like ChatGPT, and these findings can be leveraged to design targeted interventions that enhance users’ perceptions of ChatGPT’s usefulness (Hyun Baek & Kim, 2023). Emphasizing practical use cases and demonstrating the efficiency and effectiveness of ChatGPT in various domains can contribute to a positive perception of its usefulness. Efforts to enhance user awareness should not be neglected, as a lack of awareness can impede the potential benefits of perceived usefulness. A past study on the dynamics of perceived usefulness provides valuable insights into the factors shaping users’ attitudes toward adopting advanced AI technologies (Saif et al., 2023). Usefulness and sociocultural factors are important influences on user attitudes toward ChatGPT adoption. As society becomes more accustomed to AI, understanding the cognitive processes underlying user adoption becomes increasingly vital for developers, marketers, and policymakers. Therefore, this study explores the mediating role of perceived usefulness in the relationship between ChatGPT awareness and adoption intention, emphasizing the need for a strategic approach to promoting this AI technology’s benefits and practical applications (Ma & Huo, 2023). Consequently, this study puts forward the following hypotheses:
H3a
ChatGPT awareness significantly influences perceived usefulness.
H3b
Perceived usefulness significantly influences ChatGPT adoption intention.
H3c
Perceived usefulness significantly mediates the relationship between ChatGPT awareness and ChatGPT adoption intention.
Mediating role of perceived intelligence
The evolution of artificial intelligence has ushered in an era in which machines not only process information but also display a level of apparent awareness that goes beyond mere functionality, and ChatGPT’s language generation and comprehension place it at the forefront of this evolution. At the core of ChatGPT’s perceived intelligence lies its ability to navigate language fluently, producing responses whose eloquence often impresses users (Ali et al., 2023). ChatGPT awareness introduces a cognitive dimension that goes beyond information processing: as users engage in conversations, the agent’s ability to remember context, recall previous exchanges, and adapt to evolving dialogue creates an impression of conscious thought, blurring the boundary between programmed responses and cognitive intuition and elevating the perceived intelligence of ChatGPT. This perception reflects not only the capabilities of ChatGPT but also the expectations and biases of its users. The awareness of interacting with an artificial entity shapes the user’s perception of intelligence; the knowledge that ChatGPT comprehends the conversational context and adapts to user inputs fosters an environment in which perceived intelligence becomes an amalgamation of machine capability and human expectation (Sarraf et al., 2023).
ChatGPT, developed by OpenAI, represents a state-of-the-art conversational AI system. Studies have explored how users perceive the intelligence of ChatGPT, with findings indicating that perceived intelligence significantly influences users’ attitudes and behavioral intentions toward the system. The ability of ChatGPT to comprehend and generate human-like text significantly impacts users’ perceptions of its intelligence (Sohail et al., 2023). Users associate higher intelligence with systems that provide prompt and contextually relevant responses, enhancing their overall experience. ChatGPT’s ability to adapt to users’ conversational styles and preferences contributes to the perception of intelligence, fostering a more natural and engaging interaction (Bernabei et al., 2023). Instances of ChatGPT generating inaccurate or nonsensical responses may diminish users’ perceptions of its intelligence, highlighting the need for continuous improvement. Furthermore, TAM posits that PE and PU are fundamental predictors of users’ intentions to adopt ChatGPT technology. Recent extensions of TAM incorporate the concept of perceived intelligence, recognizing the importance of users’ perceptions of the system’s cognitive abilities in shaping their adoption intentions (Maheshwari, 2023).
The proliferation of AI-driven conversational agents has captured the attention of both researchers and practitioners, emphasizing the need to understand the factors that drive users’ adoption intentions. ChatGPT, an advanced language model developed by OpenAI, is an intriguing case due to its widespread recognition (Kim et al., 2023). Perceived intelligence, a crucial factor in user interactions with AI, has been acknowledged in prior research as a key determinant of technology acceptance; users’ beliefs regarding the cognitive abilities of ChatGPT are expected to significantly influence their adoption intentions. Users’ awareness of ChatGPT, encompassing knowledge about its capabilities and limitations, is a critical precursor to adoption (Fui-Hoon Nah et al., 2023). Prior studies (Abdelkader, 2023; Sahari et al., 2023) have acknowledged the importance of perceived intelligence in user acceptance of AI technologies. Building on this evidence that awareness of ChatGPT shapes users’ perceptions of its intelligence and, subsequently, their adoption intentions, this study explores the influence of users’ awareness of ChatGPT on their adoption intentions, focusing on the mediating role of perceived intelligence. Based on the above reasoning, the following hypotheses are presented:
H4a
ChatGPT awareness significantly influences perceived intelligence.
H4b
Perceived intelligence significantly influences ChatGPT adoption intention.
H4c
Perceived intelligence significantly mediates the relationship between ChatGPT awareness and ChatGPT adoption intention.
Moderating role of perceived trust
Perceived trust has emerged as a pivotal factor influencing user interactions with ChatGPT. Trust in ChatGPT systems is influenced by factors such as system reliability, transparency, and accountability. Users are more likely to engage with and accept AI technologies when they perceive them as trustworthy (Hyun Baek & Kim, 2023). As users become more aware of ChatGPT capabilities, their trust perceptions may either enhance or hinder their perceived ease of use. Previous research has established that users’ technology awareness significantly influences their perceptions of its ease of use. As users become more familiar with ChatGPT and its capabilities, their expectations and attitudes towards its usability will likely evolve. Initial encounters and learning experiences contribute to forming perceptions of ease or difficulty in utilizing the system. Users’ trust in ChatGPT, stemming from factors such as reliability, transparency, and ethical considerations, is posited to influence the degree to which awareness translates into perceived ease of use. Trust moderates users’ cognitive and affective responses to ChatGPT, consequently impacting their overall experience (Ali et al., 2023).
ChatGPT awareness pertains to users’ knowledge and understanding of the conversational agent’s existence, capabilities, and limitations. Awareness of ChatGPT may positively influence perceived usefulness by providing users with insights into the technology’s capabilities and potential benefits. Perceived usefulness reflects the user’s belief that ChatGPT will enhance their performance and efficiency in specific tasks or activities (Niu & Mvondo, 2024). The relationship between ChatGPT awareness and perceived usefulness is explored, acknowledging that informed users may have more realistic expectations and positive perceptions. High levels of trust may amplify the positive impact of awareness on perceived usefulness. Previous research (Paul et al., 2023) has explored the impact of user awareness on the perception of AI technologies. Awareness is a crucial factor influencing users’ expectations, judgments, and evaluations of AI systems.
The perception of intelligence in AI systems is shaped by their inherent capabilities, users’ awareness of the technology, and their trust in it. Increased awareness is likely to lead to more informed opinions about the system’s capabilities, potentially influencing perceptions of intelligence (Lim et al., 2023). Perceived intelligence is a subjective assessment that users make about an AI system based on their interactions and observations. As AI systems like ChatGPT become more sophisticated, understanding how users perceive their intelligence is essential for designing systems that align with user expectations (Davis et al., 1989). Trust plays a pivotal role in user acceptance and adoption of AI technologies. Users are more likely to engage with and rely on AI systems when they trust the technology to perform as expected (de Andrés-Sánchez & Gené-Albesa, 2023). Users who are more aware of ChatGPT may form judgments about its intelligence based on their trust in the system. Trust can either enhance or mitigate the impact of awareness on perceived intelligence (Abdelkader, 2023). Considering these concepts, the following hypotheses are developed for this study:
H5a
Perceived trust significantly moderates the relationship between ChatGPT awareness and perceived ease of use.
H5b
Perceived trust significantly moderates the relationship between ChatGPT awareness and perceived usefulness.
H5c
Perceived trust significantly moderates the relationship between ChatGPT awareness and perceived intelligence.
Figure 1 shows the theoretical framework.
Research methodology
Data collection and sample characteristics
Awareness of ChatGPT has been gradually increasing among students, driven by digital curiosity, education-related initiatives, and the platform’s expanding popularity for online knowledge seeking. Students are becoming increasingly aware of the possible applications of ChatGPT, which range from academic help to language acquisition and creative exploration (Jo & Bang, 2023). As awareness grows, students are expected to engage with ChatGPT and exploit its possibilities, adding to the growing landscape of artificial intelligence integration in education and daily life. In the current study, the target population is Chinese university students who are eager to adopt and use ChatGPT for research and academic purposes (Ma & Huo, 2023). Therefore, data were collected from students at six universities in Beijing, China, including bachelor’s, master’s, and PhD students. This research used a quantitative approach, and the data for the empirical analysis were collected through an online survey administered to university students from November to December 2023. Given students’ technology awareness and the degree of digitalization in China, online data collection is considered most appropriate for this population (M. F. Shahzad, Xu, Khan et al., 2023). To target the most relevant population, university students who were unfamiliar with ChatGPT and did not use it for academic purposes were excluded from the study.
Moreover, to obtain an adequate number of responses, 450 questionnaires were circulated. All returned questionnaires were screened extensively to identify outliers, missing values, and incomplete responses. Consequently, 130 cases were removed, leaving 320 acceptable cases for analysis, a 71% usable response rate. Roscoe et al. (1975) suggested that a sample size of 30–500 is acceptable for marketing and behavioral studies; hence, the Roscoe approach was adopted for sample size selection, with the 30–500 range used to support the accuracy of the current study. Convenience sampling was utilized as the most suitable method to access the respondents. Although the results may be limited in generalizability, the convenience sampling approach was used because it is convenient for respondents and relevant to the measurement items (Shahzad et al., 2022). In terms of demographics, male students (58%) were more willing than female students (42%) to adopt ChatGPT for academic purposes. All particulars of the demographic information are exhibited in Table 1.
Items measurement
The survey instrument was derived from the TAM and from multiple prior studies on ChatGPT adoption intention and awareness. The instrument is structured into two main parts: the initial segment captures demographic data such as gender, age group, qualification, year of study, and experience with ChatGPT, as shown in Table 1, while the final portion is based on items adapted from past research. To measure the ChatGPT adoption intention (CGPTAI) construct, five items were adapted from prior research and modified to the context of Chinese university students (Strzelecki, 2023). CGPTAI refers to individuals’ or organizations’ willingness or propensity to embrace and integrate ChatGPT, an OpenAI conversational artificial intelligence (AI) model, into their workflows, processes, or interactions (Sahari et al., 2023).
Likewise, we assessed the ChatGPT awareness (CGPTAW) construct with nine items derived from Farrukh, Xu, Marc et al. (2024), with minor modifications made to adapt them from the service sector to the educational context of Chinese university students. CGPTAW refers to the degree to which persons or entities are knowledgeable and informed about ChatGPT, an OpenAI conversational artificial intelligence model (Eppler et al., 2023). Further, the constructs adapted from the TAM, namely perceived usefulness (PU, five items) and perceived ease of use (PE, six items), were taken from Davis et al. (1989) and modified for the Chinese university context. Furthermore, three perceived intelligence (PI) items were adapted from a past study (Maheshwari, 2023), and four perceived trust (PT) items were derived from prior literature on the service sector (Tarhini et al., 2017); both constructs were slightly modified for the Chinese university student context. In terms of ChatGPT adoption, PI is defined as the extent to which people perceive ChatGPT as an intelligent language model (Ali et al., 2023), while PT refers to how much trust students place in this OpenAI conversational language model (Sarraf et al., 2023). All of the aforementioned items are rated on a five-point Likert scale (1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree). Dichotomous demographic items are coded 1 = yes, 2 = no.
Common method bias
We used SPSS software to administer Harman’s one-factor test to thoroughly assess the possibility of common method bias in this investigation. Despite its acknowledged limitations, this approach is still advocated in contemporary research due to its simplicity and diagnostic efficacy (Howard & Henderson, 2023). In our study, a single factor contributed only 27% of the variance, considerably lower than the conventional common method bias criterion of 50% (Podsakoff et al., 2003). These findings strongly imply that the integrity of this study is not jeopardized by common method bias.
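For readers who wish to reproduce this check outside SPSS, the following Python sketch approximates Harman’s single-factor test by examining the variance explained by a single unrotated principal component; the file name and item layout are hypothetical, and the 27% figure above comes from the study’s SPSS output, not from this code.

```python
# Illustrative approximation of Harman's single-factor test (the study used SPSS).
# Assumes "survey_items.csv" holds one column per Likert-scale item; this file
# name and layout are hypothetical.
import pandas as pd
from sklearn.decomposition import PCA

items = pd.read_csv("survey_items.csv")
items = (items - items.mean()) / items.std()   # standardize items before extraction

pca = PCA(n_components=1)                      # single unrotated component
pca.fit(items)

share = pca.explained_variance_ratio_[0]
print(f"Variance explained by the first factor: {share:.1%}")
# Common method bias is flagged only if this share exceeds the 50% criterion
# (Podsakoff et al., 2003); the study reports 27%, well below the threshold.
```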
Analysis and results
The present research employed SmartPLS 4.0 software to examine the conceptual model in two stages: first the measurement model and then the structural model. The SEM technique encompasses path analysis, regression, correlation, and confirmatory factor analysis (Correia et al., 2024; Farrukh et al., 2023). The PLS-SEM method simplifies the study of complicated models with multi-level effects, such as the examination of interactions among constructs in multifaceted models and their mediating roles (Sharma & Virani, 2023). A structural model is used to illustrate the links between the constructs, and the partial least squares multivariate statistical approach is used to evaluate the measurement model simultaneously (Hair et al., 2013). PLS-SEM was used in this study for data screening and analysis, along with checking of underlying assumptions, and to evaluate significance levels when estimating construct loadings, path coefficients, and weights.
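For clarity, the two parts of the model evaluated below can be written in generic reflective PLS-SEM notation (a textbook formulation, not output from SmartPLS):

```latex
% Outer (measurement) model: item x_{ik} reflects its latent construct \xi_k
x_{ik} = \lambda_{ik}\,\xi_k + \varepsilon_{ik}

% Inner (structural) model: each endogenous construct \eta_j is a weighted
% combination of its predictor constructs plus a disturbance term
\eta_j = \sum_{k} \beta_{jk}\,\xi_k + \zeta_j
```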
Measurement model
In the measurement model, we examined the reliability, convergent validity, and discriminant validity of the constructs (Shahzad et al., 2024). In PLS-SEM, construct reliability is determined with the help of Cronbach’s alpha (α), composite reliability (CR), and average variance extracted (AVE). Cronbach’s alpha, introduced by Lee Cronbach, assesses the internal consistency of the constructs under investigation (Henseler et al., 2016). Internal consistency reflects the inter-item correlation within a construct, indicating how closely related its items are (Hair et al., 2013); the value of Cronbach’s alpha increases as the correlation among items increases. The generally accepted threshold for Cronbach’s alpha is 0.70 and above (Bagozzi & Edwards, 1998). See Fig. 2 for the model estimation.
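For reference, Cronbach’s alpha for a construct measured with k items follows the standard textbook formula (stated here for the reader; it is not study output):

```latex
\alpha \;=\; \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{x_i}}{\sigma^{2}_{X}}\right)
```

where \(\sigma^{2}_{x_i}\) is the variance of item i and \(\sigma^{2}_{X}\) is the variance of the summed construct score.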
Further, Table 2 reveals that all Cronbach’s alpha (α) values are greater than 0.70, indicating that the constructs are reliable. Likewise, the composite reliability (CR) scores exceed the accepted criterion of 0.70, indicating strong internal consistency for all variables; the primary goal of composite reliability is to assess the reliability of each construct based on the outer loadings of its items (Henseler et al., 2016). Similarly, the average variance extracted (AVE) is used to test convergent validity, analysing the variance shared between a latent construct and its items, with a threshold value of 0.50 (values beyond 0.70 are considered excellent). All AVE values in this study are above 0.50.
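The CR and AVE values reported in Table 2 follow the standard definitions based on the standardized outer loadings \(\lambda_i\) of a construct’s k items (textbook formulas, given here for completeness):

```latex
\mathrm{CR} \;=\; \frac{\left(\sum_{i=1}^{k}\lambda_i\right)^{2}}
{\left(\sum_{i=1}^{k}\lambda_i\right)^{2} + \sum_{i=1}^{k}\left(1-\lambda_i^{2}\right)}
\qquad
\mathrm{AVE} \;=\; \frac{1}{k}\sum_{i=1}^{k}\lambda_i^{2}
```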
In the current study, discriminant validity is assessed using two methods, the Fornell-Larcker criterion (Fornell & Larcker, 1981) and the heterotrait-monotrait (HTMT) ratio, together with an examination of the cross-loadings of the construct items. HTMT is considered a more effective approach than the Fornell-Larcker criterion because it more clearly distinguishes between constructs (Gefen & Straub, 2005). All of the HTMT and Fornell-Larcker values in the study fall within acceptable limits and are judged suitable for discriminant validity. Table 3 exhibits the discriminant validity results.
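Both criteria can be stated compactly. The Fornell-Larcker criterion requires the square root of each construct’s AVE to exceed its correlations with all other constructs, and HTMT is the ratio of the mean heterotrait correlations to the geometric mean of the mean monotrait correlations; the 0.85 cut-off shown below is the conventional one, as the study does not report which cut-off it applied:

```latex
\sqrt{\mathrm{AVE}_j} \;>\; \max_{k \neq j}\,\lvert r_{jk}\rvert
\qquad
\mathrm{HTMT}_{jk} \;=\;
\frac{\bar{r}_{\text{hetero}(j,k)}}{\sqrt{\bar{r}_{\text{mono}(j)}\;\bar{r}_{\text{mono}(k)}}} \;<\; 0.85
```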
Structural model
In PLS-SEM, the structural model reflects the causal relationships between the latent variables in the conceptual model (Farrukh, Javed, et al., 2024; Xu et al., 2024). The authors used partial least squares structural equation modeling (PLS-SEM) to determine the direct and indirect relationships across constructs; this technique is well suited for predicting and investigating structural relationships because it allows for both full and partial mediation (Hair et al., 2013). Likewise, the variance inflation factor (VIF) is examined to check for multicollinearity among constructs (Farrukh, Xu, An et al., 2024; M. F. Shahzad, Xu, Rehman et al., 2023). The acceptable VIF threshold is below 5; all values in the present research fall within this range, indicating that multicollinearity is not a concern.
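As an illustration of this multicollinearity check (the study obtained VIFs from SmartPLS; the construct-score file and column names below are hypothetical), a minimal sketch:

```python
# Illustrative VIF check on latent construct scores (hypothetical file and
# column names; the study's VIFs come from SmartPLS).
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

scores = pd.read_csv("construct_scores.csv")   # e.g. CGPTAW, PE, PU, PI, PT
X = sm.add_constant(scores)                    # constant term needed for meaningful VIFs

vifs = {col: variance_inflation_factor(X.values, i)
        for i, col in enumerate(X.columns) if col != "const"}
print(vifs)  # values below 5 indicate multicollinearity is not a concern
```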
To assess the coefficients of determination (R²) and the structural path coefficients, the bootstrapping technique was implemented with 5000 resamples at a 95% confidence level (Bagozzi & Edwards, 1998). Table 4 presents the model’s bootstrapping results, which show that all hypothesized relationships among the constructs, whether direct or indirect, are significant. ChatGPT awareness (CGPTAW) positively impacts ChatGPT adoption intention (CGPTAI) (β = 0.077, t = 2.976, p = 0.003), supporting H1. Furthermore, ChatGPT awareness has a significant impact on perceived ease of use (PE) (β = 0.258, t = 4.370, p = 0.000), supporting H2a. Perceived ease of use significantly affects ChatGPT adoption intention (β = 0.694, t = 9.870, p = 0.000), supporting H2b. CGPTAW also significantly affects PU (β = 0.431, t = 7.308, p = 0.000), supporting H3a, and PU positively impacts CGPTAI (β = 0.082, t = 2.879, p = 0.004), supporting H3b. Likewise, CGPTAW is significantly associated with PI (β = 0.225, t = 3.656, p = 0.000), supporting H4a, and PI positively affects CGPTAI (β = 0.196, t = 2.711, p = 0.007), supporting H4b.
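The logic behind the reported t-values can be illustrated with a minimal bootstrap of a single path, here a simple standardized regression of CGPTAI on CGPTAW over hypothetical construct scores; SmartPLS resamples the full PLS path model, so this sketch is only a conceptual analogue:

```python
# Minimal bootstrap of one path coefficient (conceptual sketch only; SmartPLS
# bootstraps the full PLS path model with 5000 resamples).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
data = pd.read_csv("construct_scores.csv")     # hypothetical construct-score file

def path_coefficient(df):
    # Standardized slope of CGPTAI regressed on CGPTAW.
    x = (df["CGPTAW"] - df["CGPTAW"].mean()) / df["CGPTAW"].std()
    y = (df["CGPTAI"] - df["CGPTAI"].mean()) / df["CGPTAI"].std()
    return float(np.polyfit(x, y, 1)[0])

n = len(data)
boot = np.array([path_coefficient(data.iloc[rng.integers(0, n, size=n)])
                 for _ in range(5000)])

estimate = path_coefficient(data)
t_value = estimate / boot.std(ddof=1)          # bootstrap standard error
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print(f"beta = {estimate:.3f}, t = {t_value:.2f}, 95% CI [{ci_low:.3f}, {ci_high:.3f}]")
```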
In our mediation analysis, we examined three mediators: perceived ease of use (PE), perceived usefulness (PU), and perceived intelligence (PI). We established significant mediation effects in the relationship between CGPTAW and CGPTAI (β = 0.179, 0.036, 0.044; t = 3.981, 2.582, 2.402; p < 0.05), supporting hypotheses H2c, H3c, and H4c. In terms of moderation, the results reveal that perceived trust (PT) significantly moderates the association between ChatGPT awareness and perceived ease of use, usefulness, and intelligence (β = 0.152, 0.081, 0.133; t = 2.551, 2.292, 2.314; p < 0.05). Thus, H5a, H5b, and H5c are all supported. Further, the R² values show how much variance in each dependent variable is explained by its predictors; in this research model, the R² values are PE = 12.7%, PU = 34.0%, PI = 9.3%, and CGPTAI = 71%. Figure 2 shows all of the path coefficient values.
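Written out in generic regression form, the mediated and moderated structure summarized above is as follows (the notation is ours; the a, b, c′, and w coefficients correspond to the standardized estimates reported in Table 4 and Fig. 2):

```latex
% Mediator equations, each including the trust interaction (moderation) term
\mathrm{PE} = a_1\,\mathrm{CGPTAW} + m_1\,\mathrm{PT} + w_1(\mathrm{CGPTAW}\times\mathrm{PT}) + \zeta_1
\mathrm{PU} = a_2\,\mathrm{CGPTAW} + m_2\,\mathrm{PT} + w_2(\mathrm{CGPTAW}\times\mathrm{PT}) + \zeta_2
\mathrm{PI} = a_3\,\mathrm{CGPTAW} + m_3\,\mathrm{PT} + w_3(\mathrm{CGPTAW}\times\mathrm{PT}) + \zeta_3

% Outcome equation
\mathrm{CGPTAI} = c'\,\mathrm{CGPTAW} + b_1\,\mathrm{PE} + b_2\,\mathrm{PU} + b_3\,\mathrm{PI} + \zeta_4

% Each indirect (mediated) effect is the product of its two constituent paths
\text{indirect}_k = a_k \times b_k, \quad k \in \{1, 2, 3\}
```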
Furthermore, Figs. 3, 4 and 5 show the moderating effect of perceived trust on the relationships between ChatGPT awareness and PE, PU, and PI. These figures show that under high perceived trust, the positive relationships between ChatGPT awareness and PE, PU, and PI are strengthened.
Discussion
The study’s main aim was to assess ChatGPT awareness and its adoption intentions among Chinese university students using the TAM. Perceived ease of use, usefulness, and intelligence significantly mediate the association between ChatGPT awareness and ChatGPT adoption intention, and perceived trust significantly moderates the relationship between ChatGPT awareness and perceived ease of use, usefulness, and intelligence. To fulfill this purpose, a conceptual framework was developed to assess ChatGPT awareness with the help of the TAM constructs (PE, PU, and PI) and its effect on ChatGPT adoption intention in the context of Chinese university students. We statistically verified the proposed conceptual model on Chinese university students who used the OpenAI ChatGPT application for academic activities and examined the moderating effect of PT on the relationships between CGPTAW and PE, PU, and PI. To obtain appropriate results, data were collected from 320 university students who use ChatGPT for academic purposes; a survey-based questionnaire was used to collect feedback on using the ChatGPT application and distributed to the intended audience. The results of the study are discussed in five main parts:
First, the results show that ChatGPT awareness positively affects ChatGPT adoption intention. OpenAI’s ChatGPT application has been recognized as a significant tool for improving students’ academic learning, and given their increased awareness of this technology, students are likely to agree on its practical application. Supporting this argument, a study by Ali et al. (2023) examined the impact of ChatGPT awareness among research scholars and students; the results revealed that awareness of the ChatGPT application helps scholars and students boost their knowledge and skill sets and makes them more willing to adopt the application to enhance their self-learning capabilities. Because the usage of the ChatGPT application is still emerging, there remains a dearth of research on the actual intention to adopt this OpenAI tool.
Second, the study examined the mediating effect of PE between CGPTAW and CGPTAI. The findings reveal that this mediating role has been little explored in prior studies (Maheshwari, 2023; Saif et al., 2023), as earlier work focused on PE in other technology adoption contexts, whereas the current study focuses on CGPTAW among students. Additionally, students who feel more comfortable and at ease using ChatGPT show greater willingness to adopt the application.
Third, the current study examined the mediating role of PU in the relationship between CGPTAW and CGPTAI. This finding is consistent with prior literature (Niu & Mvondo, 2024), as users experience greater productivity and enhanced performance when applying ChatGPT in their professional lives. Similarly, when students experience the benefits of using ChatGPT for tasks such as project and assignment completion, report writing assistance, and software development, their intention to adopt ChatGPT is consequently enhanced.
Fourth, the mediating effect of perceived intelligence between ChatGPT awareness and ChatGPT adoption intention was examined in the present Chinese context. The findings reveal that Chinese university students who find ChatGPT intelligent and capable in providing academic assistance are more willing to adopt and use it. This result aligns with a prior study (Ali et al., 2023), which explains that college students find the ChatGPT application highly intelligent because of its prompt responses to technical queries with a minimal error rate; consequently, students become more aware of the application and more willing to adopt it for educational purposes.
Lastly, the moderating role of PT was examined in the associations between CGPTAW and PE, PU, and PI. The results reveal that perceived trust significantly moderates these relationships. Perceived trust has been extensively studied in the prior literature on technology adoption, consumer behavior, and consumer psychology (Bilquise et al., 2023; Hyun Baek & Kim, 2023), but its moderating role in the context of ChatGPT adoption had not been examined; the current study fills this gap. The results suggest that when students perceive the usage and performance of ChatGPT as trustworthy, the likelihood that they will adopt ChatGPT increases significantly. The present study’s results thus contribute theoretically and practically to the information technology, technology management, and consumer psychology literature.
Conclusion
The comprehensive investigation into the factors prompting the adoption intention and subsequent utilization of ChatGPT, an AI-driven chat system, sheds light on pivotal determinants crucial for academia. This analysis has delineated the multifaceted aspects shaping users’ inclinations toward adopting ChatGPT, offering valuable insights into strategic avenues for enhancing awareness, acceptance, and utilization within the TAM framework. The present research has five main outcomes. First, it confirmed that CGPTAW positively impacts CGPTAI. Second, PE has a significant mediating role in the association between CGPTAW and CGPTAI. Third, PU mediates the relationship between CGPTAW and CGPTAI. Fourth, PI has a significant mediating role between CGPTAW and CGPTAI. Lastly, PT significantly moderates the relationships between CGPTAW and PE, PU, and PI. These findings offer actionable strategies and insights for technology developers, organizations, and academia to enhance the acceptance, utilization, and further development of AI-driven chat systems like ChatGPT. By addressing these determinants, stakeholders can significantly influence user perceptions and intentions, fostering greater adoption and utilization of this innovative technology.
Theoretical implications and practical contribution
The present study provides several theoretical contributions that help in understanding ChatGPT awareness and adoption intentions with respect to PE, PU, and PI among Chinese university students. First, to the best of our knowledge, it is the only study that has examined ChatGPT awareness and its adoption intentions in terms of PE, PU, and PI among Chinese students; multiple studies have discussed ChatGPT adoption in various contexts, but none has examined the relationships between PE, PU, PI, and CGPTAW. Second, the direct effect of CGPTAW on CGPTAI has not been examined in prior studies (Maheshwari, 2023; Strzelecki, 2023); this study therefore examined awareness of ChatGPT as a novel technology and its effect on students’ ultimate intention to adopt the technology. Third, integrating ChatGPT awareness with the TAM constructs is a novel contribution to the technology management literature and the academic sector. Fourth, applying the TAM and its constructs within the context of Chinese university students is a valuable contribution that supports the study’s authenticity and credibility for future researchers; notably, perceived ease of use (PE), usefulness (PU), and intelligence (PI) were critical drivers of students’ attitudes and behaviors toward ChatGPT. Finally, this study contributes to the development of existing conceptual models produced for various OpenAI applications by providing significant evidence of these parameters within the setting of this new technology.
The current research presents practical insights that can help educational institutions integrate ChatGPT into academic settings. The favorable effect of PE on CGPTAI shows the importance of accessible interfaces and responsive tools. Nonetheless, when integrating ChatGPT into academia, educational institutions must examine the implications for examinations. While students will inevitably use ChatGPT in their academic activities, careful consideration should be given to its incorporation into the assessment phase. Designing assessments that support responsible ChatGPT use while maintaining students’ critical thinking skills and originality in written work is vital. Instructors can help in this area by offering clear rules and guidelines on utilizing AI tools appropriately and ethically during assessments, and professors can support the effective use of ChatGPT in academic evaluations by ensuring that students understand its intended purpose and limitations.
Limitations and future directions
The present research has certain limitations and offers recommendations for future researchers. First, this study was limited to a single environment, universities in Beijing, China, and used a relatively small sample size, thus restricting the generalizability of the results to other settings or nations. Second, data were collected only from university students; future research should look into other educational contexts and use larger sample sizes to improve external validity. Third, future researchers could conduct comparative studies across other nations or student cohorts (e.g., college vs. university) to examine possible differences in CGPTAI and usage. Fourth, students’ opinions and adoption intentions may have been influenced by their limited knowledge of ChatGPT in various ways; students who are unfamiliar with the application may rely largely on first impressions or prior beliefs to shape their attitudes and intentions, so the constrained familiarity shown in our study is a snapshot of their initial experiences. Finally, social, ethical, and socioeconomic factors can affect students’ beliefs and behaviors regarding ChatGPT, and examining these differences can provide a more comprehensive understanding of the domain.
Data availability
Data will be made available on request.
References
Abdelkader, O. A. (2023). ChatGPT’s influence on customer experience in digital marketing: Investigating the moderating roles. Heliyon, 9(8), e18770. https://doi.org/10.1016/j.heliyon.2023.e18770.
Al-Abdullatif, A. M. (2023). Modeling students’ perceptions of Chatbots in Learning: Integrating Technology Acceptance with the value-based adoption model. Education Sciences, 13(11), 1151. https://doi.org/10.3390/educsci13111151.
Ali, F., Yasar, B., Ali, L., & Dogan, S. (2023). Antecedents and consequences of travelers’ trust towards personalized travel recommendations offered by ChatGPT. International Journal of Hospitality Management, 114(August), 103588. https://doi.org/10.1016/j.ijhm.2023.103588.
Ayinde, L., Wibowo, M. P., Ravuri, B., Emdad, F., & Bin (2023). ChatGPT as an important tool in organizational management: A review of the literature. Business Information Review, 40(3), 137–149. https://doi.org/10.1177/02663821231187991.
Bagozzi, R. P., & Edwards, J. R. (1998). Organizational Research Methods. https://doi.org/10.1177/109442819800100104.
Bernabei, M., Colabianchi, S., Falegnami, A., & Costantino, F. (2023). Students’ use of large language models in engineering education: A case study on technology acceptance, perceptions, efficacy, and detection chances. Computers and Education: Artificial Intelligence, 5(October), 100172. https://doi.org/10.1016/j.caeai.2023.100172.
Bilquise, G., Ibrahim, S., & Salhieh, S. E. M. (2023). Investigating student acceptance of an academic advising chatbot in higher education institutions. Education and Information Technologies. https://doi.org/10.1007/s10639-023-12076-x.
Buabeng-Andoh, C., Yaokumah, W., & Tarhini, A. (2019). Investigating students’ intentions to use ICT: A comparison of theoretical models. Education and Information Technologies, 24(1), 643–660. https://doi.org/10.1007/s10639-018-9796-1.
Correia, A. B., Shahzad, M. F., Martins, J. M., & Baheer, R. (2024). Impact of green human resource management towards sustainable performance in the healthcare sector: role of green innovation and risk management. Cogent Business & Management, 11(1). https://doi.org/10.1080/23311975.2024.2374625.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly: Management Information Systems, 13(3), 319–339. https://doi.org/10.2307/249008.
Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35(8), 982–1003.
de Andrés-Sánchez, J., & Gené-Albesa, J. (2023). Explaining policyholders’ Chatbot Acceptance with an Unified Technology Acceptance and Use of Technology-based model. Journal of Theoretical and Applied Electronic Commerce Research, 18(3), 1217–1237. https://doi.org/10.3390/jtaer18030062.
Doris, M., N. M.-D., & Brennan (2018). Journal of Applied Learning & Teaching. The Irish Journal of Psychology, 1(1), 25–34.
Eppler, M., Ganjavi, C., Ramacciotti, L. S., Piazza, P., Rodler, S., Checcucci, E., Gomez Rivas, J., Kowalewski, K. F., Belenchón, I. R., Puliatti, S., Taratkin, M., Veccia, A., Baekelandt, L., Teoh, J. Y. C., Somani, B. K., Wroclawski, M., Abreu, A., Porpiglia, F., Gill, I. S., & Cacciamani, G. E. (2023). Awareness and Use of ChatGPT and Large Language Models: A Prospective Cross-sectional Global Survey in Urology. European Urology, xxxx, 1–8. https://doi.org/10.1016/j.eururo.2023.10.014.
Farrukh, M., Javed, I., & Zahid, I. (2024). The influence of the marketing orientation of textile companies in increasing their competitiveness. In Industry and innovation: Textile industry (pp. 95–118). Cham: Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-57804-5.
Farrukh, M., Xu, S., Naveed, W., & Nusrat, S. (2023). Investigating the impact of artificial intelligence on human resource functions in the health sector of China: A mediated moderation model. Heliyon, 9(11), e21818. https://doi.org/10.1016/j.heliyon.2023.e21818.
Farrukh, M., Xu, S., An, X., & Javed, I. (2024). Assessing the impact of AI-chatbot service quality on user e-brand loyalty through chatbot user trust, experience and electronic word of mouth. Journal of Retailing and Consumer Services, 79(March), 103867. https://doi.org/10.1016/j.jretconser.2024.103867.
Farrukh, M., Xu, S., Marc, W., & Yang, X. (2024). Artificial intelligence and social media on academic performance and mental well-being: Student perceptions of positive impact in the age of smart learning. Heliyon, 10(8), e29523. https://doi.org/10.1016/j.heliyon.2024.e29523.
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39. https://doi.org/10.2307/3151312.
Fui-Hoon Nah, F., Zheng, R., Cai, J., Siau, K., & Chen, L. (2023). Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration. Journal of Information Technology Case and Application Research, 25(3), 277–304. https://doi.org/10.1080/15228053.2023.2233814.
Gefen, D., & Straub, D. (2005). A practical guide to factorial validity using PLS-Graph: Tutorial and annotated example. Communications of the Association for Information Systems, 16(July). https://doi.org/10.17705/1cais.01605.
Hair, J. F., Ringle, C. M., & Sarstedt, M. (2013). Partial least squares structural equation modeling: Rigorous applications, better results and higher acceptance. Long Range Planning, 46(1–2), 1–12. https://doi.org/10.1016/j.lrp.2013.01.001.
Henseler, J., Hubona, G., & Ray, P. A. (2016). Using PLS path modeling in new technology research: Updated guidelines. Industrial Management and Data Systems, 116(1), 2–20. https://doi.org/10.1108/IMDS-09-2015-0382.
Howard, M. C., & Henderson, J. (2023). A review of exploratory factor analysis in tourism and hospitality research: Identifying current practices and avenues for improvement. Journal of Business Research, 154(October 2022), 113328. https://doi.org/10.1016/j.jbusres.2022.113328.
Hyun Baek, T., & Kim, M. (2023). Is ChatGPT scary good? How user motivations affect creepiness and trust in generative artificial intelligence. Telematics and Informatics, 83(March), 102030. https://doi.org/10.1016/j.tele.2023.102030.
Jo, H. (2023). Decoding the ChatGPT mystery: A comprehensive exploration of factors driving AI language model adoption. Information Development, 0(0), 02666669231202764. https://doi.org/10.1177/02666669231202764.
Jo, H., & Bang, Y. (2023). Analyzing ChatGPT adoption drivers with the TOEK framework. Scientific Reports, 13(1), 1–17. https://doi.org/10.1038/s41598-023-49710-0.
Kamarudin, N. A. B., Ikram, R. R. R., Azman, F. N. B., Ahmad, S. S. S., & Zainuddin, D. B. (2022). A study of the effects of short-term AI coding course with gamification elements on students’ cognitive mental health. TEM Journal, 11(4), 1854–1862. https://doi.org/10.18421/TEM114-53.
Kim, J., Kim, J. H., Kim, C., & Park, J. (2023). Decisions with ChatGPT: Reexamining choice overload in ChatGPT recommendations. Journal of Retailing and Consumer Services, 75(June), 103494. https://doi.org/10.1016/j.jretconser.2023.103494.
Lai, C. Y., Cheung, K. Y., & Chan, C. S. (2023). Exploring the role of intrinsic motivation in ChatGPT adoption to support active learning: An extension of the technology acceptance model. Computers and Education: Artificial Intelligence, 5(October), 100178. https://doi.org/10.1016/j.caeai.2023.100178.
Lee, Y. F., Hwang, G. J., & Chen, P. Y. (2022). Impacts of an AI-based chatbot on college students’ after-class review, academic performance, self-efficacy, learning attitude, and motivation. Educational Technology Research and Development, 70(5), 1843–1865. https://doi.org/10.1007/s11423-022-10142-8.
Lim, W. M., Gunasekara, A., Pallant, J. L., Pallant, J. I., & Pechenkina, E. (2023). Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators. International Journal of Management Education, 21(2), 100790. https://doi.org/10.1016/j.ijme.2023.100790.
Ma, X., & Huo, Y. (2023). Are users willing to embrace ChatGPT? Exploring the factors on the acceptance of chatbots from the perspective of AIDUA framework. Technology in Society, 75(28), 102362. https://doi.org/10.1016/j.techsoc.2023.102362.
Maheshwari, G. (2023). Factors influencing students’ intention to adopt and use ChatGPT in higher education: A study in the Vietnamese context. Education and Information Technologies, 1–19. https://doi.org/10.1007/s10639-023-12333-z.
Menon, D., & Shilpa, K. (2023). Chatting with ChatGPT: Analyzing the factors influencing users’ intention to use the Open AI’s ChatGPT using the UTAUT model. Heliyon, 9(11).
Niu, B., & Mvondo, G. F. N. (2024). I am ChatGPT, the ultimate AI Chatbot! Investigating the determinants of users’ loyalty and ethical usage concerns of ChatGPT. Journal of Retailing and Consumer Services, 76(May 2023), 103562. https://doi.org/10.1016/j.jretconser.2023.103562.
Oviedo-Trespalacios, O., Peden, A. E., Cole-Hunter, T., Costantini, A., Haghani, M., Rod, J. E., Kelly, S., Torkamaan, H., Tariq, A., Albert Newton, D., Gallagher, J., Steinert, S., Filtness, A. J., & Reniers, G. (2023). The risks of using ChatGPT to obtain common safety-related information and advice. Safety Science, 167(April), 106244. https://doi.org/10.1016/j.ssci.2023.106244.
Paul, J., Ueno, A., & Dennis, C. (2023). ChatGPT and consumers: Benefits, pitfalls and future research agenda. International Journal of Consumer Studies, 47(4), 1213–1225. https://doi.org/10.1111/ijcs.12928.
Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879–903. https://doi.org/10.1037/0021-9010.88.5.879.
Rahman, S. (2023). Examining students’ intention to use ChatGPT: Does trust matter? Australasian Journal of Educational Technology, 39(6), 51–71. https://doi.org/10.14742/ajet.8956.
Roscoe, A. M., Lang, D., & Sheth, J. N. (1975). Follow-up methods, questionnaire length, and market differences in mail surveys. Journal of Marketing, 39(2), 20. https://doi.org/10.2307/1250111.
Sahari, Y., Al-Kadi, A. M. T., & Ali, J. K. M. (2023). A cross sectional study of ChatGPT in translation: Magnitude of use, attitudes, and uncertainties. Journal of Psycholinguistic Research, 2937–2954. https://doi.org/10.1007/s10936-023-10031-y.
Saif, N., Sajid, U. K., & Imrab, S. (2023). Chat-GPT; validating Technology Acceptance Model (TAM) in education sector via ubiquitous learning mechanism. Computers in Human Behavior, 108097. https://doi.org/10.1016/j.chb.2023.108097.
Sarraf, S., Kar, A. K., & Janssen, M. (2023). How do system and user characteristics, along with anthropomorphism, impact cognitive absorption of chatbots – introducing SUCCAST through a mixed methods study. Decision Support Systems, 178(November 2023), 114132. https://doi.org/10.1016/j.dss.2023.114132.
Shahzad, F., Shahzad, M. F., Dilanchiev, A., & Irfan, M. (2022). Modeling the influence of paternalistic leadership and personality characteristics on alienation and organizational culture in the aviation industry of Pakistan: The mediating role of cohesiveness. Sustainability (Switzerland), 14(22). https://doi.org/10.3390/su142215473.
Shahzad, M. F., Xu, S., Khan, K. I., & Hasnain, M. F. (2023a). Effect of social influence, environmental awareness, and safety affordance on actual use of 5G technologies among Chinese students. Scientific Reports, 1–16. https://doi.org/10.1038/s41598-023-50078-4.
Shahzad, M. F., Xu, S., Rehman, O., & Javed, I. (2023b). Impact of gamification on green consumption behavior integrating technological awareness, motivation, enjoyment and virtual CSR. Scientific Reports, 1–18. https://doi.org/10.1038/s41598-023-48835-6.
Shahzad, M. F., Xu, S., & Baheer, R. (2024). Assessing the factors influencing the intention to use information and communication technology implementation and acceptance in China’s education sector. Humanities and Social Sciences Communications, 11, 1–15. https://doi.org/10.1057/s41599-024-02777-0.
Sharma, S., & Virani, S. (2023). Antecedents of international entrepreneurial intentions among students of international business: The mediating role of international entrepreneurship education. Journal of International Entrepreneurship. https://doi.org/10.1007/s10843-023-00329-2.
Sohail, S. S., Farhat, F., Himeur, Y., Nadeem, M., Madsen, D. Ø., Singh, Y., Atalla, S., & Mansoor, W. (2023). Decoding ChatGPT: A taxonomy of existing research, current challenges, and possible future directions. Journal of King Saud University - Computer and Information Sciences, 35(8), 101675. https://doi.org/10.1016/j.jksuci.2023.101675.
Strzelecki, A. (2023). Students’ acceptance of ChatGPT in higher education: An extended Unified Theory of Acceptance and Use of Technology. Innovative Higher Education. https://doi.org/10.1007/s10755-023-09686-1.
Tarhini, A., Deh, R. M., Al-Busaidi, K. A., Mohammed, A. B., & Maqableh, M. (2017). Factors influencing students’ adoption of e-learning: A structural equation modeling approach. Journal of International Education in Business, 10(2), 164–182. https://doi.org/10.1108/JIEB-09-2016-0032.
Xu, S., Khan, K. I., & Shahzad, M. F. (2024). Examining the influence of technological self-efficacy, perceived trust, security, and electronic word of mouth on ICT usage in the education sector. Scientific Reports, 1–16. https://doi.org/10.1038/s41598-024-66689-4.
Acknowledgements
The authors thank the editors and anonymous reviewers for their valuable feedback to improve the quality of this work.
Funding
This work received financial support from the National Natural Science Foundation of China under grant number 72074014.
Author information
Contributions
All authors contributed equally to this work.
Ethics declarations
Conflict of interest
The authors declare no conflict of interest.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Shahzad, M.F., Xu, S. & Javed, I. ChatGPT awareness, acceptance, and adoption in higher education: the role of trust as a cornerstone. Int J Educ Technol High Educ 21, 46 (2024). https://doi.org/10.1186/s41239-024-00478-x