An advantage of digital technologies is that they are highly scalable. This also applies to the education sector, where large classes remain a popular method of instruction around the world because of their cost efficiency (Yardi, 2008). Technology is increasingly being used in classrooms to assist lecturers in achieving various pedagogical goals, and scalable technologies can serve these objectives (Becker, Brown, Dahlstrom, Davis, DePaul, Diaz, & Pomerantz, 2018). Four such technologies are classroom chat (CC), classroom response systems (CRS), e-lectures, and mobile virtual reality (VR). User acceptance is a prerequisite for technology effectiveness and can be assessed with the technology acceptance model (TAM; Davis, 1989). By measuring perceived usefulness (PU) and perceived ease of use (PEOU) after users have worked with the tools for several months, the model predicts behavioral intention (BI) and, in turn, actual usage. While the acceptance of some of these digital technologies has been investigated individually, direct comparisons are lacking. Our aim was to compare the technology acceptance of the four tools after three months of usage.
To make these comparisons, we had 94 students use the digital tools for three months and then measured PU, PEOU, and BI with a questionnaire.
In the remainder of the introduction, we first discuss the four digital learning technologies and then summarize research on the technology acceptance model. We then briefly discuss alignment theory and our expected results.
Digital learning technologies
Classroom response system
CRSs, which are also known as student response systems, personal response systems, immediate response systems, electronic response systems, clickers, and audience response systems, have been widely accepted among educators (Hunsu, Adesope, & Bayly, 2016). A CRS allows lecturers to pose multiple-choice questions before, during, and after their lecture, which students can answer on their own electronic devices. The answers are aggregated in real time and the results displayed to individual students or the whole class. This allows lecturers to monitor the students’ understanding of topics (Caldwell, 2007) and, if necessary, elaborate on points that the students did not understand. Owing to its anonymity, a CRS helps to activate shy and hesitant students who would otherwise not ask questions in class (Graham, Tripp, Seawright, & Joeckel, 2007). Moreover, because students’ attention span lasts approximately 20 minutes (Burns, 1985), lecturers can use a CRS to break up long presentations, activate students directly, and let them actively process the content they have just heard. Meta-analyses have established small effects on cognitive learning outcomes and medium effects on noncognitive outcomes (Cain, Black, & Rohr, 2009; Castillo-Manzano, Castro-Nuno, Lopez-Valpuesta, Sanz-Diaz, & Yniguez, 2016).
We combined the CRS with course revision tasks because repetition is crucial for long-term retention (Ebbinghaus, 2013; Pechenkina, Laurence, Oates, Eldridge, & Hunter, 2017). We posed the questions immediately after each lecture and then discussed the results at the beginning of the following lecture. This approach was motivated by several goals: (a) to encourage students to revisit the course content between lectures, (b) to activate students during lectures, (c) to give students a preview of typical exam questions, and (d) to provide feedback to students and the lecturer on students’ learning progress.
Classroom chat (CC)
While lecturers talk at the front of a class, students can also communicate with each other. These two simultaneous types of communication are called the frontchannel and the backchannel (Aagard, Bowen, & Olesova, 2010): the frontchannel refers to the lecturer communicating with the class, and the backchannel refers to the communication among the students themselves. Lecturers have used chat tools to leverage backchannel communication, with the goal of enabling students to discuss each other’s questions about the lecture. One disadvantage of this type of backchannel communication is its potential to distract students from following the lecture (Yardi, 2008). To avoid this problem, we adopted a new approach whereby students submit questions anonymously to the lecturer, who in turn can respond during the lecture or at the beginning of the following lecture. In their meta-analysis, Schneider and Preckel (2017) found that questions can lead to higher levels of achievement. Moreover, using electronic applications that allow students to remain anonymous has been shown to encourage students who tend to be rather anxious and shy, especially when the topics are controversial (Stowell, Oldham, & Bennett, 2010).
E-lectures
Recording lectures has become popular at many universities (Brockfeld, Muller, & de Laffolie, 2018; Liu & Kender, 2004). Providing students with recordings allows them to review content individually and learn at their own pace (Demetriadis & Pombortsis, 2007). Moreover, e-lectures are useful when students miss a lecture due to illness or other reasons. This might explain why e-lectures are generally popular with students and lecturers (Gormley, Collins, Boohan, Bickle, & Stevenson, 2009). The impact of e-lectures on learning outcomes is currently under debate, with conflicting evidence (Demetriadis & Pombortsis, 2007; Jadin, Gruber, & Batinic, 2009; Spickard, Alrajeh, Cordray, & Gigante, 2002).
Mobile virtual reality
VR allows the creation of a virtual environment using a computer and a headset. Using visual input, sometimes supplemented by audio and tactile input, to achieve immersion in this virtual environment is a common goal in training and entertainment scenarios (Hawkins, 1995). Owing to significant improvements in affordability and processing power, VR is being used in various educational settings (Cochrane, 2016; Merchant, Goetz, Cifuentes, Keeney-Kennicutt, & Davis, 2014). Building on the pedagogical model of constructivism, VR makes it possible to provide multiple representations of reality, knowledge construction, reflective practice, and context-dependent knowledge by creating simulations, virtual worlds, and games (Merchant et al., 2014; Mikropoulos & Natsis, 2011). VR in education is increasingly used in science, technology, mathematics, and medicine (Radianti, Majchrzak, Fromm, & Wohlgenannt, 2020), with effect sizes ranging from 0.3 to 0.7 (Merchant et al., 2014). The use of VR in large psychology classes is, however, extremely rare (Mikropoulos & Natsis, 2011). VR has been shown to increase learning outcomes, particularly in individual gameplay tasks (Freina & Ott, 2015; Merchant et al., 2014), although some authors criticize the small number and low quality of studies focusing on learning outcomes (Jensen & Konradsen, 2018; Richards & Taylor, 2015).
Mobile VR is a specific subset of VR in which, instead of a desktop computer and a specialized headset, the processor and screen of a smartphone are used together with a cardboard headset to create the immersive experience. The advantages of mobile VR are the significantly lower costs for hardware and software. In tertiary education settings in developed countries, most students own a smartphone capable of mobile VR, which translates to near complete availability. Despite its low costs, mobile VR remains less popular in educational settings than the higher-quality and higher-cost PC-based VR systems, accounting for only about 20% of uses (Radianti et al., 2020). The learning outcomes for mobile VR and desktop setups are similar (Moro, Stromberga, & Stirling, 2017).
Technology acceptance model
Based on the theory of reasoned action by Ajzen and Fishbein (1977), Davis (1989) developed the TAM to predict the use of a technology. The TAM predicts BI with PU and PEOU, and BI in turn is considered a very good predictor of future actual usage (Sumak, Hericko, & Pusnik, 2011). The TAM is the most widely employed and best-known model for measuring the acceptance of various technologies (Estriegana, Medina-Merodio, & Barchino, 2019). It has been successfully applied to a multitude of technologies such as social media (Abrahim, Mir, Suhara, Mohamed, & Sato, 2019; Dumpit & Fernandez, 2017), virtual learning environments (Kurt & Tingöy, 2017), mobile and digital libraries (Hamaad Rafique, Shamim, & Anwar, 2019), learning analytics visualization (Papamitsiou & Economides, 2015), and gamification (Rahman, Ahmad, & Hashim, 2018), and across many cultures (Cheung & Vogel, 2013; H. Rafique et al., 2018). PU has a substantial effect on BI, but PEOU has often been found to have indirect effects, mediated by PU, with its direct effect on BI ranging from nonexistent to high (Gefen & Straub, 2000; King & He, 2006; Hamaad Rafique et al., 2019).

Many extensions and adjustments to the TAM have been proposed, including by the original author (Abdullah & Ward, 2016; Cheung & Vogel, 2013; Estriegana et al., 2019; Venkatesh & Bala, 2008; Venkatesh & Davis, 2000). Notable among these is the Unified Theory of Acceptance and Use of Technology (UTAUT) (Venkatesh, Morris, Davis, & Davis, 2003), in which the authors analyzed the TAM and competing prediction models and proposed that performance expectancy, effort expectancy, social influence, and facilitating conditions, moderated by gender, age, experience, and voluntariness of use, influence BI and use behavior. UTAUT was later extended to UTAUT 2, where Venkatesh, Thong, and Xu (2012) added hedonic motivation, price value, and habit as influencing factors. While UTAUT 2 has garnered in excess of 3000 citations on Google Scholar (Tamilmani, Rana, Prakasam, & Dwivedi, 2019), its original application lies in predicting the use of consumer technologies. As such, the factor price value is not suited to this study because the digital tools were made available to the students for free. Furthermore, with seven factors affecting BI and three moderators, UTAUT 2 takes many more variables into account but does not provide much more explanatory power than the TAM. The TAM’s high explanatory power and parsimony have contributed to its remaining a highly influential method of measuring technology acceptance (Granić & Marangunić, 2019; Scherer, Siddiq, & Tondeur, 2019). Considering that one factor of UTAUT 2 does not apply to our setting and that asking participants to provide information on so many factors for four digital tools twice would likely have affected participation negatively while adding little explanatory power, we decided to use the TAM.
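To make the structure of the model explicit, the TAM paths can be sketched as two regression-style equations; the coefficients β and error terms ε below are generic notation, not estimates from this study or from the literature cited above:

\[
\mathrm{PU} = \beta_{1}\,\mathrm{PEOU} + \varepsilon_{1},
\qquad
\mathrm{BI} = \beta_{2}\,\mathrm{PU} + \beta_{3}\,\mathrm{PEOU} + \varepsilon_{2}.
\]

In this formulation, PEOU influences BI directly through β3 and indirectly through the mediated path β1 × β2, which is consistent with the mixed findings on the size of its direct effect noted above.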
Alignment theory
In course design, alignment refers to the degree to which the expectations conveyed by the course material and the assessment match (FitzPatrick, Hawboldt, Doyle, & Genge, 2015; Webb, 1997). When attending a course, students evaluate which information will be assessed and steer their efforts towards that material. Building on this idea, Biggs (2003) proposed in his constructive alignment model that lecturers should first define the intended learning outcomes and then determine an appropriate assessment regime. For example, introductory courses in general psychology usually include a lecture about theories of cognitive processes and their applications. The intended learning outcomes of such lectures could be remembering, understanding, and applying the theories, which correspond to the first three levels of Bloom’s taxonomy (Bloom, 1956). End-of-semester course examinations often entail multiple-choice tests using items designed to assess those learning outcomes (Tozoglu, Tozoglu, Gurses, & Dogar, 2004). As we will explain in the next section, alignment theory is highly relevant for making predictions about technology acceptance.
Expected results
In their comprehensive meta-analysis of the TAM, King and He (2006) found that the average path loadings were 0.186 for PEOU → BI, 0.505 for PU → BI, and 0.469 for PEOU → PU. Since all tools are straightforward to use (they offer core functionalities without additional customization options), we expected them all to rate highly on PEOU. We expected that CRS and e-lectures would both score high on PU and BI since both technologies provide additional help in preparing for the course assessment and thus align well with student goals (Biggs, 2003). Additionally, CRS gives students a preview of what exam questions might look like and provides direct feedback on how well exam-like questions were answered (Cain et al., 2009). E-lectures support students by allowing them to revisit lectures in order to prepare for the exam (Demetriadis & Pombortsis, 2007). Previous research indicates that both CRS and e-lectures are very popular with students (Gormley et al., 2009; Hunsu et al., 2016).
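As an illustrative calculation (our own arithmetic, applying the standard decomposition of a mediated effect into direct and indirect components to King and He’s (2006) average loadings), the total effect of PEOU on BI is approximately

\[
0.186 + 0.469 \times 0.505 \approx 0.42,
\]

which is still smaller than the direct PU → BI loading of 0.505. Because we expected PEOU to be uniformly high across the four tools, differences in BI between the tools should therefore be driven mainly by differences in PU.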
We expected CC and VR to score in the medium range on PU and BI. Previous research indicates that mainly shy students profit from CC (Stowell et al., 2010), so many students would experience only a small benefit, if any. VR might be exciting and visceral, but it does not provide exam-relevant input, and thus its alignment with student goals is low (Biggs, 2003; Merchant et al., 2014).