- Research article
- Open Access
Adaptive quizzes to increase motivation, engagement and learning outcomes in a first year accounting unit
International Journal of Educational Technology in Higher Education, volume 15, Article number: 30 (2018)
Adaptive learning presents educators with a possibility of providing learning opportunities tailored to each student’s individual needs. As such, adaptive learning may contribute to both improving student learning outcomes and increasing student motivation and engagement. In this paper, we present the findings from a pilot of adaptive quizzes in a fully online unit at an Australian higher education provider. Results indicate that adaptive quizzes contribute to student motivation and engagement, and students perceive that adaptive quizzes support their learning. Interestingly, our results reveal that student scores did not increase significantly as a result of the introduction of adaptive quizzes, indicating that students may not be best placed to assess their own learning outcomes. Despite this, we conclude that adaptive quizzes have value in increasing student motivation and engagement.
Higher education is a rapidly changing landscape. The student cohort is shifting due to increased enrolments from diverse backgrounds: international students who may not speak English as a first language, mature-age students and students from lower socioeconomic status backgrounds. Furthermore, courses are increasingly offered partially or entirely online, and make use of technology for delivery or assessment. Perhaps related to these changes is the finding that university attrition rates have increased since 2009 (Department of Education and Training, 2016). Personalised adaptive learning may offer a solution to combat high attrition rates and cater for a diverse student cohort. Adaptive technologies “facilitate the personalisation of educational activities” (O'Donnell, Lawless, Sharp, & Wade, 2015, p. 25). Adaptive learning provides participants with a personalised and dynamic learning experience which may enhance motivation, engagement, satisfaction and potentially learning outcomes.
This paper examines the use of adaptive quizzes. Adaptive quizzes were developed as part of a Digital Assessment Project, an initiative between Swinburne Online and Swinburne University of Technology in developing and implementing prototype assessments to be trialled by Swinburne Online. Adaptive quizzes were one of eight prototype assessments developed for use in a fully online unit. The aims of the quizzes were twofold: 1) to enable students to prepare for the unit’s three summative assessments, and 2) as a mechanism to keep students motivated and engaged in their own learning progress throughout the teaching period. Comparisons between student learning outcomes in each assessment and their overall learning outcomes in each teaching period were examined along with students’ perceptions of their levels of motivation, engagement and learning outcomes as a result of accessing the quizzes.
Adaptive testing is a system of testing that adjusts to the current level of the student and the ongoing progress the student makes. Students are presented with tasks of intermediate difficulty: if the tasks are completed successfully, the difficulty level increases; if not, the difficulty level decreases. This allows for a more accurate model of the student’s current level of attainment. A key benefit of online adaptive learning is efficiency, as questions are quickly tailored to a student’s level, thereby decreasing tedium. Learners acquire knowledge by building associations between different concepts and gain skills by building progressively complex actions from component skills (Jisc e-learning team, 2010).
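The level-adjustment principle described above can be sketched as a simple staircase rule. This is an illustrative sketch of the general idea only, not the algorithm of any particular platform; real adaptive-testing engines typically use richer models such as item response theory.

```python
def next_difficulty(current_level, answered_correctly, min_level=1, max_level=3):
    """One step of a basic adaptive staircase: move up one difficulty level
    after a correct response, down one after an incorrect response, clamped
    to the available range. Illustrative only."""
    if answered_correctly:
        return min(current_level + 1, max_level)
    return max(current_level - 1, min_level)
```

Repeated over a sequence of items, a rule of this kind settles around the level at which the student succeeds and fails in roughly equal measure, which is what sharpens the estimate of attainment.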
Adaptive quizzes allow for formative assessment feedback on basic conceptual competence. However, this is not always an advantage as it can encourage an instrumental approach to testing where the student learns in order to pass the test rather than to learn skills that will be relevant in a wider context. The testing effect, whereby the act of being tested not only assesses learning but enhances later long-term retention, is recognised (Roediger & Karpicke, 2006), though this effect decreases as the complexity of the material increases (Van Gog & Sweller, 2015). The use of adaptive quizzes can also encourage assessment design to focus on objective knowledge rather than the development of process skills. Adaptive quizzes do, however, provide opportunities for students to be given immediate feedback, including automated feedback, on aspects of performance most in need of improvement (Simkins & Maier, 2010).
Adaptive learning systems have been shown to improve student engagement and learning outcomes (Barla et al., 2010; Förster, Weiser, & Maur, 2018; Phelan & Phelan, 2011), yet other studies have shown no clear benefits (Becker-Blease & Bostwick, 2016; Griff & Matter, 2013; Murray & Pérez, 2015). Student factors such as motivation have been shown to affect the impacts of adaptive learning technologies (Förster et al., 2018; Liu et al., 2017). A case study of online student drills in calculus (Jonsdottir, Jakobsdottir, & Stefansson, 2015) found that learners were more likely to stop requesting drill items when the last question was answered correctly than when it was answered incorrectly. The probability of stopping also increased with higher scores but decreased with increased difficulty and with the number of questions answered.
Adaptive testing in the form of online quizzes can allow for more frequent practice and can be used for distributed or spaced practice which has been shown to have positive learning benefits (Carpenter, Cepeda, Rohrer, Kang, & Pashler, 2012; Dunlosky, 2013; Karpicke & Bauernschmidt, 2011; Van der Kleij, Feskens, & Eggen, 2015). Quizzes can be used to reinforce learning at regularly spaced intervals providing the opportunity and prompting for distributed practice. Online testing, both fixed-item and adaptive, in the form of multiple-choice questions can be inexpensive to create, administer and score. However, if the potential of the technology is to be exploited and simulations and video integrated into testing, costs will increase as a result. The long-term costs of distorting the curriculum toward declarative knowledge, instead of the conceptual structures required, can be high.
Students are generally satisfied with adaptive quizzing (Becker-Blease & Bostwick, 2016; House, Sweet, & Vickers, 2016) or other adaptive learning technologies (Barla et al., 2010; Griff & Matter, 2013; Liu, McKelroy, Corliss, & Carrigan, 2017) as a learning tool. Students generally report that such technologies are easy to use and perceive that they help to increase their learning of course content.
Despite much effort invested in adaptive learning management systems (LMSs) in past decades, none of these systems are widely used outside educational research (Georgouli, 2011, p. 66). This may be because the authoring tools currently available for creating personalised learning activities are not easy for non-specialists to use, given the technical competencies required (Armani, 2005).
Definitions of key terms
In this paper, the term adaptive learning (technology) is used to refer to the customisation of the learning experience by dynamically making adjustments based on learner input (Liu et al., 2017; Somyürek, 2015). Adaptive quizzes are online quizzes which make use of adaptive learning technologies.
We define student engagement as “both the time and energy students invest in educationally purposeful activities and the effort institutions devote to effective educational practices” (Kuh, Cruce, Shoup, Kinzie, & Gonyea, 2008, p. 542). Student engagement has been defined in a number of ways and is often implied rather than explicitly defined in research studies (Kuh et al., 2008; Trowler, 2010; Zepke, 2015). More recent studies have critiqued the conceptualisation of student engagement and the assumption that it links to quality learning and teaching, and student success (For discussion, see Zepke, 2015).
For the purpose of this study, we define student motivation as the desire to learn. This definition does not make reference to or differentiate between intrinsic or extrinsic motivation (Pintrich, Smith, Garcia, & McKeachie, 1993; Stage & Williams, 1990), where intrinsic motivation is the desire to learn for the sake of gaining understanding (Byrne & Flood, 2005), and extrinsic motivation is learning for the sake of external rewards (Paulsen & Gentry, 1995).
This paper has two aims. The first is to examine possible relationships between adaptive quizzes and student learning outcomes in the three unit assessments and overall learning outcomes in the unit. The second aim is to investigate students’ perspectives about the extent to which they believe adaptive quizzes aid their motivation, engagement and learning outcomes. Therefore the research questions this study sought to answer are:
Was there an association between using adaptive quizzes and improved student learning outcomes in the three assessments and on their overall learning outcomes?
What were the students’ perceptions about their levels of motivation, engagement and learning outcomes as they completed the adaptive quizzes?
The adaptive quizzes were trialled in an online unit offered by the higher education provider, Swinburne Online. Swinburne Online is a partnership between the employment website Seek and Swinburne University, an Australian urban university. Swinburne Online delivers online tertiary qualifications on Swinburne University’s behalf. The higher education courses are developed for the online environment and delivered fully online.
Data collection included both qualitative and quantitative data. Qualitative data comprised student survey responses and quantitative data comprised student scores. The quantitative data were examined from two teaching periods of a first year core accounting unit, both in 2014. A convenience sampling method was used in this study, with participants comprising 849 students in total from two consecutive iterations of the same unit: teaching period 1 (TP1) and teaching period 2 (TP2). TP1 was the iteration of the unit prior to the introduction of the quizzes in TP2. Quantitative data from both TP1 and TP2 in the form of students’ results in each piece of assessment as well as their overall result in the unit were collected from the LMS. The quantitative results of student scores were analysed using SPSS version 23.
The TP1 cohort was included in this study as a comparison group. The class size of TP2 was almost double that of TP1 due to the requirement that all students take this core unit and the fact that it is not offered in every teaching period. The entry requirements and all assessments in TP1 and TP2 were unchanged, and the students in both cohorts were similar in that they were in their first year of the course and completing the core accounting unit in their first, second or third teaching period. Students in both TP1 and TP2 were given exactly the same three assessments in their units. The quizzes were an extra activity that students in TP2 could choose to participate in. They did not replace any other task or activity and, as such, would have added to the total time students devoted to their study. The aim of the quizzes was to prepare students for the unit’s three summative assessments. The quizzes were made available to students for the entire duration of TP2.
Qualitative data were collected from the survey responses of students in TP2 when adaptive quizzes were introduced, to ascertain students’ perceived motivation, engagement and learning outcomes following the use of the quizzes. The surveys were analysed using Opinio and Microsoft Excel to produce descriptive statistics. Open-ended questions from the student survey responses were analysed thematically (Braun & Clarke, 2006) by the lead and second authors to investigate students’ perceptions of how the adaptive quizzes affected their motivation, engagement and learning outcomes. Braun and Clarke’s six-step process of analysis and reporting patterns was used to analyse the survey data. The first author searched for and reviewed themes and the second author reviewed approximately 10% of responses to validate this process. Inter-rater reliability was calculated as a percentage of agreement in this two-rater model. High levels of inter-rater reliability of 90% were achieved through comparisons and moderation of any inconsistencies that arose between the authors.
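In a two-rater model, percentage agreement is simply the share of coded items on which the raters assign the same theme. The following minimal sketch illustrates the calculation; the function and data names are ours, not the study's.

```python
def percent_agreement(rater_a, rater_b):
    """Inter-rater reliability as simple percentage agreement: the
    proportion of items coded identically by two raters, times 100."""
    if len(rater_a) != len(rater_b):
        raise ValueError("both raters must code the same set of items")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100 * matches / len(rater_a)

# e.g. two raters coding four survey responses into themes (hypothetical data)
codes_a = ["motivation", "feedback", "feedback", "engagement"]
codes_b = ["motivation", "feedback", "difficulty", "engagement"]
agreement = percent_agreement(codes_a, codes_b)  # 75.0
```

Simple agreement does not correct for chance agreement (as Cohen's kappa does), which is why moderation of inconsistencies, as the authors describe, remains important.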
The participants in this study comprised the 259 students in TP1 and 590 in TP2 enrolled at census date. All were first year students in an online introductory accounting unit offered by Swinburne Online, an Australian higher education provider. Of the 590 enrolled in the unit in TP2, 354 submitted all three assessment pieces. These 354 students are those recognised as still participating in the unit until the end, and are the students who would have been participating when the link to the online survey was uploaded to the unit.
There were 47 students who completed the online survey, producing a response rate of 13% from these 354 students. Respondents were split evenly by gender. Of these respondents, 21.3% were in the 18 to 29 year age bracket, 38.3% were aged between 30 and 39, and 27.7% between 40 and 49. This response rate corresponds to a nonresponse rate of 87%. Representation of populations is recognised as being more important than response rates (Cook, Heath, & Thompson, 2000); however, as the survey response was relatively low and self-selected, the respondents cannot be seen as representative of the entire cohort. Nonetheless, they provide a level of insight into these students’ experiences that may indicate possible trends requiring further research.
The narrow range of demographic data available is a limitation to this study as it does not capture the diversity that may exist in a group of students studying online, or the differences between the demographics of on-campus and fully-online students (Johnson, 2015). For example, online students tend to be older (Quinn & Stein, 2013) and are more likely to be female than on-campus students (Quinn, 2011). The responses should therefore be viewed with a degree of caution, as the low response rate may indicate a bias present in the cohort of students responding.
The percentages of students attempting each of the quizzes are provided in Fig. 1. Students were required to complete a level before progressing to the next level. As such, levels 1 and 2 can be interpreted as generally representing completed quiz attempts. These quiz attempt percentages are calculated from the 590 students enrolled in the unit at census date. The first quiz (topic 1, level 1) was accessed by 63% of students, while the last quiz (topic 8, level 3) was attempted by only 6% of students.
Adaptive quiz design
The quizzes were introduced to assist students’ independent learning and were designed with a focus on improving student motivation, engagement and learning outcomes. The quizzes were designed and structured to support progression of concepts and understanding in preparing for assessments by asking appropriate questions, by spacing the timing of quizzes throughout the teaching period and by providing immediate feedback. The simple correct/incorrect feedback that was provided in this project has been found to be effective for lower-order learning outcomes (Van der Kleij et al., 2015), which was the focus of these adaptive quizzes. The quizzes aligned in content with all three of the unit’s summative assessments:
Assessment 1: Three online multiple-choice tests (25%)
Assessment 2: Ratio analysis report (25%)
Assessment 3: Final exam (50%)
Assessment 1 examined students’ knowledge of all areas of the unit and was divided into three online tests to be completed over the duration of the unit. Assessment 2 was designed to examine students’ ability to analyse data. Assessment 3 examined the entire course content. Before each of assessments 1, 2 and 3, students who had participated in the quizzes had already been tested on the relevant content, preparing them for the formal assessment.
All questions were multiple-choice items. The adaptive quizzes used a different question pool to the assessed tests, but were the same style of questions and used the same interface. All questions related to the same learning outcomes as the assessed fixed-item multiple-choice tests. The adaptive functionality was provided using adaptive release based on ability in the previous level of questions. Students were provided a correct/incorrect label for each question. Each quiz contained 10 random questions and the students had unlimited attempts at each quiz. Immediate feedback was provided as it can help to reduce the risk of learning false facts from multiple-choice quizzes (Marsh, Roediger, Bjork, & Bjork, 2007).
Three levels of difficulty were developed for each of the eight topic areas covered in the unit. All students started with questions requiring foundation skills or lower-order thinking skills on each of the eight topics. Once a student had achieved a score of 90% or more in a quiz, the next, more difficult level became available, progressing the student towards higher-order thinking skills. Lower-level questions focused on developing knowledge and comprehension, while higher-level questions encouraged students to apply and analyse knowledge learned. The questions covered the majority of the learning outcomes in the unit. Student progress was indicated by a progress bar in the weekly quizzes for each topic and overall for the unit. The progress bar increased for every 10% improvement achieved.
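The adaptive-release rule described above (10 random questions per attempt, a 90% score to unlock the next level, a progress bar filled in 10% increments) can be sketched as follows. The function names are illustrative, not those of the platform's LMS:

```python
import random

def draw_quiz(question_pool, n_questions=10):
    """Draw one quiz attempt: n random questions from the topic's pool.
    Unlimited re-attempts each draw a fresh random selection."""
    return random.sample(question_pool, n_questions)

def unlocks_next_level(correct, total=10, threshold=0.9):
    """Adaptive release: the next, harder level opens once the student
    scores 90% or more on the current level."""
    return correct / total >= threshold

def progress_bar_segments(best_score_pct, step=10):
    """Progress bar filled one segment per 10% improvement achieved."""
    return int(best_score_pct // step)
```

Under this rule a score of 9 or 10 out of 10 unlocks the next level, matching the mastery criterion described in the study, while lower scores leave the student practising at the current level.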
The quiz items aligned with the learning outcomes of the unit. The following three questions illustrate the three difficulty levels and test students’ analytical thinking skills; the correct answer to each is marked. A lower-level difficulty question is provided in 1), a medium-level difficulty question in 2), and a higher-level difficulty question in 3). Questions 1 and 3 relate to the learning outcome: describe the effect of business transactions on the key elements and components of the three main accounting reports. Question 2 relates to the learning outcome: understand simple cost concepts and their relevance to small business management.

1) Under accrual accounting, income is:
(a) The cash received from customers for goods or services provided by the business.
(b) The cash collected from accounts receivable.
(c) The money the owner puts into the business to start operations.
(d) The inflow of assets or the reduction in liabilities that arise as a result of trading operations. (correct)

2) The classification of a cost as either direct or indirect depends primarily on:
(a) The computer tracing system within the organisation.
(b) The definition of the cost object. (correct)
(c) The knowledge of the accountant.
(d) The type of business.

3) Which transaction would not appear in the body of a statement of cash flows?
(a) Acquisition of assets by means of a share issue.
(b) Purchase of a building by incurring a mortgage to the seller.
(c) Conversion of a liability to equity.
(d) All of the above. (correct)
A comprehensive pool of questions was developed for each difficulty level, so that students re-attempting a level saw a random selection of questions testing the topic area that they were struggling with. A test on a particular topic was not considered complete until the student had obtained a score of 9 or 10 out of 10 on the higher-level questions. Students retained access to the quiz levels they had mastered as a revision resource, should they want to answer further questions at this level. Additionally, if they were struggling with medium-level questions, they were encouraged by the online tutor to return to the lower-level questions, but did not lose access to the medium-level questions.
Student learning outcomes in assessments and overall
In order to answer the question of whether the use of adaptive quizzes was associated with increased student learning outcomes in the unit, student results from teaching periods 1 and 2 were compared. Only the results where students had submitted an assessment piece were examined. For this reason, the number of students (N) completing each assessment piece in each teaching period varies. Student scores for each of the three assessments and overall for the unit in TP1 and TP2 are provided in Table 1.
Histograms were generated to examine the distribution of the data for each individual assessment in each teaching period and overall scores. In all cases, the scores were non-normally distributed with negatively skewed distributions. The overall standard deviation in both teaching periods was quite large which indicates increased variability in the observed data. This could be attributed to the diversity of the first year students studying in the unit, as online student cohorts often display a wide range of backgrounds (Johnson, 2015).
While the means generated for each assessment piece and overall were found to be higher in TP2, it was necessary to test whether this increase in assessment scores was significant. To test whether the student scores for TP1 and TP2 differed significantly, a Mann-Whitney U test was performed for each assessment and overall. This test compares two independent groups when the dependent variable is continuous but not normally distributed.
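For readers unfamiliar with the procedure, the U statistic is computed from pooled ranks. The following pure-Python sketch shows the core calculation only; it is a simplified illustration, not the SPSS routine the study used, and it omits the normal approximation needed to obtain p-values:

```python
def mann_whitney_u(sample_a, sample_b):
    """Mann-Whitney U for the first sample: pool both samples, assign
    average ranks to tied values, then U_a = R_a - n_a(n_a + 1)/2,
    where R_a is the sum of sample_a's ranks."""
    pooled = sorted((value, idx) for idx, value in enumerate(sample_a + sample_b))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1  # extend over a run of tied values
        avg_rank = (i + j) / 2 + 1  # average 1-based rank of the tied run
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = avg_rank
        i = j + 1
    n_a = len(sample_a)
    rank_sum_a = sum(ranks[:n_a])
    return rank_sum_a - n_a * (n_a + 1) / 2
```

U ranges from 0 (every value in the first sample below every value in the second) to n_a × n_b (the reverse); significance is then assessed from U's sampling distribution, which statistical packages such as SPSS handle internally, and the effect size r is obtained from the test's standardised Z score as r = Z / √N.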
A Mann-Whitney U test indicated the overall assessment scores in TP2 were higher (N = 446, Median = 64) than the overall assessment scores in TP1 (N = 244, Median = 60); however, this difference fell just short of significance; U = 49,530, p = 0.051. The alpha level for each Mann-Whitney U test was set at 0.05.
Results of a Mann-Whitney U test revealed that assessment 1 scores were higher in TP2 (N = 446, Median = 20) than in TP1 (N = 258, Median = 19.5). This difference was significant; U = 52,352.5, p = 0.046, r = −0.07534. The scores for assessment 2 in TP2 and TP1 shared the same median (TP1: N = 179, TP2: N = 366, Median = 19). Despite the identical medians, the Mann-Whitney U test indicated that the distributions of the two groups’ assessment 2 scores differed in shape. This difference was significant; U = 3566.50, p = 0.030, r = −0.09166. The effect sizes for assessments 1 and 2 in TP2 are very small, indicating that the results cannot be interpreted as evidence that the adaptive quizzes contributed to better assessment scores. A Mann-Whitney U test indicated the assessment 3 scores in TP2 were higher (N = 354, Median = 30) than the scores in TP1 (N = 179, Median = 29); this difference was not significant; U = 30,130, p = 0.355.
These results indicate that scores for assessments 1 and 2 improved significantly after the introduction of the adaptive quizzes, but that assessment 3 scores did not. Given the very small effect sizes and the low number of students completing the quizzes, however, we cannot conclude that the use of adaptive quizzes and improved student learning outcomes were related.
Students’ perceived motivation, engagement and learning outcomes
Student survey data were examined in order to explore whether the students perceived benefits from the adaptive quizzes in relation to their levels of motivation, engagement and learning outcomes. The results presented here are adjusted for relative frequency, as up to three students left individual questions unanswered.
Overwhelmingly, respondents enjoyed using the adaptive quizzes for their learning: 88.9% either agreed or strongly agreed that they had enjoyed completing the quizzes, and 95.6% would like to experience such quizzes again in their studies. Students stated that “adaptive quizzes are very useful” and that they would like to see more adaptive quizzes.
The majority of respondents (84.5%) agreed or strongly agreed that receiving regular feedback from the adaptive quizzes motivated them to keep trying. One student stated that the most helpful aspect of the quizzes was “the convenience of doing them and the fact that you can go back and look at answers. The summary at the end of answers to the questions was helpful”. Students particularly appreciated that the feedback was instant, with 95.5% agreeing or strongly agreeing that it was beneficial to receive immediate feedback. Many students stated that the most useful aspect of the quizzes was the instant feedback, with one student suggesting this was “because you get results straight away”.
Most survey participants (82.2%) agreed or strongly agreed that they enjoyed the challenge posed by the increasing difficulty level of the adaptive quizzes. One student stated: “instant feedback and the progressive difficulty was encouraging and promoted self-motivation to solve problems”. Several students’ responses indicated that the most useful aspect of the quizzes was the increasing difficulty levels, with responses containing: “the increasing difficulty levels of the quizzes”, “the quick feedback and increasing difficulty fast tracked my learning”, and “quick feedback and also the increase in levels difficulty”. At least one student, however, could not discern a difference between the difficulty levels, stating that “the degree of difficulty did not seem to increase with each level”.
Over a quarter of respondents (27.3%) indicated that they disliked the limited responses provided by the adaptive quizzes and expressed dissatisfaction with the lack of detail in the feedback they received. Rather than the simple correct/incorrect feedback, students wanted to know where they had gone wrong with their answers. One student stated that “being shown the correct answer if you got a question wrong was great. However if this was backed up with the workings/calculations it would be more beneficial”. Another commented: “when you get a question that needs a formula to work out answer [sic] it just shows you correct answer. It would be more helpful if it gave correct formula [sic] as well”. Many respondents wanted to see the details of how correct answers were calculated, stating: “more detail would have help [sic] as to how the answer was reached however discussions on [the LMS] helped in this area”, and “not having a breakdown of how the answer was worked out made understanding my error harder”. These sentiments were echoed throughout the survey comments: “without a detailed explanation of each question, and the process behind getting the answer right, it was all self-learning”, “it would be great to get a more detailed answer for the results. Explaining how they came to certain answers and providing calculations for the answers would assist in understanding how they came to the answer and be able to apply the methods next time”, and “lack of feedback on the questions I got wrong. Sometimes [sic] could not find the reason my answer was incorrect and ended up checking with either the tutor or other students”.

However, respondents also commented on how they valued the feedback that the quizzes did provide: “the ability to have feedback immediately helped me amend my study while it was fresh”, and “[you] got to know what you were learning each week was being tested as you went”.
A majority of students who responded to the survey (86.4%) agreed or strongly agreed that seeing how the level of the adaptive quizzes changed gave them a good understanding of the level they were at. Furthermore, 88.6% of the students agreed or strongly agreed that the quiz results helped them to identify exactly where their gaps in knowledge were. One student commented: “the quizzes gave me a good understanding of where I was up to. It also showed if I had missed something while reading”. Another stated that for them, the most useful aspect of the adaptive quizzes was “being able to identify what I was missing from that week”. Students felt that the weekly quizzes provided a means of continually testing whether their knowledge was up to the level it needed to be and identifying which areas they needed to work more on. Most respondents (88.9%) agreed or strongly agreed that seeing their performance in an adaptive quiz assisted them with self-assessment of their work, and 77.8% of them agreed or strongly agreed that the quizzes helped them to focus on the important information in the unit.
The students who responded to the survey appreciated the spaced, weekly practice that allowed them to continually practice what they had learnt in class, with 93.3% of students agreeing or strongly agreeing that adaptive quizzes were a good opportunity to practice using their knowledge each week.
Overwhelmingly, the students who completed the survey felt the quizzes had benefits for their learning outcomes. Almost all (93.3%) agreed or strongly agreed that overall, the adaptive quizzes were helpful for their learning. One stated: “I think the quiz [sic] were fantastic for the online study. If I wasn’t completing them after each week I would not know as much as I do. It made me realise if I had missed something etc.”. Furthermore, the quizzes were generally seen as useful for test and exam revision: “the quizzes were great revision for the tests”. Another commented: “the quizzes that you can do and do and do and do are very helpful prep for learnings and exams. Thanks”; however, this sentiment was not universal: “I don’t feel these quizzes have prepared me for the exam. But could be useful in other units”. Criticism of the adaptive quizzes was rare in the data, revealing that the overwhelming response from students was positive.
Discussion and conclusion
This research sought to explore whether the implementation of adaptive quizzes in an online first year unit was associated with improved learning outcomes and student perceptions of increased motivation, engagement and learning outcomes. In regards to the first research question, our findings did not suggest that the adaptive quizzes meaningfully improved student learning outcomes. Assessments 1 and 2 showed statistically significant score differences after the introduction of the adaptive quizzes, but with very small effect sizes. Notably, assessment 2, which involved writing a report, aligned with the adaptive quizzes in content only, not format, while assessment 1, which resembled the quizzes in both form and content, showed only a marginal increase in median score. Furthermore, the increase found in the combined student scores was not significant. We therefore conclude from these findings that the adaptive quizzes, in their current form, were not connected with meaningful improvements in student scores, despite the slight increases found.
Few studies have examined and reported on changes to student performance following the introduction of adaptive testing, and those that do show mixed results (Barla et al., 2010; Becker-Blease & Bostwick, 2016; Griff & Matter, 2013; Murray & Pérez, 2015; Phelan & Phelan, 2011). One of the few studies that found positive changes to student scores after using adaptive testing (Barla et al., 2010) reported that scores increased on average from 2.5 to 3.6 out of 6. This increase was attributed to the use of adaptive testing. Further experiments revealed that adaptive testing assists below-average performing students more than average or above-average performing students. Likewise, a study by Phelan and Phelan (2011) found that students who used an adaptive quiz had statistically significant higher scores than students who did not use the quizzes.
The findings of the current study, however, sit with those that have not found differences between student scores following the use of adaptive quizzes. A recent study (Becker-Blease & Bostwick, 2016), for example, found no statistically significant differences in scores between the students using adaptive quizzes and the control group. Similarly, Griff and Matter (2013) found no statistically significant difference in performance between students using an adaptive learning system and those given online questions. Another recent study (Murray & Pérez, 2015) also found no statistically significant difference in test scores between students using adaptive learning and students using traditional instruction. We argue that reporting on findings that show limited or no measurable effects is essential to maintaining the balance between the reality and the research findings regarding educational technology (Henderson, Selwyn, & Aston, 2017; Selwyn, 2015, 2016).
Regarding the second research question, on student perceptions, findings reveal that those who responded to the survey were generally very favourable toward the adaptive quizzes and perceived that the quizzes helped their learning, in line with recent research showing that students enjoy adaptive quizzes and technologies (Becker-Blease & Bostwick, 2016; Griff & Matter, 2013; House et al., 2016; Rossano, Pesare, & Roselli, 2017). Most of the students enjoyed the quizzes and would like to experience them again. Students appreciated the instant feedback provided by the quizzes, and the regularity of that feedback motivated them. The main improvement students wanted was more comprehensive feedback, particularly detailing where they had gone wrong in their calculations. Interestingly, the aspects of the quizzes that students liked, such as regular and instant feedback, were not specific to adaptive quizzes but could apply more generally to any form of online formative assessment. As such, adaptive multiple-choice quizzes should be viewed as just one of many types of online assessment capable of delivering formative feedback. Further research could also examine whether more comprehensive feedback would have yielded better outcomes for the students.
This research reveals a common divide in the literature: student preference or satisfaction does not necessarily equate to increased learning outcomes as measured through student performance (Bjork, Dunlosky, & Kornell, 2013; Clark, 2010; Dowell & Neal, 1982; Galbraith, Merrill, & Kline, 2012; Sitzmann, Brown, Casper, Ely, & Zimmerman, 2008; Spooren, Brockx, & Mortelmans, 2013; Stark-Wroblewski, Ahlering, & Brill, 2007). While these findings should not lead educators to disregard student preference for, or satisfaction with, their education, they do highlight the need to balance student preference and satisfaction against the goal of optimising learning outcomes.
The findings presented here reveal that there are challenges involved in developing adaptive quizzes that students perceive to increase their motivation and engagement, while also leading to improved learning outcomes. As such, we suggest that further research is needed in order to align student motivation and engagement with student learning outcomes using adaptive release testing technologies.
We acknowledge that the small sample size and small effect sizes are limitations of the study and warrant further investigation. Although students were informed that using the adaptive quizzes might benefit their learning, they were not otherwise provided with other sources of motivation to undertake the quizzes. Future research will explore why the majority of students chose not to use the quizzes, and whether those who did use the quizzes improved their performance compared to those who did not.
Armani, J. (2005). VIDET: A visual authoring tool for adaptive websites tailored to non-programmer teachers. Journal of Educational Technology & Society, 8(3), 36–52.
Barla, M., Bieliková, M., Ezzeddinne, A. B., Kramár, T., Šimko, M., & Vozár, O. (2010). On the impact of adaptive test question selection for learning efficiency. Computers & Education, 55(2), 846–857.
Becker-Blease, K. A., & Bostwick, K. C. (2016). Adaptive quizzing in introductory psychology: Evidence of limited effectiveness. Scholarship of Teaching and Learning in Psychology, 2(1), 75–86.
Bjork, R. A., Dunlosky, J., & Kornell, N. (2013). Self-regulated learning: Beliefs, techniques, and illusions. Annual Review of Psychology, 64, 417–444.
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa.
Byrne, M., & Flood, B. (2005). A study of accounting students' motives, expectations and preparedness for higher education. Journal of Further and Higher Education, 29(2), 111–124.
Carpenter, S. K., Cepeda, N. J., Rohrer, D., Kang, S. H., & Pashler, H. (2012). Using spacing to enhance diverse forms of learning: Review of recent research and implications for instruction. Educational Psychology Review, 24(3), 369–378.
Clark, R. C. (2010). Evidence-based training methods: A guide for training professionals. Alexandria, VA: ASTD Press.
Cook, C., Heath, F., & Thompson, R. L. (2000). A meta-analysis of response rates in web- or internet-based surveys. Educational and Psychological Measurement, 60(6), 821–836.
Department of Education and Training (2016). Higher education attrition, success and retention rates.
Dowell, D. A., & Neal, J. A. (1982). A selective review of the validity of student ratings of teachings. The Journal of Higher Education, 53(1), 51–62.
Dunlosky, J. (2013). Strengthening the student toolbox: Study strategies to boost learning. American Educator, 37(3), 12–21.
Förster, M., Weiser, C., & Maur, A. (2018). How feedback provided by voluntary electronic quizzes affects learning outcomes of university students in large classes. Computers & Education, 121, 100–114. https://doi.org/10.1016/j.compedu.2018.02.012.
Galbraith, C., Merrill, G., & Kline, D. (2012). Are student evaluations of teaching effectiveness valid for measuring student learning outcomes in business related classes? A neural network and bayesian analyses. Research in Higher Education, 53(3), 353–374.
Georgouli, K. (2011, 30 September - 2 October). Virtual learning environments -An overview. Paper presented at the 15th Panhellenic Conference on Informatics, Kastoria, Greece.
Griff, E. R., & Matter, S. F. (2013). Evaluation of an adaptive online learning system. British Journal of Educational Technology, 44(1), 170–176.
Henderson, M., Selwyn, N., & Aston, R. (2017). What works and why? Student perceptions of ‘useful’ digital technology in university teaching and learning. Studies in Higher Education, 42(8), 1567–1579.
House, S. K., Sweet, S. L., & Vickers, C. (2016). Students' perceptions and satisfaction with adaptive quizzing. AURCO Journal, 22(Spring), 104–110.
Johnson, G. M. (2015). On-campus and fully-online university students: Comparing demographics, digital technology use and learning characteristics. Journal of University Teaching and Learning Practice, 12(1).
Jonsdottir, A. H., Jakobsdottir, A., & Stefansson, G. (2015). Development and use of an adaptive learning environment to research online study behaviour. Educational Technology & Society, 18(1), 132–144.
Karpicke, J. D., & Bauernschmidt, A. (2011). Spaced retrieval: Absolute spacing enhances learning regardless of relative spacing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37(5), 1250–1257.
Kuh, G. D., Cruce, T. M., Shoup, R., Kinzie, J., & Gonyea, R. M. (2008). Unmasking the effects of student engagement on first-year college grades and persistence. The Journal of Higher Education, 79(5), 540–563.
Liu, M., Kang, J., Zou, W., Lee, H., Pan, Z., & Corliss, S. (2017). Using data to understand how to better design adaptive learning. Technology, Knowledge and Learning, 22(3), 271–298.
Liu, M., McKelroy, E., Corliss, S. B., & Carrigan, J. (2017). Investigating the effect of an adaptive learning intervention on students’ learning. Educational Technology Research and Development, 65(6), 1605–1625.
Marsh, E. J., Roediger, H. L., III, Bjork, R. A., & Bjork, E. L. (2007). The memorial consequences of multiple-choice testing. Psychonomic Bulletin & Review, 14(2), 194–199. https://doi.org/10.3758/BF03194051.
Murray, M. C., & Pérez, J. (2015). Informing and performing: A study comparing adaptive learning to traditional learning. Informing Science: the International Journal of an Emerging Transdiscipline, 18, 111–125.
O'Donnell, E., Lawless, S., Sharp, M., & Wade, V. (2015). A review of personalised e-learning: Towards supporting learner diversity. International Journal of Distance Education Technologies, 13(1), 22–47. https://doi.org/10.4018/ijdet.2015010102.
Paulsen, M. B., & Gentry, J. A. (1995). Motivation, learning strategies, and academic performance: A study of the college finance classroom. Financial Practice & Education, 5(1), 78–89.
Phelan, J., & Phelan, J. (2011). Improving biology mastery through online adaptive quizzing: An efficacy study. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, LA.
Pintrich, P. R., Smith, D. A., Garcia, T., & McKeachie, W. J. (1993). Reliability and predictive validity of the motivated strategies for learning questionnaire (MSLQ). Educational and Psychological Measurement, 53(3), 801–813.
Quinn, F. (2011). Learning in first-year biology: Approaches of distance and on-campus students. Research in Science Education, 41(1), 99–121.
Quinn, F., & Stein, S. (2013). Relationships between learning approaches and outcomes of students studying a first-year biology topic on-campus and by distance. Higher Education Research & Development, 32(4), 617–631.
Roediger, H. L., & Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17(3), 249–255.
Rossano, V., Pesare, E., & Roselli, T. (2017). Are computer adaptive tests suitable for assessment in MOOCs? Journal of e-Learning and Knowledge Society, 13(3), 71–81.
Selwyn, N. (2015). Minding our language: Why education and technology is full of bullshit … and what might be done about it. Learning, Media and Technology, 437–443. https://doi.org/10.1080/17439884.2015.1012523.
Selwyn, N. (2016). Digital inclusion: Can we transform education through technology? Paper presented at the Encuentros conference, Barcelona, Spain.
Simkins, S. P., & Maier, M. H. (2010). Just-in-time teaching: Across the disciplines, across the academy. Sterling, VA: Stylus Publishing, LLC.
Sitzmann, T., Brown, K. G., Casper, W. J., Ely, K., & Zimmerman, R. D. (2008). A review and meta-analysis of the nomological network of trainee reactions. Journal of Applied Psychology, 93(2), 280–295.
Somyürek, S. (2015). The new trends in adaptive educational hypermedia systems. The International Review of Research in Open and Distributed Learning, 16(1), 221–241.
Spooren, P., Brockx, B., & Mortelmans, D. (2013). On the validity of student evaluation of teaching the state of the art. Review of Educational Research, 83(4), 598–642.
Stage, F. K., & Williams, P. D. (1990). Students' motivation and changes in motivation during the first year of college. Journal of College Student Development, 31(6), 516–522.
Stark-Wroblewski, K., Ahlering, R. F., & Brill, F. M. (2007). Toward a more comprehensive approach to evaluating teaching effectiveness: Supplementing student evaluations of teaching with pre–post learning measures. Assessment & Evaluation in Higher Education, 32(4), 403–415.
JISC e-learning team (2010). Effective assessment in a digital age. Bristol: Joint Information Systems Committee (JISC), University of Bristol.
Trowler, V. (2010). Student engagement literature review. The Higher Education Academy, 11, 1–15.
Van der Kleij, F. M., Feskens, R. C., & Eggen, T. J. (2015). Effects of feedback in a computer-based learning environment on students’ learning outcomes: A meta-analysis. Review of Educational Research, 85(4), 475–511.
Van Gog, T., & Sweller, J. (2015). Not new, but nearly forgotten: The testing effect decreases or even disappears as the complexity of learning materials increases. Educational Psychology Review, 27(2), 247–264. https://doi.org/10.1007/s10648-015-9310-x.
Zepke, N. (2015). Student engagement research: Thinking beyond the mainstream. Higher Education Research & Development, 34(6), 1311–1323.
The authors would like to thank Ms. Belinda Davey and Ms. Heather Russell for their work as learning designers developing the adaptive quizzes. The authors also thank Dr. Katie Richardson for support in developing the paper in response to reviewers’ comments, and Ms. Jenny Trevitt for assistance with the review of the literature.
Availability of data and materials
Please contact the corresponding author.
Ethics approval and consent to participate
All data used in this study was de-identified to ensure the confidential and anonymous treatment of participants’ data. The project was approved for human research ethics by Swinburne’s Human Research Ethics Committee (SUHREC) and follows the Australian Government’s National Statement on Ethical Conduct in Human Research (2007). Any conflicts of interest were minimal and resolved by employing researchers who were not involved in the student assessments. To access the de-identified data used in this study, please email the corresponding author and provide a statement regarding the purposes of your request.
The authors declare that they have no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
I felt well prepared for the assessments in this unit.
Ratio analysis report
I found the assessment guidelines clear and unambiguous.
Did any of the following encourage you to increase your discussion board activity in this unit?
the Student Toolbox
the Blackboard resources
the adaptive quizzes leading up to the marked assessments
the marked assessments
Did any of the following motivate you to complete the assessment in this unit?
the Student Toolbox
the Blackboard resources
the adaptive quizzes leading up to the marked assessment
Had you heard of Adaptive Quizzes before completing the assessment in this Unit?
I have enjoyed completing the Adaptive Quizzes in this unit.
It was good getting feedback straight away.
The Adaptive Quiz results helped me to identify exactly where the gaps in my knowledge were.
Regular feedback from my Adaptive Quiz performance motivated me to keep trying.
I disliked the limited responses that Adaptive Quizzes allow.
I enjoyed the challenge posed by the increasing difficulty level of the Adaptive Quizzes.
Seeing how the level of the Adaptive Quizzes changed gave me a good understanding of the level I was at.
I tended to focus only on the information I would need for each Adaptive Quiz.
Adaptive Quizzes helped me to focus on the important information.
Adaptive Quizzes were a good opportunity each week to practice using my knowledge.
Seeing my performance in an Adaptive Quiz assisted me with self-assessment of my work.
Overall, the Adaptive Quizzes were helpful for my learning.
What features/aspects of Adaptive Quizzes have been most useful to you and why?
What features/aspects have been least useful to you and why?
Would you like to experience Adaptive Quizzes again in your study?
Do you have any general feedback?