
The contributions of mixed insights to advancing technology-enhanced formative assessments within higher education learning environments: an illustrative example

Abstract

Technology-enhanced formative assessment (TEFA) represents strategies for improving student learning and motivation, yet researchers point to methodological issues underpinning claims of effectiveness. This mixed methods paper, using an empirical example, illustrates the novel contributions of mixed insights in informing the implementation of two TEFA classroom strategies. An embedded mixed methods case study design bounded by an 8-week undergraduate course across three terms was used to answer the following research question: How can a mixed methods approach examining the influences on and effects of involvement in TEFA offset the weaknesses inherent in either qualitative or quantitative data and guide collection, analysis, and integration? A qualitative dominant crossover mixed analysis strategy generated four novel mixed insights from the integration of 175 classroom-based observations, 26 instructional team meeting summaries, and 274 end-of-course student questionnaires. These are represented in a case summary and joint displays: influences on involvement, effects on learning, accessibility of feedback, and impacts on instruction. The mixed insights have important implications for theory, research, and practice related to TEFA strategies and highlight the contribution that mixed methods approaches can have in advancing educational technology in higher education.

Introduction

There is growing evidence of the usefulness and, indeed, unrealized potential of mixed methods research for tackling complex problems (Mertens, 2015; Poth, in press). Complex research problems address societal issues involving multiple, interacting influences with no known solution and no established methods or expertise for studying the interrelated contexts in which they take place. The increased use of mixed methods research in such projects around the world, funded by diverse sectors, is captured in studies measuring prevalence across a variety of disciplines and journals (Molina-Azorin & Fetters, 2016). The motivations for its use vary greatly, yet the rationale underlying my working definition of mixed methods research emphasizes its purpose: generating previously inaccessible insights by integrating qualitative and quantitative data. Mixed methods research requires the integration of both quantitative and qualitative data and assumes that their collective contribution mitigates the inherent weaknesses of either type of data alone. There exists an untapped potential for mixed methods research to contribute to advancements within the field of educational technology in higher education, and this potential is especially evident under conditions of research complexity. Thus, my experiences as a mixed methods researcher, educational technology adopter, higher education instructor, and classroom assessment expert provide the impetus for my desire to contribute to the discourse around the use of mixed methods research within the field of educational technology in higher education. I do so through this paper focused on technology-enhanced formative assessments (TEFA).

Emergence of technology-enhanced formative assessments

Newer theories recognize the learning progression as inherently complex — learning is situated in and influenced by the dynamic contexts in which it takes place, with outcomes that are challenging to predict (Wiliam, 2016). We are only beginning to recognize the prominence of features of the learning environment that explicitly check for understanding and then support learning through adjustments in instruction. General agreement exists that formative assessment represents such classroom strategies, whereby instructors, students, and peers elicit, interpret, and apply evidence of student learning for the purpose of supporting learning and adjusting instruction (e.g., Shepard, Hammerness, Darling-Hammond, Rust, Snowden, et al., 2005; Wiliam, 2016). Developing ways for students and instructors to access this information through TEFA is viewed as a necessary and practical means of improving the quality of higher education teaching and learning environments (Maier, Wolf, & Randler, 2016).

The inclusion of technologies in higher education teaching and learning environments has been revolutionizing the ways in which instructors and students interact. Indeed, in online environments there has been debate about the differences in effectiveness between synchronous and asynchronous interactions. On-demand access has led to the creation of masses of digitized lectures and learning activities and a marked increase in research assessing student experience and impacts on learning; for example, alternative means of assessment have been explored in massive open online courses (MOOCs) (Sánchez-Vera & Prendes-Espinosa, 2015). Less is known about how technologies are shaping the experiences of instructors and students in real-time classroom environments.

Among the key TEFA strategies included in face-to-face classroom implementations are audience response systems (ARS) and practice quizzes, because research has established both the feasibility of their implementation and their impact on student experience within higher education contexts (e.g., Maier et al., 2016; Wieman, 2010). The ARS relies on a remote response device used in class, whereas the two practice quizzes are offered through the online class learning management system (LMS); in the case study presented in this paper, the university has adopted Moodle as the campus-wide LMS. The aim of the practice quizzes was to provide students with examples of the type of items, the scope of content, and the level of cognition required for their midterm and final exams. ARS and practice quizzes are unique in their capacity to provide students with immediate feedback based on their anonymous answers to closed questions and in showing them a graph of classmates’ answers afterward. The individual and aggregate class results can also be accessed by the instructor, who can then vary or remediate instruction if necessary. In so doing, the TEFA strategies adhered to principles of formative assessment: students are actively involved, students’ learning benefits, access to feedback is timely, and adjustments to instruction are evident (Wiliam, 2016).
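To make the feedback mechanism concrete, the following minimal sketch tallies anonymous responses to a single closed question into the kind of class-level distribution an ARS displays to students and instructors; the question, options, and responses are hypothetical placeholders rather than data or software from the study.

```python
# Minimal sketch (hypothetical data): aggregate anonymous responses to a
# closed ARS-style question into the class-level distribution that students
# and the instructor see immediately after voting closes.
from collections import Counter

question = "Which statement best describes formative assessment?"
options = ["A", "B", "C", "D"]
responses = ["B", "B", "A", "B", "C", "B", "D", "B", "A", "B"]  # anonymous votes

tally = Counter(responses)
total = len(responses)

print(question)
for option in options:
    count = tally.get(option, 0)
    bar = "#" * count  # simple text 'graph' of classmates' answers
    print(f"  {option}: {bar:<10} {count / total:.0%}")
```

In the study itself, this aggregation was handled by the ARS platform and the Moodle quiz engine rather than by custom code.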

Formative assessment represents effective strategies for improving student motivation and learning, yet key methodological issues exist with the evidence that underpins claims of effectiveness (e.g., Bennett, 2011). First, there are difficulties in establishing the causal effect of formative assessments on student achievement, compounded by an expanded research focus; different meta-analyses have shown varying effect sizes that are often attributed to learning content, feedback procedures, or learner characteristics (Black & Wiliam, 1998; Kingston & Nash, 2011). Calls for research exploring the holistic interaction among learners and the learning environment afforded by TEFA, and its effects, remain unanswered (Kay & LeSage, 2009). Second, the validity and reliability of many studies have been limited because they measure only students’ perceptions of engagement and learning using one or two quantitative items (Offerdahl & Tomanek, 2011) or because they rely on anecdotal data, which draws strong criticism (Ha & Finkelstein, 2013). A more rigorous approach is necessary to capture the educational benefits and challenges reflective of all involved in TEFA. The purpose of the research reported in this paper was to generate a comprehensive understanding of the influences on and effects of involvement in two TEFA strategies within a higher education learning environment through the integration of multiple perspectives. Specifically, related to their involvement in the two TEFA strategies across three terms, students were asked directly, classroom interactions were observed, and instructor viewpoints were sought.

Tapping the potential of mixed methods research

Mixed methods research has emerged as an approach for generating insights from the integration of qualitative and quantitative data that otherwise would have been inaccessible. Researchers point to mixed methods research as offering more comprehensive evidence because it mitigates, to a large extent, the limitations of relying on either quantitative or qualitative data sources alone (Bryman, 2006). The rationale for the use of mixed methods research in the present study is that it offsets strengths and weaknesses (Plano Clark & Ivankova, 2016). This is especially noteworthy when working under research conditions such as the present study’s: addressing complex problems that require new ways of study beyond our existing mixed methods research practices. “Business as usual will not lead to effective use of research to address wicked [complex] problems” (Mertens, 2015, p. 5). This call to action for the community of mixed methods researchers was recently echoed in an article inspired by the Mixed Methods International Research Association task force report, which describes the future of mixed methods research as kaleidoscopic, with its “seemingly unpredictable patterns full of rich possibilities for diversity and potential to provide opportunities to see things that have not yet been seen” (Mertens, Bazeley, Bowleg, Fielding, Maxwell, et al., 2016, p. 222).

As mixed methods researchers, our responses to complex problems pose dilemmas and offer opportunities. All too often, our responses involve attempts to reduce, control, or simply ignore the effects of complexity rather than considering new approaches, including adopting new research designs and integration strategies. The time has come for mixed methods research to guide work within the field of educational technology: to address some of the dilemmas and harness some of the opportunities afforded by mixed methods research under conditions of complexity within higher education teaching and learning environments. Thus, this mixed methods study offers an innovative perspective from which to examine the influences on and effects of two TEFA strategies through the integration of both instructor and student perspectives within an undergraduate course. The present paper illustrates, using an empirical example, the contributions a mixed methods research approach can have in generating mixed insights on two TEFA strategies: the implementation of an audience response system and practice quizzes.

Methods

Yin (2014) identifies a case study as useful for documenting implementation of an intervention — in this case, an instructional intervention embedding opportunities for engaging with TEFA strategies — within a real-life course context, making it the most appropriate methodology for my research. According to Creswell and Plano Clark (2018), the mixed methods case study design involves embedding both qualitative and quantitative data into a case (Fig. 1). In the present study, quantitative data were embedded within a qualitative case study bounded by the duration of the 8-week course repeated over three terms and by those involved in the course. A unique aspect of the research was the concurrent timing of the multiple sources of data collected from convenience samples representing differing perspectives. This sampling strategy was necessary for this mixed methods study, rather than the matched samples often involved in embedded case study designs. The design helped to capture the perspectives of instructional team members and what was documented related to the influences and effects of TEFA strategies on students. As well, there are two points of interface between the qualitative and quantitative data: first within the teaching terms for each TEFA strategy and then across terms and TEFA strategies.

Fig. 1 Mixed methods case study design

Ethical considerations

The study design and procedures were reviewed and approved by the university’s institutional review board and the researcher undertook all appropriate measures to ensure informed consent, protect participant confidentiality, and mitigate power issues. An external research assistant recruited participants. Informed consent was indicated by signing consent forms (instructional team) or by the overt action of choosing to complete the online questionnaire (students). To further mitigate potential power issues between instructors and graduate students under their supervision as teaching assistants as well as between instructional team members and the undergraduate students enrolled in their classes, all data collection activities were embedded within the course activities and data was collected anonymously. The researchers did not have access to the data until submission of the course grades for undergraduate students and performance evaluations for instructors and teaching assistants.

Study context

The selection of ARS and practice quizzes for this study was pragmatic, as they were the only TEFA strategies implemented across the three terms. The undergraduate course involved large, face-to-face lecture classes. An instructional team approach had initially been adopted within the course in 2010 as a collaborative milieu wherein educational responsibility for the organization, instruction, and assessment was shared among an instructional team, often including an instructor and graduate teaching assistants (GTAs) (Zhou, Kim, & Kerekes, 2011). The author played a dual role as researcher and instructor in this research. A full description of the aims and activities involved in the team-instruction approach used in the current study is provided elsewhere (Yapp & Poth, 2013). The team approach meant that the instructional team members were involved in all aspects of the course, and it fostered mutual respect for the individual contributions of team members. The implementation of the two TEFA strategies aligned well with the instructional team’s focus on trying new ways of enhancing student access to immediate feedback and integrating technology. At the beginning of the study, the instructional team had been working together for 2 years and had already been monitoring and integrating student feedback into their instructional decisions. Among the roles and responsibilities for team members were:

  • Planning; for example, preparing course materials and embedding feedback mechanisms;

  • Delivery; for example, collecting data and interacting with students; and

  • Development; for example, participating in weekly team meetings and leading review of data findings.

Each member was tasked with individual and shared responsibilities and allocated time to contribute their ideas and perspectives with the understanding that learning was a key aspect of the work. The team philosophy was based on the idea that, together, the team could attain more than each individual could separately to respond to emerging student needs.

Participants

The required undergraduate course is offered each term through the Faculty of Education at a research-intensive western Canadian university. All members of the instructional team and students enrolled across three terms (Fall 2013, Winter 2014, and Fall 2014) were invited to participate. In all, the study involved a convenience sample of 11 members of the instructional team (one coordinator, three instructors, and seven GTAs) and 274 students who completed questionnaires (45% response rate). The overall student sample reported the following demographic characteristics (see Table 1 for each term): 73.6% female; 65% completing an elementary-teacher program stream (as opposed to the secondary-teacher program stream); and 77.1% enrolled in the 4-year Bachelor of Education program (as opposed to the 2-year post-graduate program). Every class and team meeting that took place during the time bounded by the case study was also included.

Table 1 Student demographics

Data sources and collection procedures

The three data sources and methods included 26 instructional team meeting summaries, 274 student questionnaires, and 175 instructional class observations.

Instructional team meeting summaries

Throughout each term, the instructional team members met weekly to discuss and make decisions related to course planning, delivery, and adjustments. A protocol was developed to guide note-taking during the meeting for the purpose of documenting team meeting interactions, reports of students’ involvement in TEFA strategies and emerging issues, and meeting outcomes from decisions. These meeting notes were primarily qualitative in nature.

Student questionnaires

At the end of each term, a web-based questionnaire was administered to students anonymously using an online survey delivery system: SurveyMonkey. The five-part questionnaire (measuring participation in TEFA, examining TEFA uses, documenting participant demographics, assessing course experiences, and determining assessment preferences) was developed by the research team with assistance from a measurement expert. It included both quantitative and qualitative questions for a total of 35 items with comments available for the majority.

Class observations

Throughout each term, a class observation guided by a protocol was completed by a GTA who attended each lecture. The observations served the dual purposes of documenting emergent issues and class interactions focused on student involvement in TEFA strategies, and of monitoring course delivery as a means of assessing instructional fidelity across sections of the same course. The protocol consisted of 14 dichotomous (yes/no) items (e.g., Were opportunities provided for students to use the audience response system?) and two open-ended questions about emergent issues and questions students asked during class. Protocol training was provided, and inter-rater reliability was assessed among multiple GTAs during the first two classes until 90% agreement was reached.
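The paper reports a 90% reliability criterion without specifying the index used; a minimal sketch of one common approach, simple percent agreement between two observers across the 14 dichotomous protocol items, is shown below with hypothetical ratings.

```python
# Minimal sketch (hypothetical ratings): percent agreement between two GTA
# observers on the 14 dichotomous (yes/no) observation protocol items.
# The study's actual reliability index is not specified in the text.

def percent_agreement(rater_a, rater_b):
    """Proportion of items on which two raters recorded the same response."""
    assert len(rater_a) == len(rater_b)
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

observer_1 = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1]  # 1 = yes, 0 = no
observer_2 = [1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1]

print(f"Agreement: {percent_agreement(observer_1, observer_2):.0%}")
# Training would continue until agreement reaches at least 90%.
```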

Data analysis and integration procedures

The data analysis and integration procedures involved the separate analysis of each data type (i.e., qualitative and quantitative data) within each source and then across data sources.

Qualitative data analysis

The qualitative data analysis was undertaken using computer-assisted qualitative data analysis software: NVivo©. This involved the thematic analysis of comments generated by the student questionnaires, classroom observations, and instructional team meetings within each term. The iterative process followed the activities described by the data analysis spiral, supplemented by memoing (Creswell & Poth, 2017). An independent researcher not involved in the study and the study researcher separately highlighted the text into large meaning units and made initial notes. This helped build consensus and enhance reliability among the coders by establishing descriptions of initial categories and codes that were eventually applied across all qualitative data sources. After applying the codes, the coders met to discuss and refine the codes until inter-rater reliability reached 90%. They then identified nine thematic codes and finalized the codebook. The codebook consisted of five columns: category, code, definition (i.e., what the code means), anti-definition (i.e., what the code does not mean), and examples of verbatim quotes representing the code.
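To make the structure of the codebook concrete, the following sketch represents one codebook entry as a typed record with the five columns described above; the category, code, and quote shown are hypothetical illustrations, not the study’s actual codes.

```python
# Minimal sketch of the five-column codebook structure; the entry shown is a
# hypothetical illustration, not one of the study's nine thematic codes.
from dataclasses import dataclass

@dataclass
class CodebookEntry:
    category: str          # broader grouping the code belongs to
    code: str              # short label applied to highlighted meaning units
    definition: str        # what the code means
    anti_definition: str   # what the code does not mean
    example_quote: str     # verbatim quote representing the code

codebook = [
    CodebookEntry(
        category="Influences on involvement",
        code="participation_barrier",
        definition="Comments attributing reduced participation to an external constraint",
        anti_definition="General dissatisfaction with the course or instructor",
        example_quote="The cost of the remote kept me from buying one.",
    ),
]

for entry in codebook:
    print(entry.code, "->", entry.definition)
```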

Quantitative data analysis

The separate quantitative data analysis involved eliminating missing data through listwise deletion of questionnaire and classroom observation items and then generating descriptive statistics (mean, standard deviation, and frequency) using SPSS Statistics 20 software within each data source and term. Differences across terms related to rates of participation and demographics were subsequently explored with a Kruskal-Wallis test. Post-hoc Mann–Whitney U analyses were conducted following a significant Kruskal-Wallis test to identify where the differences lay.
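The across-term comparison procedure can be illustrated with a brief sketch; the ratings and term groupings below are hypothetical placeholders (the study’s analyses were run in SPSS, not Python), but the test sequence, an omnibus Kruskal-Wallis test followed by post-hoc Mann–Whitney U tests, mirrors the procedure described above.

```python
# Hypothetical sketch of the across-term comparisons: a Kruskal-Wallis test
# across the three terms, followed by post-hoc Mann-Whitney U tests only when
# the omnibus test is significant. Data are illustrative, not the study's.
from itertools import combinations
from scipy import stats

# Example: item-level ratings grouped by term (e.g., perceived usefulness).
ratings_by_term = {
    "Fall 2013":   [4, 5, 3, 4, 5, 4, 4],
    "Winter 2014": [3, 3, 4, 2, 3, 4, 3],
    "Fall 2014":   [5, 4, 4, 5, 4, 5, 4],
}

h_stat, p_value = stats.kruskal(*ratings_by_term.values())
print(f"Kruskal-Wallis: H = {h_stat:.3f}, p = {p_value:.3f}")

if p_value < 0.05:
    # Pairwise post-hoc comparisons to locate where the difference lies.
    for (term_a, a), (term_b, b) in combinations(ratings_by_term.items(), 2):
        u_stat, u_p = stats.mannwhitneyu(a, b, alternative="two-sided")
        print(f"{term_a} vs {term_b}: U = {u_stat:.1f}, p = {u_p:.3f}")
```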

Integration

Qualitative themes and quantitative results were compared via a qualitative dominant crossover mixed analysis (Onwuegbuzie & Hitchcock, 2015). To do this, case summaries were generated to represent the areas of convergence and divergence highlighted by this basic type of qualitative dominant crossover mixed analysis, in which the nine qualitative thematic categories were used as the organizational framework onto which the quantitative findings were integrated. The integration across three instructional terms and two TEFA strategies revealed four mixed insights. Several strategies served to enhance reliability in the analysis procedures and generate multiple sources of validity evidence for the mixed insights. Key among these efforts were the use of multiple qualitative coders; visual plotting of the generation of the mixed insights from the qualitative dominant crossover analysis strategy (Fig. 2); and provision of integrated findings through the case summary and joint displays (Tables 2 and 3).
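As a concrete illustration of how a joint display can be assembled, the sketch below arranges quantitative results and qualitative evidence under thematic rows so that convergence and divergence can be read across each row; the column names and cell contents are illustrative paraphrases of the case summary rather than the study’s actual displays (Tables 2 and 3).

```python
# Hypothetical sketch of a joint display: qualitative themes organize the rows,
# with quantitative results, qualitative evidence, and the resulting meta-
# inference aligned in columns. Contents are illustrative placeholders.
import pandas as pd

joint_display = pd.DataFrame(
    [
        {
            "theme": "Influences on involvement",
            "quantitative_result": "ARS participation 72.6% (Fall 2013)",
            "qualitative_evidence": "Cost of the remote cited as a barrier",
            "meta_inference": "Participation not dependent on purchasing the remote",
        },
        {
            "theme": "Accessibility of feedback",
            "quantitative_result": "Immediate feedback rated useful by >90%",
            "qualitative_evidence": "Requests for access to correct answers",
            "meta_inference": "Usefulness tied to specificity of feedback",
        },
    ]
).set_index("theme")

print(joint_display.to_string())
```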

Fig. 2 Study’s data collection, analysis, and integration

Table 2 Audience response systems joint display

Summary of the case

At the pre-term instructional team meeting for the Fall 2013 term, evidence of enthusiasm was apparent in the summary description that all tasks were complete and “course materials and activities were ready to go.” The ARS implementation seemed to be going well during the first few weeks: classroom observations captured an increasing rate of student involvement and interest in the ARS activities, and several team members expressed satisfaction with the overall regularity of student participation they had observed in the classroom. The team had even observed student uses of the information gleaned from the ARS activities and saw the instructors consider how this information could inform instructional changes. For example, the third instructional team meeting summary captured discussions about students asking to review particular content areas that they had found confusing, which led to the instructional team adding examples to the subsequent lecture notes. By the midpoint of the term, classroom observations and team members reported that participation had reached “a plateau”: participation was consistently estimated to include the majority of students attending class, yet no visible increases in participation rates were observed across more than two classes. Interestingly, concern with student participation became a consistent subject during subsequent instructional team meetings because GTAs and instructors both observed a slight decrease in the frequency of participation in the latter half of the term.

The initial very high rate of participation in the practice quizzes (87% completed both) reported by students at the end of the Fall 2013 term was familiar to members of the instructional team. This is because the team had advertised this activity as being helpful for exam preparation, and students had already confirmed their participation verbally in class. Whereas some comments were general, stating they “ … liked the practice exams … ,” others explained the students’ use prior to a summative exam, saying, “exam practice [quizzes] [are] excellent study tool[s]” and “It gave me an idea of what types of questions would be asked.” Unanticipated by the instructional team were the anecdotal comments describing the single attempt allowed for each practice quiz as limiting its benefit.

At the post-term meeting for the Fall 2013 term, team members seemed to be surprised by the questionnaire results: students’ high participation rate in audience response technology activities (72.6%) was viewed by the instructional team as encouraging, yet the lower rate of purchase of the stand-alone remote (61.5%) was unexpected. It became clear that students most often cited cost ($40) as a barrier to purchasing the remote. However, this did not seem to impact their participation in the ARS activities. Indeed, several students reported engaging in the ARS activities regardless of whether they were individually able to contribute to the group response using the remote; one student noted, “I learned as much from watching the [ARS] questions as if I had [been] using one.” Also noted in the summary was a lengthy discussion about the contrast between what could be considered the high rate of students’ self-reported participation at the end of the first term and the decreasing pattern of participation observed by instructional team members, as well as discussion of how cost might be addressed.

The team members initially seemed skeptical that, although the TEFA strategies were viewed as useful for learning, the learning effect of the ARS was not dependent on participation with the remote. The uncertainty was evident in the meeting summaries, where the GTAs voiced a concern that students were not buying the remote: “if they don’t buy it how can they learn from it?” The questionnaire results revealed that the vast majority of students during the Fall 2013 term reported using the TEFA strategies foremost for an understanding check of course content (94.4% for ARS and 94.0% for practice quizzes) and for weakness identification for remediation (92.2% for ARS and 96.6% for practice quizzes). By comparison, fewer respondents, yet still more than 80%, agreed that the TEFA strategies were useful for providing peer comparisons, authentic preparation, and engaging interactions. Further, students attributed their enjoyment of ARS activities to the ability to respond anonymously and to guide their personal use of the information (94%); one student expressed that it was a “fun way to check my own understanding.” Similarly, from the instructional team perspective, the ARS activities were thought to enable student access to data related to individual understandings of course content that could then be compared with their peers, if desired. Team meeting summaries captured the persistent concern that the cumulative effect of ever-decreasing use during the term had resulted in fewer students using the ARS remote over time, hampering students’ ability to compare with classmates. Thus, two members were tasked with seeking less expensive alternatives to the remote.

The questionnaire findings also indicated that the timing of students’ completion of the practice quizzes influenced their perceptions of the quizzes’ usefulness in supporting their own learning (96.6%). Whereas the majority of students reported using the practice quizzes at the end of studying as a final check of their understanding of the material, another group of students used the practice quizzes before beginning to study as a means of focusing their efforts on areas of weakness. Clearly, students valued the on-demand aspects of the practice quizzes, saying, “by doing it more than once, I can see how I am improving”. Specifically, students attributed a perception of limited usefulness to not being able to complete the quiz multiple times, evidenced by a lower rate of participation in the end-of-term practice quiz compared with the one offered at the middle of the term (88.9% and 94%, respectively). The students’ desire for greater access to the practice quizzes was echoed across the team meeting summaries. Unanimous agreement among the instructional team about the lack of pedagogical reasons to limit access to the practice quizzes led to allowing multiple attempts during the Winter 2014 term.

At the pre-term instructional team meeting for the Winter 2014 term, the GTAs reported that, serendipitously, a more cost-effective, phone-based application using the same system as the remote was now available for a quarter of the cost, just $10. One team member had examined its viability, including ease of access and compatibility for use alongside the remote, and consensus was reached to offer the two options in tandem. The ARS implementation across the two platforms (i.e., stand-alone remote and phone-based application) seemed to go well for the first 2 weeks of class; classroom observations indicated the majority of students attending class were participating. However, following the third class, the classroom observations and team members reported a marked decrease in ARS participation, and one of the GTAs was assigned to investigate further. Interestingly, this timing was found to coincide with the end of the free trial of the phone-based application. A further trend noted with some consistency across the classroom observations was an increased interest in ARS activities as practice for exam-type items rather than as an understanding check. This led to discussions at subsequent team meetings about why this might be the case, and team members were tasked with talking with students informally. The summary of the following meeting indicated that students reported a primary interest in the ARS activities as practice for the exam: “I like knowing what the exam questions might look like”.

The initial high rate of participation in the practice quizzes (84% completed both) was reported by students at the end of the Winter 2014 term. These students described the practice quizzes as generally useful for exam preparation and specifically for identifying areas of weakness, yet raised concerns about the lack of access to the correct answers. Representative comments include “I have yet to do the final exam practice quiz, however I do intend to do it prior to the final exam. I just want to review my notes and study before I take it to see what I need to look over again” and, the practice exams “ … guide studying and evaluate which sections of the course [I] needed to revisit”. At the same time, additional LMS features were being introduced that would provide students access to more information than whether the item had been answered correctly or not. One team member was tasked with examining potential applications of those practice quiz features.

At the post-term meeting for the Winter 2014 term, the team members reviewed the questionnaire results, which initially seemed puzzling in relation to participation and the usefulness of the TEFA strategies. Together, the purchase rates for the remote and the app during the second term remained similar to the first term (61%), yet there was a small decline (five percentage points) in overall participation, to 67.9%. A reasonable explanation advanced by the instructional team for the low rate of purchase of the phone-based application by students (1.2%) was that some students may be used to accessing apps for free or at modest cost and thus be unwilling to pay $10 for the phone-based app. Another difference was the extent of agreement related to the usefulness of the TEFA strategies, particularly for the ARS.

An understanding check of course content remained the most useful function for both strategies (90% for ARS and 92.6% for practice quizzes), followed by authentic preparation (83.3% for ARS and 92.6% for practice quizzes). Indeed, the ability to receive personalized feedback immediately was reported as useful by the vast majority of students (90.0% for ARS and 92.6% for practice quizzes). There was less agreement about the usefulness of the strategies for weakness identification for remediation (92.6% for the practice quizzes compared with 76.7% for ARS). By comparison, fewer respondents, yet still more than 60% for ARS and 80% for practice quizzes, agreed that the TEFA strategies were useful for providing peer comparisons and engaging interactions. These results led to team discussions about how the ARS activities could be made more engaging. One member took on the task of developing more frequent opportunities for ARS use in subsequent lectures.

A new LMS feature allowed team members to give students access to the total number of correct responses for each attempt during Winter 2014, and these effects were discussed at the post-term team meeting. Similar to the preceding terms, students consistently reported a high rate of participation in the practice quizzes for exam preparation, either at the beginning of the studying process or to confirm exam readiness. An unanticipated outcome was that these students also desired access to the correct answers, saying, “[t]he practice quizzes would be more beneficial if they had answers.” The team meeting summaries documented the struggle the instructional team experienced while attempting to resolve the dilemma: while students might benefit from immediate access to the answers, they might equally benefit from having to find the answer themselves. Ultimately, one team member was tasked with creating a feature allowing students to review their incorrect items using prompts that would progressively unveil the correct answer.

At the pre-term instructional team meeting for the Fall 2014 term, further discussions ensued, which led the team to offer both ARS platforms and open, on-demand access to the practice quizzes during this term. Through the classroom observations, the team members noted consistent participation in the ARS and practice quizzes and did not note any problems with the compatibility of the platforms or with quiz access. The review of questionnaire results at the end of the term showed a small increase in overall participation rates (up seven percentage points from the previous term, to 75%) and purchase rates (up four percentage points from the previous term, to 65.8%). The team members noted surprise that, despite the lower cost of the phone-based application ($10), the rate of purchase was higher for stand-alone remotes (44.7%) than for the phone-based apps (21.1%). The team members reported similar perceived usefulness of ARS activities across platforms. Indeed, the vast majority of students reported the TEFA strategies as most useful for an understanding check of course content (97% for ARS and 93.4% for practice quizzes) and weakness identification for remediation (92.4% for ARS and 93.4% for practice quizzes). Further, more than 80% of respondents agreed that the TEFA strategies were useful for providing peer comparisons, authentic preparation, and engaging interactions. It is also key to note that the vast majority of students found the ability to respond anonymously and to receive immediate feedback from the TEFA strategies highly useful (97% for ARS and 93.4% for practice quizzes). The final integration across three instructional terms and two TEFA strategies revealed four mixed insights.

Integrated findings and discussion

The integrated findings across terms are summarized in joint displays (see Tables 2 and 3); the statistically significant differences between fall and winter terms were also reflected in the qualitative data. Four mixed insights were generated and are discussed across the two TEFA strategies and terms: influences on involvement, effects on learning, accessibility of feedback, and impacts on instruction.

Table 3 Practice quizzes joint display

Influences on involvement

The consistently higher participation trends in the practice quizzes than in the ARS may be associated with the access barriers, namely the costs of the remote and phone app, reported by students across the terms. Common across both the ARS and practice quizzes was the consistent perceived usefulness of the TEFA strategies for practicing the types of items expected on the exams. Overall, the students seemed more focused on the usefulness of the TEFA strategies for preparing for the exam than on engaging with the activity for enjoyment. Interestingly, the analysis across terms revealed statistically significant differences between fall and winter (H = 6.003, p = 0.05) in the students’ perceptions about the ARS providing engaging interactions. The qualitative comments and observations provide some context for this finding: in the fall terms, students described the learning effects of the ARS as not being dependent on participating with the remote, whereas these understandings were not evident in the winter term. Further, the consistent 20 percentage point difference in participation rates between the ARS and practice quizzes is noteworthy: even though students also reported access barriers for the practice quizzes, related to the number of attempts and the information they could access, the rates of participation for practice quizzes were higher. These findings may reflect the social context in which each TEFA strategy takes place; whereas the ARS occurs within the classroom as a group activity, the practice quizzes are completed online, independently, and eventually on demand, so they may garner greater participation. These findings align with the suggestion that practice quizzes foster interaction among students and between students and instructors (Sancho & Escudero, 2012) as well as favourable experiences (Blanco & Ginovart, 2013).

Effects on learning

The consistently high levels of agreement in the ratings, observations, and comments across terms about the usefulness of both TEFA strategies provide evidence that the strategies are effective for supporting learning. Overall, the students seemed focused on the usefulness of the TEFA strategies both for checking understandings and for identifying areas of weakness for further study. Interestingly, the analysis across terms revealed statistically significant differences between fall and winter (H = 9.036, p = 0.011) in the students’ perceptions about the ARS for identifying areas of weakness for remediation. The qualitative comments and observations provide some context for this finding: in the fall terms, students described needing to study “more” but not necessarily knowing what to study. These findings, based on several groups of students, begin to address the concerns with previous ARS studies that relied on small sample sizes, as highlighted by Ha and Finkelstein (2013). By comparison, the practice quizzes seemed to support students’ study efforts related to specific content. These findings may reflect the timing of participation in each TEFA strategy: whereas the ARS activities were embedded throughout the term, the practice quizzes were generally taken during exam preparation, when students might be more focused on particular content. These findings suggest that practice quizzes provide students the opportunity to regulate their own learning, but that this learning is predicated on access to information beyond whether their answers are correct. Indeed, the work of Sancho and Escudero (2012) supports this line of thinking: practice quizzes provide the opportunity for immediate feedback to inform students about how they are performing relative to course expectations, yet these activities are most useful when they can be completed on demand and aligned with students’ needs.

Accessibility of feedback

The usefulness of the TEFA strategies for providing timely access to feedback is reflected in the consistently high levels of agreement in the ratings, observations, and comments across terms. Overall, the students seemed focused on the usefulness of the feedback provided by the TEFA strategies for both personal use and peer comparisons. Interestingly, the analysis across terms revealed statistically significant differences between fall and winter (H = 11.498, p = 0.003) in the students’ perceptions about the ARS for comparing with peers. The qualitative comments and observations provide some context for this finding: in the fall terms, the students, observations, and team meetings described the instructors as immediately responding to the group’s ARS results, whereas such instructor responses were not evident in the winter term.

These findings may reflect the instructional use of the feedback provided by each TEFA strategy. The classroom observations provide further evidence of the instructors’ use of individuals’ formative results and of comparisons within and across groups. Whereas the instructor could make explicit use of the ARS in class and impact the students directly, the feedback from the practice quizzes was generally taken into account for the following term and thus was less visible. These findings begin to address the under-represented issue in the ARS literature described by Ha and Finkelstein (2013) related to instructors’ specific formative use and how students perceive the usefulness of the information for their own learning.

Impacts on instruction

The analysis across terms and strategies generated understandings about the instructional adjustments, which were evidence-based decisions. As the team became aware of emerging issues, they would discuss them at the team meetings and then seek further information from the classroom observations, anecdotally from students, and in the end-of-term student questionnaires. For the ARS, the cost of purchasing a remote was an early and recurring issue over the three terms. Even after a lower-cost alternative (i.e., the phone-based app) was offered, ongoing resistance to any cost was apparent. Interestingly, the qualitative data revealed that participation in the ARS activities was not dependent upon purchase of the remote. These findings begin to shed much-needed light, as called for by Offerdahl and Tomanek (2011), on the impact of ARS on instructors’ thought processes and the actions resulting from the implementation of such activities. Students described learning as much from observing the activities as they did from participating, which was not apparent in the quantitative findings alone. Gaining an understanding of the participation rates also led the team to increase the frequency of use in the classroom, both because the students were benefiting from and enjoying the interactions and to justify the cost for many students. In comparison, the adjustments to the practice quizzes responded to demands for unlimited access and greater specificity in the feedback provided. As new understandings emerged about the role of peer comparisons, the team gave access to class results on practice quizzes.

Implications and limitations

The mixed insights offer important implications in terms of advancing theory, research, and practice related to TEFA strategies and highlight the contribution that mixed methods approaches can have in advancing educational technology in higher education. First, in terms of theory and research, the insights speak specifically to how the two strategies examined in this paper (ARS and on-demand practice quizzes) support learning from the perspectives of the instructional team and students. From a theoretical perspective, this study points to educational technology as an important mediator for creating significant classroom-based learning experiences. Specifically, this study points to the effectiveness of the TEFA strategies for supporting student motivation, student use for directing their own learning, and team use for informing instructional adjustments that have effects for students immediately and in the future. The advancements made by Ha and Finkelstein (2013) are an excellent starting place for further quantitative study of the student perspective on the impact of ARS and its use for informing instructional decisions. It is important not to exclude other types of formative feedback from this consideration of effectiveness; for example, while peer interactions are often less timely than ARS or practice quizzes, written peer feedback remains a useful tool for supporting learning (Ion, Barrera-Corominas, & Tomàs-Folch, 2016).

From the examination of the illustrative example of two effective TEFA strategies, it seems that assessing the impacts and influences of educational technology in higher education teaching and learning contexts is also complex, and thus theory and research in this area need to heed the same calls as the field of mixed methods research does more generally. This includes not only continuing to refine theory in light of new and emerging understandings but also continuing to develop new TEFA strategies and ways of capturing the influence and effects of these strategies through the use of mixed methods. Indeed, this reflects the thinking highlighted by Li (2014) that technology-enhanced learning will need to take into account the interrelated relationships among the learner, the learning context, the technology, and (I would argue) the instructor. The use of a mixed methods research approach to the case study impacted this research in three important ways. First, the integration of qualitative and quantitative data at two points of interface generated rich understandings specific to each term, and the cross-analysis of terms and strategies then allowed new trends and differences to be noted. Second, the integration of multiple perspectives provided access to understandings that had been previously inaccessible, notably the instructional adjustments and the reasons underpinning them. Third, the integration of multiple data sources enhanced validity evidence, as it allowed for triangulation within the descriptive case summary; specifically, the qualitative findings provided context for participation trends that had previously not been captured across terms. By capturing the implementation processes over time and across different groups of students rather than simply the outcomes from one term, I offer an illustrative example of an embedded mixed methods case study design reflective of the openness to emerging and innovative approaches that is necessary under the complex conditions in which research is undertaken.

In terms of practice, this study points to three areas to consider when implementing TEFA strategies: frequency of use, cost to students, and ease of on-demand access. Instructors adopting educational technology in their higher education teaching and learning environments need to be aware of the literature guiding implementation of TEFA and of the influences of their own assessment experiences on their instructional practices. This is because researchers have established that instructors’ classroom assessment practices are influenced by their attitudes, prior experiences, knowledge, skills, and motivation related to the nature of assessment and learning (Calderhead, 1996). Meaningful learning interactions incorporating educational technology and formative assessments require appropriate structures (Sorensen & Takle, 2005). Thus, more comprehensive understandings related to influences on involvement, effects on learning, accessibility of feedback, and impacts on instructors could influence future implementations of TEFA and thus have important consequences for practice and impacts on classroom teaching and learning environments.

To provide guidance to educational technology researchers in higher education, I designed this research with dimensions of high-quality mixed methods research (Collins, 2015) and the criteria for publishing (Onwuegbuzie & Poth, 2016; Fetters & Freshwater, 2015) in mind. Based on these standards, this research represents high-quality mixed methods research in the following five ways: the use of literature to clearly justify a mixed methods approach for generating mixed insights; the detailed descriptions of the procedures to create transparency of the methodology used in the case study; the mixing strategy used to address the convergent purpose underpinning the intentional integration; the procedures and practices for maintaining ethical standards, rigor, and complexity built into the design itself; and the validity evidence gathered to support the findings from each strand separately as well as the mixed insights.

The results need to be interpreted with the following two limitations in mind. First, the design is somewhat unique in that it used a convenience concurrent mixed sampling strategy for involvement in the case study because of the relationship between the qualitative data collected from observations and team meetings and the quantitative data collected from student questionnaires. The use of a convenience sample meant that the individuals were available and willing to participate, similar to the concurrent implementation described by Teddlie and Yu (2007). The data captured the key perspectives for the case study, and the minimum sample size recommendations were met (Collins, 2010). Second, the results of the quantitative strand may be underrepresented in the crossover mixed analysis. For example, in the crossover analysis I chose to focus on items rather than on means or relationships at the scale level because the items had been researcher-created; the use of item-level descriptive statistics may nonetheless have simplified the quantitative perspective. Future research could use a similar design with established constructs.

Conclusions

This study illustrates the contributions of novel mixed insights about the influences on and effects of involvement in TEFA strategies, insights with educational significance for policies and practices aimed at enhancing teaching and student engagement in higher education. Specifically, this empirical case study, which focused on ARS and practice quizzes, generated novel insights about the TEFA strategies’ effectiveness in supporting instructional adjustments and peer comparisons that had not been previously documented, and it provided further understandings of the effectiveness of features such as timely and specific feedback, authentic practice, and engaging interactions. The inclusion of TEFAs within higher education classrooms requires new approaches both in the ways people teach and in the ways people learn. For instructors, modeling the desirability and usefulness of feedback for informing adjustments on the go helps students to see the value of such information. For students, embracing the opportunities to guide their own learning is a necessary and appropriate transition in the current higher education context.

The mixed insights were not previously accessible using either qualitative or quantitative research approaches alone; indeed, the mixed methods approach integrating the instructional team and student perspectives was essential. In short, from the results presented in this paper, I advocate for TEFA strategies as an important contributor towards creating continuous assessment systems in which learning and assessment go hand in hand and students are provided the opportunity to drive their own learning. Consequently, it is hoped that this study will stimulate further investigations of the impacts of additional TEFA strategies.

Abbreviations

ARS:

Audience response system

GTA:

Graduate teaching assistant

LMS:

Learning management system

TEFA:

Technology-enhanced formative assessment

References

  • Bennett, R. E. (2011). Formative assessment: A critical review. Assessment in Education: Principles, Policy and Practice, 18(1), 5–25. https://doi.org/10.1080/0969594X.2010.513678.


  • Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy, and Practice, 5(1), 7–74.


  • Blanco, M., & Ginovart, M. (2013). On how Moodle quizzes can contribute to the formative e-assessment of first-year engineering students in mathematics courses. Universities and Knowledge Society Journal, 9, 354–370.


  • Bryman, A. (2006). Integrating quantitative and qualitative research: How is it done? Qualitative Research Journal, 6(1), 97–113. https://doi.org/10.1177/1468794106058877.


  • Calderhead, J. (1996). Teachers’ beliefs and knowledge. In D. C. Berliner, & R. C. Calfee (Eds.), Handbook of educational psychology, (pp. 709–725). New York: Simon & Schuster Macmillan.


  • Collins, K. (2010). Advanced sampling designs in mixed research: Current practices and emerging trends in the social and behavioral sciences. In A. Tashakkori, & C. Teddlie (Eds.), Sage handbook of mixed methods in Social & Behavioral Research, (2nd ed., pp. 353–378). Thousand Oaks: Sage.


  • Collins, N. K. (2015). Validity in multimethod and mixed research. In S. Hesse-Biber, & B. Johnson (Eds.), The Oxford handbook of multimethod and mixed methods research inquiry, (pp. 240–256). New York: Oxford University Press.


  • Creswell, J., & Poth, C. (2017). Qualitative inquiry and research design: Choosing among five approaches (4th ed.). Thousand Oaks, CA: Sage.

  • Creswell, J., & Plano Clark, V. (2018). Designing and conducting mixed methods research (3rd ed.). Thousand Oaks: Sage.


  • Fetters, M., & Freshwater, D. (2015). The 1 + 1 = 3 integration challenge. Journal of Mixed Methods Research, 9, 115–117. https://doi.org/10.1177/1558689815581222.


  • Ha, J. H., & Finkelstein, A. (2013). Understanding the effects of professors’ pedagogical development with clicker assessment and feedback technologies and the impact on students’ engagement and learning in higher education. Computers & Education, 65, 64–76. https://doi.org/10.1016/j.compedu.2013.02.002.


  • Ion, G., Barrera-Corominas, A., & Tomàs-Folch, M. (2016). Written peer-feedback to enhance students’ current and future learning. International Journal of Educational Technology in Higher Education, 13(15), 1–11. https://doi.org/10.1186/s41239-016-0017-y.


  • Kay, R. H., & LeSage, A. (2009). Examining the benefits and challenges of using audience response systems: A review of the literature. Computers & Education, 53(3), 819–827. https://doi.org/10.1016/j.compedu.2009.05.001.


  • Kingston, N., & Nash, B. (2011). Formative assessment: A meta-analysis and a call for research. Educational Measurement: Issues and Practice, 30(4), 28–37.


  • Li, Z. (2014). Rethinking the relationship between learner, learning contexts, and technology: A critique and exploration of Archer’s morphogenetic approach. Learning, Media and Technology, 41, 501–520. https://doi.org/10.1080/17439884.2014.978336.


  • Maier, W., Wolf, N., & Randler, C. (2016). Effects of a computer-assisted formative assessment intervention based on multiple-tier diagnostic items and different feedback types. Computers & Education, 95, 85–98. https://doi.org/10.1016/j.compedu.2015.12.002.


  • Mertens, D. M. (2015). Mixed methods and wicked problems. Journal of Mixed Methods Research, 9, 1–6. https://doi.org/10.1177/1558689814562944.


  • Mertens, D. M., Bazeley, P., Bowleg, L., Fielding, N. G., Maxwell, J. A., Molina-Azorin, J. F., & Niglas, K. (2016). Expanding thinking through a kaleidoscopic look into the future: Implications of the mixed methods international research Association’s task force report on the future of mixed methods research. Journal of Mixed Methods Research, 10, 221–227. https://doi.org/10.1177/1558689816649719.


  • Molina-Azorin, J. F., & Fetters, M. (2016). Mixed methods research prevalence studies: Field specific studies on the state of the art in mixed methods research. Journal of Mixed Methods Research, 10(2), 123–128. https://doi.org/10.1177/1558689816636707.


  • Offerdahl, E. G., & Tomanek, D. (2011). Changes in instructors’ assessment thinking related to experimentation with new strategies. Assessment & Evaluation in Higher Education, 36(7), 781–795. https://doi.org/10.1080/02602938.2010.488794.


  • Onwuegbuzie, A., & Hitchcock, J. H. (2015). Advanced mixed analysis approaches. In S. Hesse-Biber, & B. Johnson (Eds.), The Oxford handbook of multimethod and mixed methods research inquiry, (pp. 275–295). New York: Oxford University Press.


  • Onwuegbuzie, A., & Poth, C. (2016). Editors' afterword: Toward evidence-based guidelines for reviewing mixed methods research manuscripts submitted to journals. International Journal of Qualitative Methods, 15, 1–13. https://doi.org/10.1177/1609406916628986.

  • Poth, C. (in press). The curious case of complexity: Implications for mixed methods research practices. International Journal of Multiple Research Approaches.

  • Plano Clark, V. L., & Ivankova, N. V. (2016). Mixed methods research: A guide to the field. Thousand Oaks: Sage.


  • Sánchez-Vera, M. M., & Prendes-Espinosa, M. P. (2015). Beyond objective testing and peer assessment: Alternative ways of assessment in MOOCs. RUSC. Universities and Knowledge Society Journal, 12(1), 119–130. https://doi.org/10.7238/rusc.v12i1.2262.


  • Sancho, T., & Escudero, N. (2012). A proposal for formative assessment with automatic feedback on an online mathematics subject. Universities and Knowledge Society Journal, 9, 240–260. https://doi.org/10.7238/rusc.v9i2.1285.


  • Shepard, L. A., Hammerness, K., Darling-Hammond, L., Rust, F., Snowden, J. B., Gordon, E., … Pacheco, A. (2005). Assessment. In L. Darling-Hammond, & J. Bransford (Eds.), Preparing teachers for a changing world: What teachers should learn and be able to do, (pp. 275–326). San Francisco: Jossey Bass.


  • Sorensen, E. K., & Takle, E. S. (2005). Investigating knowledge building dialogues in networked communities of practice. A collaborative learning endeavor across cultures. Interactive Educational Multimedia, 10, 50–60.


  • Teddlie, C., & Yu, F. (2007). Mixed methods sampling: A typology with examples. Journal of Mixed Methods Research, 1, 77–100. https://doi.org/10.1177/2345678906292430.


  • Wieman, C. (2010). Why not try a scientific approach to science education? In J. C. Hughes, & J. Mighty (Eds.), Taking stock: Research on teaching and higher learning in education, (pp. 175–190). Montreal: McGill-Queen’s University Press.


  • Wiliam, D. (2016). Leadership for teacher learning. West Palm Beach: Learning Sciences International.


  • Yapp, A., & Poth, C. (2013). Evaluating the impact of formative assessment on large class learning environments: Integrating multiple perspectives. Paper presented at the annual meeting of the Canadian Evaluation Society, Toronto, ON.

  • Yin, R. K. (2014). Case study research: Design and method (5th ed.). Thousand Oaks: Sage.


  • Zhou, G., Kim, J., & Kerekes, J. (2011). Collaborative teaching of an integrated methods course. International Electronic Journal of Elementary Education, 3, 123–138.



Acknowledgements

The author would like to thank Alvin Yapp and Erin Sulla for their assistance collecting data, as well as Lia Daniels for her contributions in the larger project and Adrienne Montgomery for helpful comments during the draft stages of this manuscript.

Funding

This work was supported by a Social Sciences and Humanities Research Council of Canada (SSHRC) Standard Grant (410–2011-0095) and a University of Alberta Teaching and Learning Enhancement Fund Grant (RES0004915).

Author information


Contributions

Not applicable

Corresponding author

Correspondence to Cheryl Poth.

Ethics declarations

Competing interests

The author declares that she has no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Poth, C. The contributions of mixed insights to advancing technology-enhanced formative assessments within higher education learning environments: an illustrative example. Int J Educ Technol High Educ 15, 9 (2018). https://doi.org/10.1186/s41239-018-0090-5
