The aims of the present study were twofold: First, we investigated effects of a single-session training intervention on pre-service teachers’ PV. We expected that the training intervention consisting of a short introductory text about small-group tutoring and exemplifying videos would foster pre-service teachers’ ability to notice and interpret relevant tutoring strategies. Second, we applied two design principles from multimedia learning research to the training intervention to investigate differential effects of design. We hypothesized that both segmenting the video examples and using focused self-explanation prompts for analysis would lead to a higher increase of PV compared to non-segmented videos and open prompts through the course of a one-hour training.
After the training phase, the pre-service teachers noticed more relevant events in a video of small-group tutoring (e.g., the tutor reacting in a suboptimal way to a student’s incorrect answer) and relied more on theoretical knowledge in their interpretations than before training. Additionally, the number of unfocused comments (e.g., about classroom climate) decreased. Thus, the pre-service teachers improved on all PV indicators during the course of the study. Especially with regard to the interpreting component of PV, this finding is remarkable, as interpreting noticed events based on theoretical knowledge seems to be particularly challenging for pre-service teachers (Jacobs et al., 2010; Stürmer et al., 2016).
Contrary to our expectations, there were no differences in improvement between the training conditions. Participants did not improve further in the posttest when they analyzed training video examples that were segmented into meaningful units or when they received additional support for analysis through focused self-explanation prompts. During training, however, both measures supported the pre-service teachers in linking observations from the video to knowledge about tutoring strategies outlined in the introductory text. Thus, we may not speak of an effect of segmenting and focused self-explanation prompts, but only of an effect with segmenting and focused prompts (Salomon, 1990).
The remainder of this discussion section consists of two main parts. First, we discuss four potential explanations for why the additional support measures may not have resulted in higher learning gains in the respective conditions. The first of these explanations concerns segmenting; the second focuses on self-explanation prompts; and the remaining two explanations might apply to both support measures. We conclude each section with implications for further research. Additionally, Fig. 5 summarizes our main findings and implications for each of the hypotheses.
In the second part, we take a broader perspective that goes beyond the particular support measures investigated here. We take a look at the basic idea of applying general design principles from multimedia learning to the teacher education context (especially in PV). Additionally, we discuss advantages and potential downsides of the single-session intervention format used in this study. Based on our findings, but also on theoretical considerations, we formulate some implications for further research on the one hand and for teacher education practice on the other hand.
Explanation 1: Segmenting is tiring, resulting in depleted resources for working on the posttest
Explanation: On the one hand, segmenting the videos supported pre-service teachers in their analyses. On the other hand, the repeated disruption of the natural flow of events might have reduced the videos’ authenticity. Additionally, in the segmented conditions, participants had to comment on the training phase videos seven times in total, whereas participants in the non-segmented conditions commented only twice. Thus, participants in the segmented conditions might have perceived the task as more demanding. When working on a task without breaks (massed practice), working memory resources are depleted more easily than in spaced practice, where breaks between tasks allow resources to be restored (Chen et al., 2018). Although the segmented conditions seemed to provide such breaks at first sight (by switching between watching and commenting more often), the opposite might be the case. The required minimum time on task was the same for all participants; however, participants in the non-segmented conditions might not have used the entire time for commenting. As it was more difficult for them to recall important events from the long video, they might have stopped commenting at some point and simply waited for the timer to run down. Participants in the segmented conditions, however, recalled more relevant events and thus likely used the entire minimum time on task for commenting. Consequently, participants in the non-segmented conditions might have had a short break between the training phase videos, allowing them to restore their working memory resources, while participants in the segmented conditions were active throughout the entire training phase and thus suffered more from fatigue and working memory resource depletion.
We did not find any significant differences in participants’ cognitive load that would support this claim. However, in an open feedback box, 14 participants in the segmented conditions stated that they struggled with staying focused through the course of the study, while only four in the non-segmented conditions did. Thus, participants in the segmented conditions might have indeed benefited more from the training but did not show all their learning gains in the posttest, as they had fewer working memory resources left for completing the posttest task.
Ideas for future work: In further studies, administering the posttest not immediately, but with some delay (e.g., one week) after training, might cancel out the effects of fatigue and resource depletion. Additionally, when implementing a PV intervention with several alternating phases of video viewing and comment writing, teacher educators should make sure to schedule enough breaks so as not to overload learners’ cognitive capacities.
Explanation 2: Focused self-explanation prompts were not focused enough
Explanation: Focused prompts should raise the quality of self-explanations, as they direct the learner’s attention away from irrelevant aspects (Wang & Adesope, 2017). However, in this study, the design of the focused prompts might not have fulfilled its purpose. Participants who received the focused prompts indeed made more comments about relevant events (i.e., strategies that were outlined in the introductory text) than those who received open prompts. However, they also commented more on irrelevant events—on average, eight comments compared to about five in the open-prompts conditions.
Participants might have misperceived the focused prompts as an invitation to be behaviorally active by clicking on as many options as possible, which is a known risk of providing interactivity in learning environments (Atkinson & Renkl, 2007; see also active responding theory; Robins & Mayer, 1993). On the one hand, the provided text stems facilitated comment writing; on the other hand, they might have lured participants into placing quantity over quality. Writing comments without provided text stems, however, required more effort, so participants in the open-prompts conditions might have been more selective about which events they perceived as relevant enough to comment on. Thus, although participants in the focused-prompts conditions commented on more relevant events than participants in the open-prompts conditions, they still might have spent less time elaborating on them.
Ideas for future work: Further studies could investigate self-explanation prompts that provide even more guidance, thereby minimizing the risk of focusing on irrelevant aspects (e.g., a drag-and-drop task with a list of relevant events already provided). Another option to increase guidance could be step-by-step prompts that explicitly guide the pre-service teachers in commenting on observed events.
Explanation 3: The transition from high support to no support is difficult
Explanation: The third explanation applies to both segmenting and focused self-explanation prompts. Throughout the training phase, participants in the segmented conditions immediately commented on short segments that already structured the information and highlighted single events. In the non-segmented conditions, however, participants practiced recognizing relevant events in a steady flow of information, keeping their thoughts in mind until the end of the video. The posttest video was non-segmented and required of the participants exactly what had been practiced in the non-segmented conditions. Similarly, participants in the open-prompts conditions practiced writing their comments freely and structuring them themselves—which was also required in the posttest—while participants in the focused-prompts conditions never practiced writing a comment from scratch.
According to the concept of Transfer Appropriate Processing (TAP; Morris et al., 1977), learners perform best when the activity required in an assessment task resembles the activity that had to be performed during training. In our study, participants in the non-segmented and open-prompts conditions were already familiar with the posttest task and thus, they had an advantage over those participants who received support during training (through segmenting, focused prompts, or both). This advantage in the posttest might have compensated for the differences that were present during training.
To facilitate transfer, one could alter the assessment task to make it more similar to the training task. However, when teaching in a real classroom, teachers can neither stop a situation to immediately think about an event, nor receive scaffolds that help them access their theoretical knowledge about teaching and learning. Thus, in the long-term, PV interventions should ultimately prepare pre-service teachers to analyze classroom situations without support.
Ideas for future work: While segmenting and focused self-explanation prompts are applied to single elements of a learning task (i.e., one video example or one analysis task), the structure of the whole course should also follow instructional design principles. For a PV course aiming toward the long-term goal of pre-service teachers being able to notice and interpret important events without support, the Four Components of Instructional Design (4C/ID) approach for training complex skills (van Merriënboer & Kirschner, 2018) suggests transitioning from high support during the early phases of training to no support during the final phase of training.
The intervention presented in this study was designed as a single session that can be flexibly integrated into existing courses in teacher education (e.g., as a kick-off intervention). However, to facilitate application of the skills learned in this supported environment, one could add an additional fading phase in which support is gradually reduced step-by-step. In terms of the self-explanation prompts, fading from focused prompts (e.g., fill-in-the-blanks) to open prompts (e.g., open questions) has been shown to be effective in other learning domains (Berthold et al., 2009). For segmenting, one option for fading could be a stepwise increase in segment length. However, the task of segmenting classroom video is not trivial, so it might not always be possible to cut a classroom video into segments of the desired length (Hennessy et al., 2016). Another option for fading the support of segmenting could be to separate the two types of support that segmenting provides—highlighting the structure of the information and providing more processing time. To apply fading, one could first withdraw the additional processing time between segments but maintain the support of highlighting structure, for example, by temporarily darkening the screen at meaningful times (Spanjers et al., 2012). Further studies could investigate whether the beneficial effects of segmenting and focused self-explanation prompts during training could translate to the posttest when the intervention is designed according to 4C/ID, meaning that the task is not changed all at once, but rather support is slowly faded out.
Explanation 4: Large effects of the intervention itself overshadowed smaller effects of specific support measures
Explanation: Overall, participants improved to a substantial degree in their PV during the training intervention (large effects for noticing and unfocused comments, medium effect for interpreting). We interpret this large improvement as a result of our intervention design, which followed established guidelines for using video in teacher education (Kang & van Es, 2019). Thus, the combination of a theoretical introduction to a topic and analysis of corresponding video examples might already have been enough to exploit the major advantages of video. However, additional support measures (i.e., segmented videos and focused self-explanation prompts) had no additional effects on participants’ improvement.
While video is considered a valuable tool to foster (pre-service) teachers’ PV (Gaudin & Chaliès, 2015), it is still not used extensively in initial teacher education (Christ et al., 2017). For this study, almost half of the participants (43%) indicated that they had not yet worked with video of any form in the course of their studies. Moreover, the topic of small-group tutoring was relatively new to the participants. The large improvement in PV stemming from a gain in theoretical knowledge and practicing with video examples might have overshadowed smaller differential effects of the additional support measures segmenting and focused self-explanation prompts.
Similarly, in a meta-analysis on simulation-based learning in higher education, Chernikova et al. (2020) found a large beneficial effect of simulations in general but only minor additional effects of extra scaffolding measures within simulations (e.g., checklists or step-by-step guidance). Both simulations and video analysis can be seen as new, interesting, and “refreshing” (as some of the participants in the present study stated) ways of learning. According to the novelty effect (Clark, 1983), learners approach new and unfamiliar media or technologies (such as video or simulations) with increased motivation and effort, which results in high learning gains, especially at the beginning of training. However, this initial increase in motivation fades over time, and beneficial effects of the medium itself disappear as learners become more familiar with the new media and technologies. In sum, the large overall improvement we found in this study could be explained by the fact that both video analysis and the topic of tutoring were relatively new to the participants, and that the combination of a theoretical introduction and corresponding video examples already provided high support. This overall improvement might have overshadowed smaller differences between the training phase conditions.
Ideas for future work: Further studies should investigate whether specific support measures in video-based interventions show their full potential when participants are already accustomed to working with video, thereby avoiding novelty effects. Moreover, in the present study, the introductory text might have already contributed a great deal to the pre-service teachers’ PV improvement. Thus, further studies could investigate a topic more familiar to the participants, so that an introductory text serves more as a reminder than as a source of new knowledge. Reducing the impact of the introductory text on participants’ improvement might make subtler effects of video design and prompt type more visible.
Limitations and further studies
Beyond the suggestions in the preceding sections, we want to propose additional ideas for future work that arise from some of this study’s limitations. In this study, we assessed learning outcomes only with an immediate posttest. Thus, participants had already worked on video analysis for an hour and might have been tired and not fully focused. With this limitation in mind, it is remarkable that we found such a strong increase in performance. However, this increase could also reflect short-term effects. Thus, further studies using a delayed posttest could investigate whether such a condensed training intervention can also induce lasting effects on pre-service teachers’ PV.
To enhance comparability between pretest and posttest, we used videos that were based on the same scripts but played by different actors. Although they were not identical, we cannot rule out potential effects of repetitive practice due to exposure to the same scenes twice. However, pretest and posttest were separated by about a week and participants watched different videos in between, so we assume that pure repetitive practice played a minor role, if any. Nevertheless, in further studies, an additional control group that does not receive the intervention might help to disentangle effects of the intervention from pure effects of repetitive practice.
Implications for research and practice
Considering that PV is a complex skill that takes years of training and experience, this study’s single-session intervention on the very specific topic of small-group tutoring interactions may seem limited in both duration and scope. However, we argue that such single-session PV interventions nevertheless provide a valuable tool for teacher education research and practice. First, short interventions allow systematic investigation of single training elements. With relatively little time and money spent, the effectiveness of different video types, instructions, or additional support elements can be experimentally compared. In the present study, we investigated the design principle of segmenting as well as two types of self-explanation prompts. However, this study procedure could also be used as a template for examining effects of other design principles from multimedia learning (e.g., the signaling principle; van Gog, 2014) to determine those principles that bring the most benefit to the context of teacher education.
Another potential research question to investigate with such a study template concerns the different facets of pre-service teachers’ PV. One could, for example, compare whether different interventions have different effects on their noticing, interpreting, and decision-making. Moreover, teacher educators could use such a condensed video-analysis program to assess major deficits in their students’ PV, so that they can tailor further instruction to their students’ specific needs. Thus, single-session interventions provide a practical research tool to investigate both the effectiveness of PV training elements and the quality of pre-service teachers’ PV components on a micro level (Farrell et al., 2022). Insights gained on the micro level can then be used to inform decisions about the curricular embedding of video analysis in teacher education (Blomberg et al., 2013).
Second, despite its short duration, the intervention investigated in the present study significantly improved biology pre-service teachers’ PV of tutoring interactions, probably because the development of complex skills such as PV follows a power law of practice, which predicts a steep increase in performance in the early phases of learning (Newell & Rosenbloom, 1981). Hence, single-session interventions could offer a complement to longer PV interventions such as video clubs (van Es & Sherin, 2021). One major advantage of such a condensed intervention format lies in its potential to be embedded flexibly into various course formats at various stages of teacher professional development. At university, single-session PV interventions provide an opportunity to integrate practice examples—for example, in the form of a kick-off session—into courses focusing on educational theories, instructional design, or even subject matter, and thus answer the call of teacher educators and researchers to overcome the theory–practice gap in initial teacher education (McDonald et al., 2013).
In the later stages of teacher professional development, where teachers have limited time for continuing training in addition to their daily classroom practice, having condensed PV training formats available might also be particularly helpful. The intervention used in the present study was tailored to the particular prerequisites of pre-service teachers in the early stages of teacher education (i.e., limited prior knowledge, little to no classroom experience). However, support measures that are beneficial for novice learners sometimes have no effect, or even negative effects, for more proficient learners (see the expertise reversal effect, e.g., Lee & Kalyuga, 2014). Thus, we cannot assume that this particular intervention, which contains a high level of support (e.g., tailored introductory texts, short video clips instead of whole lessons, and focused analysis questions), would be appropriate for expert teachers with higher levels of theoretical prior knowledge or more classroom experience. We recommend considering the particular needs of the target group (pre-service teachers, beginning teachers, or experienced teachers) when designing PV training interventions.