
The influence of teacher annotations on student learning engagement and video watching behaviors

Abstract

While course videos are powerful teaching resources in online courses, students often have difficulty sustaining their attention while watching videos and comprehending the content. This study adopted teacher annotations on videos as an instructional support to engage students in watching course videos. Forty-two students in an undergraduate course at a university in Taiwan were randomly divided into a control group that watched a course video without teacher annotations and an experimental group that watched the same video with teacher annotations. The collected data included a learning engagement survey, students’ video watching behaviors, and student interviews. The results showed differences in learning engagement between the control and experimental groups: the teacher annotations increased students’ behavioral and cognitive engagement in watching the video but did not increase their emotional engagement. In addition, this study identified how students learned when watching the course video with teacher annotations in the form of highlights of the video content, literal questions, reflective questions, and inferential questions. The results indicated that teacher annotations and student learning engagement were positively correlated. The students acknowledged that their retention and comprehension of the video content increased with the support of the teacher annotations.

Introduction

The number of online courses in higher education has increased as digital technologies have opened up the possibilities of online learning. The Taiwan Open Course Consortium, for example, was established in 2008 and is now supported by 22 universities to provide students with open online courses in the fields of science, technology, mathematics, and the humanities for life-long learning (Sheu & Shih, 2017). Nearly 70% of universities in Taiwan have already developed online programs to enrich the breadth and depth of educational opportunities and interconnectivity, and over 12,000 college students have taken at least one online course. The high registration rates of online courses have led researchers to explore effective online teaching models by analyzing students’ online learning behaviors, learning strategies (Tsai, Lin, Hong, & Tai, 2018), and learning experiences (Shen, Cho, Tsai, & Marra, 2013). However, little research has paid attention to the course videos which students watch to acquire knowledge in online courses. One research topic that remains underexplored is how to improve students’ learning engagement when watching course videos.

Course videos are powerful primary teaching materials that teachers use to present learning content in online courses in subjects such as language, science, engineering, and physics (Aldera & Mohsen, 2013; Dufour, Cuggia, Soula, Spector, & Kohler, 2007). The popularity of course videos in online courses is attributed to their multimedia presentation of learning content using moving images, audio, and text, which helps students gain a deep understanding of abstract content that can be difficult to verbalize but easy to demonstrate (Lange & Costley, 2020). The visual and auditory features of course videos also enable students to better retain learning content in online courses (Jonassen, Peck, & Wilson, 1999; Schnotz & Rasch, 2005). Another useful feature of course videos is their playback controls, such as “play,” “forward,” and “stop,” which allow students to consume learning content at their own pace. Based on these features, video-based content has been shown to improve students’ learning outcomes more than reading-based content (Love, Hodge, Grandgenett, & Swift, 2014).

Although course videos are powerful teaching resources in online courses, problems with course videos have been discussed in the literature. For example, some students cannot sustain their attention when the video content is long and difficult to understand (Hughes, Costley, & Lange, 2019). The absence of teacher support is another problem with course videos. Without teacher support, students, especially beginners not familiar with video topics, often encounter difficulty in comprehending the video content and sustaining their attention (Homer, Plass, & Blake, 2008). Thus, teacher support should be provided to assist students to comprehend video content and improve their learning engagement (Ronchetti, 2010; Zhang, Zhou, Briggs, & Nunamaker Jr, 2006). This study adopted teacher annotations on videos as a method of instructional support to address the following research questions:

  • What is the difference in learning engagement between the students who watched a course video with teacher annotations and those who watched a course video without teacher annotations?

  • What is the relationship between the teacher annotations and the students’ learning engagement?

  • What are the students’ perceptions of watching a course video with teacher annotations?

Literature review

Learning engagement and technologies

Learning engagement refers to the time and effort which students invest in learning activities (Heflin, Shewmaker, & Nguyen, 2017). It typically involves three components: behavioral engagement, cognitive engagement, and emotional engagement (Appleton, Christenson, & Furlong, 2008). Behavioral engagement is the students’ participation in learning activities, such as playing, stopping, and rewinding the course video (Fredricks, Blumenfeld, Friedel, & Paris, 2005; Goggins & Xing, 2016). Cognitive engagement is the cognitive effort students make to acquire and comprehend complex concepts, and it involves the use of higher-order thinking skills such as analysis, reasoning, and critiquing (Finn & Zimmer, 2012; Ding, Kim, & Orey, 2017). Emotional engagement is regarded as students’ psychological perceptions of learning activities (Jung & Lee, 2018). These three components of learning engagement have been used as criteria to predict students’ learning performance and success in learning environments (Phan, McNeil, & Robin, 2016).

Past researchers (e.g., Chen & Chiu, 2016; Topu & Goktas, 2019) have explored technologies to foster student learning engagement. These technologies include gamification tools (Ding et al., 2017), Information and Communication Technologies (ICTs) (Chen, Lambert, & Guidry, 2010), and 3D virtual environments (Topu & Goktas, 2019). Göksün and Gürsoy (2019), for example, utilized gamification tools such as Kahoot and Quizizz to promote learning engagement and academic performance. They divided a total of 97 participants into a Kahoot experimental group (N = 30), a Quizizz experimental group (N = 33), and a control group (N = 34). The three groups underwent six weeks of instructional activities and took an academic achievement test and a student engagement survey before and after the activities. Their results showed that the Kahoot experimental group scored higher than the control group on both the student engagement survey and the academic achievement test. In the same vein, Rashid and Asghar (2016) found that students who used ICTs such as social media, internet search engines, and video games in their learning scored higher on traditional student engagement measures. In addition, 3D virtual environments such as Second Life have been developed to let students work on realistic tasks and interact with 3D avatars, effectively engaging them in learning activities (Topu & Goktas, 2019). These research findings demonstrate the potential of using technologies to improve student learning engagement.

Learning engagement in video watching

Although the value of using technologies to support learning engagement has been well established, few studies have investigated how to promote student learning engagement in watching online course videos. Past studies mainly focused on the development of annotation tools to help students comprehend video content. These annotation tools included the Microsoft Research Annotation System (MRAS), the Media Annotation Tool (MAT), VideoANT, Open Video Annotation, and Annotating Academic Video. MRAS was the earliest video annotation tool, designed by Bargeron, Gupta, Grudin, and Sanocki (1999) to enable students to take notes on a section of a video while watching it. Bargeron et al. (1999) compared video annotations with handwritten note-taking to investigate students’ preferences between the two. The results showed that the students preferred video annotations over traditional note-taking because taking notes on the video made it easier to organize and contextualize the notes within the video content. Later, video annotation tools evolved from individual annotations to collaborative annotations, which encourage group discussion of video content. MAT, for example, was developed by RMIT University to allow students to read each other’s video annotations and provide feedback. With these tools, students were able to annotate a video section and read and reply to peers’ video annotations. However, asking students to annotate videos did not necessarily enhance their learning engagement in watching online course videos (Piolat, Olive, & Kellogg, 2005; Risko, Foulsham, Dawson, & Kingstone, 2013).

According to the cognitive theory of multimedia learning (CTML; Mayer, 2001), students’ difficulty in annotating videos is due to the limited cognitive capacity of working memory. Students can only process a small portion of a video’s visual and auditory information in working memory at one time. They may stop and replay the video content several times to reflect on it and identify the gap between the video content and their existing knowledge structure. Annotating videos can thus become an intensive and overwhelming cognitive activity for students. Piolat et al. (2005) likewise found that students felt cognitively overwhelmed when annotating and watching videos at the same time: annotating became a distraction, especially when the students were not familiar with the video content. Asking students to annotate videos therefore significantly increased their cognitive load. In addition, students’ own annotations on videos are often not thought-provoking enough to promote deeper reflection on the video content. Risko et al. (2013) indicated that students preferred teachers to annotate video content so they could read the annotations to better comprehend the video. These studies suggest that teacher annotations on videos are necessary to engage students in watching course videos.

The impact of teacher annotations on student learning engagement in watching videos remains underexplored because past research has focused on students’ comprehension of video content and on student annotations (e.g., Mu, 2010; Li, Kidziński, Jermann, & Dillenbourg, 2015). It is still unknown how teacher annotations can support student engagement in watching course videos. In addition, previous studies relied on self-reported data, such as interviews. The use of self-reported data has been criticized because it might provide a biased result rather than an accurate depiction of actual learning behaviors (Gonyea, 2005). Video analytic techniques are therefore encouraged to supplement self-reported data and ensure that interpretations of the results are valid and consistent. To fill these research gaps, this study investigated how teacher annotations engage students in watching course videos through video analytics, which can capture a more objective and nuanced picture of students’ behaviors while watching videos embedded with teacher annotations.

The rationale of teacher annotations on videos

Teacher annotations on videos are defined as notes added to a specific segment of a video (Cross, Bayyapunedi, Ravindran, Cutrell, & Thies, 2014). They are regarded as a teaching strategy that aligns with students’ cognitive structures and overcomes the limited capacity of working memory by attracting attention and by indexing and organizing information (Barak, Herscoviz, Kaberman, & Dori, 2009; Jones, Blackey, Fitzgibbon, & Chew, 2010). These features support the cognitive process of watching course videos by helping students (a) focus their attention on an important segment of video content, (b) interpret or summarize video content, and (c) share personal reflections on the video content (Bargeron et al., 1999; Ibrahim, Callaway, & Bell, 2014).

The demonstration-based training (DBT) model could be used as a pedagogical design to integrate video annotations into course videos through four interrelated processes: attention, retention, production, and motivation (Grossman, Salas, Pavlas, & Rosen, 2013). Attention refers to the active process of filtering and selecting video content to be transferred into working memory (Anderson, 2010). To sustain students’ attention to the video content, teachers can highlight key points by adding an arrow or a circle (Richter, Scheiter, & Eitel, 2015). The second process of the DBT model, retention, is the process of organizing and integrating video information into the existing knowledge structure in the long-term memory (Bandura, 1986). One strategy for supporting students’ retention of video content via video annotation is the use of tagging. Tagging is used to annotate the key points of a short video section with a word or a short phrase. The tags on the video section can help students summarize and grasp the structure and concepts of the learning content.

The last two processes of the DBT model are production and motivation. Production entails students using what they have learned from the video content to accomplish a learning task, and motivation refers to students’ engagement in the process of watching course videos (Bandura, 1986). To facilitate production and motivation, teachers can use video annotations to build an interactive learning environment by generating questions on a video section for students to answer. The teachers’ questions on the video section, furthermore, can prompt students to re-examine, clarify, defend, or elaborate on their thoughts about the video content, thus leading to deeper comprehension and engagement (Kessler & Bikowski, 2010; Storch, 2005).

Methods

Participants

Forty-two students were recruited from an undergraduate course at a university in Taiwan. The age of the students ranged from 20 to 22. Among the 42 students, 25 were female and 17 were male. They had taken online courses in which they were required to watch online course videos to acquire knowledge. However, this was the first time for them to watch a course video with the support of teacher annotations. The 42 students were randomly divided into a control group (N = 20) that watched a video without teacher annotations and an experimental group (N = 22) that watched a course video with teacher annotations.

Research design

The research was conducted over 9 weeks. In week 1, the researcher randomly divided the students into the control and experimental groups and introduced the research project. When introducing the research project, this study adopted a blinding strategy to reduce the Hawthorne effect. The researcher did not announce that the students would be allocated to either the control or the experimental group, but instead stated that all students would watch a course video with teacher annotations during one of two periods: before or after week 9. Before week 9, the experimental group watched the video with teacher annotations, while the control group watched the video without teacher annotations. After week 9, the researcher did not collect data but allowed the control group to watch the video with teacher annotations and the experimental group to watch the video without them. In this way, the students received an equal opportunity to watch the video with teacher annotations and did not know they had been allocated to either the control or the experimental group, which reduced the Hawthorne effect. From weeks 2 to 4, the researcher selected and annotated a course video from YouTube. The video was around 15 min long and presented a story from Chinese literature which the students had never seen before. VideoANT, a web-based annotation tool, was used to allow the researcher to make timeline-based textual comments synchronized with the video and to share the annotations with the students. VideoANT was selected for several reasons. First, VideoANT supports teacher-student interactions by allowing teachers to (a) tag a video segment on which they wish to comment, and (b) share their annotations for their students to read and reply to (see Fig. 1), while most video annotation tools, such as TurboNote, ReClipped, and Cincopa, do not allow students to provide feedback on teachers’ annotations. Second, VideoANT is an open educational resource offered to everyone, unlike commercial annotation tools which require users to pay to access their full set of features. Third, as VideoANT has been widely applied in educational settings by previous researchers, its use in this study could contribute to the literature by providing pedagogical suggestions for its further use.

Fig. 1 The VideoANT interface

During weeks 5 and 6, the researcher assigned the annotated course video for the students to watch. Both the control and the experimental groups watched the same video; the students in the experimental group watched the video with the teacher annotations, while the control group watched it without seeing the annotations. The researcher, as the teacher, used the DBT model as an instructional framework to annotate the course video in three ways: by tagging, questioning, and signaling (see Fig. 2). Based on the DBT model, the teacher annotations were categorized into (a) highlights of the video content with a phrase or a keyword, (b) literal questions about the video content, (c) reflective questions about the video content, and (d) inferential questions about the video content. In the course video, the teacher annotations consisted of 3 highlights of video content, 2 literal questions, 2 reflective questions, and 2 inferential questions. Students in the experimental group were told they could decide whether to view the teacher annotations and respond to them. After watching the video, both groups immediately took a learning engagement survey and were also asked to write a short reflection on their learning experience while watching the video.

Fig. 2 DBT model of integrating video annotations into instructional videos

Data collection and analysis

The collected data included (a) a learning engagement survey, (b) students’ video watching behaviors, and (c) student interviews. The learning engagement survey was distributed to the control and experimental groups after they watched the video to examine the differences in learning engagement between the two groups (RQ1). The survey was adapted from the School Engagement Measure (SEM; Fredricks et al., 2005), which consists of 21 Likert-scale items (5 = agree; 4 = basically agree; 3 = hard to say; 2 = not quite agree; 1 = disagree). Each of the 21 items was categorized as behavioral, emotional, or cognitive engagement. Example items for the three categories are “I watched the video several times towards understanding the course video” (behavioral engagement), “I am interested in participating in watching the course video” (emotional engagement), and “I always try to understand the course video even if it is not easy” (cognitive engagement). Fredricks et al. (2005) reported Cronbach’s alpha values of 0.72 for behavioral engagement, 0.83 for emotional engagement, and 0.84 for cognitive engagement. The surveys of the control and experimental groups were analyzed using a multivariate analysis of variance (MANOVA) to compare the two groups’ learning engagement in watching the course video.
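
Such a MANOVA can be reproduced with standard statistical software. The following is a minimal sketch in Python with statsmodels, using illustrative data; the column names and values are hypothetical, and the article does not report which software was actually used.

```python
# Minimal MANOVA sketch with illustrative data; column names are hypothetical.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# One row per student: condition plus the mean score for each engagement component
df = pd.DataFrame({
    "group":      ["control"] * 4 + ["experimental"] * 4,
    "behavioral": [2.8, 3.1, 2.5, 3.0, 4.1, 3.9, 4.3, 3.8],
    "emotional":  [3.2, 3.5, 3.0, 3.4, 3.6, 3.3, 3.7, 3.5],
    "cognitive":  [3.0, 3.3, 2.9, 3.2, 4.0, 3.8, 4.2, 3.9],
})

# mv_test() reports Wilks' lambda, F, and p for the multivariate group effect
fit = MANOVA.from_formula("behavioral + emotional + cognitive ~ group", data=df)
print(fit.mv_test())
```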

The students’ video watching behaviors were recorded via a screen recorder app. Students were interviewed to explain their watching behaviors, including clicking on the teacher annotations and playing, rewinding, and forwarding the video. Examples of the interview questions included “Why did you pause, resume, or forward the video at this moment?”, “Why did you click on the teacher annotations?”, and “Why did you click on your peers’ answers to the teacher annotations?”. The students’ video watching behaviors were analyzed together with the learning engagement survey using the Pearson correlation to investigate the relationship between the teacher annotations and student learning engagement (RQ2). Interview data were then collected to further explain how the teacher annotations were related to the students’ learning engagement in watching the video. The extreme-case sampling strategy was used to select students from the experimental group for the interview, as Patton (2002) indicated that “more can be learned from intensively studying extreme or unusual cases than can be learned from statistical depictions of what the average case is like” (p. 170). Thus, the three students who had the highest scores on the learning engagement survey were interviewed by the researcher at the end of the experiment and asked to explain their video watching behaviors. The interview data were analyzed using Braun and Clarke’s (2006) thematic analysis. Three themes related to students’ learning paths emerged from the data: reflective questions, literal questions, and the teacher’s highlights of video content.
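
For RQ2, each student’s click counts (extracted from the screen recordings) were paired with their survey scores. A minimal sketch of such a correlation in Python with SciPy, using illustrative values rather than the study’s actual data:

```python
# Pearson correlation between annotation clicks and engagement (illustrative data)
from scipy.stats import pearsonr

# Clicks on literal-question annotations, one value per student (hypothetical)
literal_clicks = [4, 2, 5, 1, 3, 0, 2, 4]
# Mean cognitive-engagement score per student (hypothetical)
cognitive_scores = [4.3, 3.6, 4.5, 3.1, 4.0, 2.8, 3.4, 4.2]

r, p = pearsonr(literal_clicks, cognitive_scores)
print(f"r = {r:.3f}, p = {p:.3f}")  # compare with the values reported in the Results
```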

The student interviews were conducted with the students in the experimental group after they watched the course video with annotations. During the interviews, the students responded to such questions as “Why did you pause or resume the video at this moment?”, “Why did you click on the teacher annotations?”, “What did you like or dislike about the teacher annotations?”, and “How did you benefit from the teacher annotations when watching the video?” to explore their perceptions of watching the course video with teacher annotations (RQ3). The students’ responses were analyzed using an inductive analysis approach (Attride-Stirling, 2001; Seidman, 2006) featuring a sequence of activities: (1) organizing and reading through the data, (2) coding the data, (3) generating themes, (4) interrelating themes, and (5) interpreting the themes.

Results

Differences in learning engagement between the students who watched the course video with teacher annotations and those who watched the video without annotations

The students who watched the course video with teacher annotations and those who watched the video without annotations all completed the learning engagement survey after watching the video. Table 1 shows the means and standard deviations of each learning engagement variable for the two groups. The standard deviations of the three learning engagement variables were below 1, indicating that the scores clustered closely around the means. The multivariate result shows a significant difference in the learning engagement of the two groups, Wilks’ lambda = 0.49, F = 13.08, p < 0.001 (see Table 2). The effect size (eta squared, η²) was calculated to measure the magnitude of the difference when a significant difference was observed. Cohen (1992) stated that the larger the effect size, the stronger the relationship between variables, and suggested that effect sizes of 0.20, 0.50, and 0.80 denote small, medium, and large effects, respectively. The η² of 0.58 reached a medium effect size, showing that 58% of the variance in learning engagement was associated with the teacher annotation scaffold. The results suggest that the use of teacher annotations on the course video produced the differences in student learning engagement in watching the video.

Table 1 Means and standard deviations for learning engagement in video watching
Table 2 Multivariate effects for learning engagement in video watching

Univariate ANOVAs were then conducted to investigate the difference in each learning engagement variable between the two groups. The results show significant differences in behavioral engagement, F(1, 40) = 33.31, p < 0.001, and cognitive engagement, F(1, 40) = 15.29, p < 0.001, but no significant difference in emotional engagement, F(1, 40) = 3.32, p = 0.08. There was a medium effect size for behavioral engagement, η² = 0.45, and a small effect size for cognitive engagement, η² = 0.28. The ANOVA results (see Table 3) suggest that the teacher annotations could increase students’ behavioral and cognitive engagement in watching the course video, but not their emotional engagement.

Table 3 Univariate effects for learning engagement in video watching
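
As a companion to Table 3, the univariate follow-up for a single engagement variable can be sketched as a one-way ANOVA, with η² computed as the between-group share of the total sum of squares. The scores below are illustrative, not the study’s data:

```python
# One-way ANOVA with eta squared for one engagement variable (illustrative data)
import numpy as np
from scipy.stats import f_oneway

control = np.array([2.8, 3.1, 2.5, 3.0, 2.7, 3.2])       # hypothetical scores
experimental = np.array([4.1, 3.9, 4.3, 3.8, 4.0, 4.2])  # hypothetical scores

F, p = f_oneway(control, experimental)

# eta squared = SS_between / SS_total
scores = np.concatenate([control, experimental])
grand_mean = scores.mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in (control, experimental))
ss_total = ((scores - grand_mean) ** 2).sum()
print(f"F = {F:.2f}, p = {p:.3f}, eta^2 = {ss_between / ss_total:.2f}")
```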

Relationship between the teacher annotations and learning engagement

The teacher annotations consisted of 2 literal questions, 3 highlights of video content, 2 reflective questions, and 2 inferential questions. The average number of student clicks on these four types of teacher annotations is shown in Table 4. Literal questions and highlights of video content were the two types of teacher annotations that the students clicked on most when watching the course video. The Pearson correlation results indicate that the literal questions were positively correlated with cognitive engagement, r = 0.484, p = 0.022, and behavioral engagement, r = 0.536, p = 0.010. The highlights of video content were positively correlated with cognitive engagement, r = 0.591, p = 0.004, as were the reflective questions, r = 0.561, p = 0.007. However, the inferential questions were not related to any type of learning engagement. These results suggest that the teacher’s literal questions could increase students’ behavioral and cognitive engagement in watching the course video, while the highlights of the video content and the reflective questions could promote students’ cognitive engagement.

Table 4 The total number of student clicks on the teacher annotations (N = 22)

Student 1 (S1), student 5 (S5), and student 9 (S9) were selected from the experimental group to illustrate how the teacher’s literal questions, highlights of the video content, and reflective questions enhanced student behavioral and cognitive engagement in watching the course video. S1, S5, and S9 were selected using the extreme-case sampling strategy to represent the students with higher behavioral and cognitive engagement (see Table 5). Emotional engagement was not discussed because no significant improvement in emotional engagement was found among the students.

Table 5 The behavioral and cognitive engagement of the selected students

Teacher annotations: literal questions

The teacher’s literal questions were intended to elicit responses directly stated in the video content. Figure 3 shows an example of the learning path prompted by a literal question: “What did the little boy take from the tree?” After clicking on the literal question, S1 went back to a video segment to review the video content. She also clicked on the teacher’s highlights of the video content to look for the answer and reviewed the question before answering. The learning path of the literal question was thus an interactive process in which S1 rewound the video, clicked on the teacher’s highlights, and reviewed the literal question several times until she found the right answer.

Fig. 3 S1’s learning path following the literal question

As Fig. 3 shows, S1 clicked on the literal question 4 times, rewound the video 6 times, and clicked on the teacher’s highlights 6 times. When asked to describe her learning path following the literal question, S1 stated, “It is not easy to find the correct answer to the teacher’s questions [the literal questions]. So I watched the video back and forth and read the question many times to get the right answers.” She further elaborated, “But I liked the experience. The questions pushed me to spend more time watching the video and I was able to better remember the video content.” These results indicate that the literal questions which the teacher annotated on the video promoted the student’s cognitive and behavioral engagement by stimulating the student to review and remember the video content.

Teacher annotations: reflective questions

The reflective questions asked the students to relate the video content to their life experiences or share personal feelings about the video. The learning path prompted by the reflective question was different from that of the literal questions. As Fig. 4 shows, S5 stopped the video 2 times and read peers’ responses 4 times after clicking on the reflective question “Has anyone done the same thing to you in your life?”

Fig. 4 S5’s learning path following the reflective questions

Stopping the video and viewing peers’ responses were the two unique behaviors prompted by the reflective question. When asked about his learning path following the reflective question, S5 explained that he stopped the video because “the questions [reflective questions] were about personal stuff. Even though I could understand the video content, I still need time to find the connection between the video content and my life.” The statement shows that the reflective questions led the students to go beyond their current understanding of the video content to build a connection between the video content and their life experiences. The reflective questions therefore increased the students’ cognitive engagement in video watching. S5 also stated that reading peers’ responses to the reflective questions was interesting because “everyone’s answers were different. So, I could learn something from reading peers’ responses.” The statement demonstrates that the reflective questions were positively related to student cognitive engagement because they motivated the students to discuss and analyze the video with their classmates, which helped them comprehend the video from different perspectives.

Teacher annotations: highlights of video content

Highlighting video content was a means of signaling important points in the video’s message. S9’s learning path prompted by the teacher’s highlights involved first clicking on the highlights, then continuing the video, and finally going back to review the highlights.

Figure 5 shows that S9 reviewed the teacher’s highlights 5 times. Such behaviors represented S9’s cognitive engagement in comprehending the video content. S9 explained, “because the teacher’s highlights were just like hints for the important message of the video content, I think it is important to read them carefully.” In other words, the teacher’s highlights directed the students’ attention to the important video content and encouraged them to devote more cognitive effort to understanding the main video messages by reviewing them.

Fig. 5 S9’s learning path prompted by the teacher’s highlights

Students’ perceptions of watching the course video with the teacher annotations

This was the first time the students had watched a course video with the assistance of teacher annotations. Nineteen of the 22 students enjoyed the learning experience, as S2’s comment reflects: “The activity was fun. In the past, the teacher simply asked us to watch the video, which was a little boring.” These 19 students identified the benefits of the teacher annotations. For example, four students made the following statements in their reflections:

“I can remember the important message of the video content with the teacher’s highlights” (S2).

“I can better understand the video content” (S3).

“The teacher questions helped me recall the video content” (S8).

“I replayed the video several times in order to answer the teacher questions” (S17).

As these statements show, the first benefit of the teacher annotations for the students was to help them comprehend the video at the surface level. For example, the teacher’s highlights made it easy for the students to locate the main video message so that they could remember the factual information in the video. The teacher questions were also useful in helping them retain the video content since the students reviewed the video in order to answer the questions correctly.

Deep comprehension of the video content was achieved by the students with the assistance of the teacher annotations and their peers’ responses. For instance, six students made the following statements in their reflections:

“I liked the teacher questions, because the teacher questions pushed me to think more about the video content” (S9).

“I especially liked the peer responses. After reading those responses, I would have more ideas or questions about the video content” (S15).

“Reading my classmates’ responses was the fun part because some of their answers were creative and interesting” (S17).

“They [the responses] helped me clarify the points I did not understand in the video” (S20).

“I spent more time reading peers’ responses [to the teacher questions]” (S1).

“I like the teacher annotations. But it is more interesting to read peers’ responses to the teacher annotations.” (S14).

As these excerpts demonstrate, the teacher questions and the peers’ responses opened the students’ eyes to different ways of interpreting the video content. Specifically, the peers’ responses helped the students extend the video content beyond their personal understanding, as S15, S17, and S20 stated. The excerpts from S1 and S14 also show that the peer responses helped sustain students’ attention and enhanced their enjoyment of the video activity. The results suggest that teachers should focus more on question-type annotations to increase students’ behavioral and emotional engagement in watching course videos.

Discussion and conclusions

This study examined the influence of teacher annotations on student learning engagement using the annotation tool VideoANT. The results corroborate previous studies (e.g., Colasante & Douglas, 2016; Miller, Zyto, Karger, Yoo, & Mazur, 2016; Mirriahi, Jovanovic, Dawson, Gašević, & Pardo, 2018; Pardo et al., 2015) showing that annotation tools are beneficial for enhancing student learning engagement. The results further showed that the teacher annotations fostered the students’ behavioral and cognitive engagement, but not their emotional engagement. These results echo Colasante and Lang’s (2012) finding that students’ emotional engagement decreased even while they actively engaged in learning with teacher annotations through a media annotation tool. This study attributes the students’ low emotional engagement to the annotations distracting them from watching the video, as several students indicated that they could not enjoy the video because they needed to pay attention to the teacher annotations. However, this study would argue that the students’ low emotional engagement did not undermine their cognitive and behavioral engagement, since the students who saw the annotations demonstrated higher behavioral and cognitive engagement. Teachers can thus add annotations to a course video to promote students’ learning engagement and increase their comprehension of the video content, even though this may decrease students’ motivation. Future studies may explore how to foster students’ emotional engagement when watching course videos with teacher annotations.

The main contribution of this study is to present the learning paths of the students prompted by the teacher annotations, including the teacher’s highlights of the video content, literal questions, and reflective questions. These learning paths reveal how the teacher annotations benefited students’ learning of the course video in terms of Bloom’s taxonomy. The teacher’s highlights supported students in “remembering” the video content, as the students clicked on the highlights to review important segments of the video. These results are in line with the findings of previous studies (Ibrahim, Callaway, & Bell, 2014) stating that annotations can reduce students’ cognitive load in memorizing the course video. In addition, the learning paths showed that the teacher’s literal questions prompted students to stop, rewind, and replay the video, suggesting that the literal questions motivated the students to clarify the video content and enhance their “comprehension” of the video. The other type of teacher annotation was the reflective questions, which encouraged the students to view their peers’ responses. The results showed the potential of reflective questions to help the students discuss and “analyze” the video content, as S15 said in the results section: “I especially liked the peer responses. After reading those responses, I would have more ideas or questions about the video content.” These results suggest that the teacher annotations effectively promoted students’ cognitive engagement by helping them remember and understand the course video content.

This study concludes that the teacher annotations and student learning engagement were positively correlated. Nineteen out of 22 students in this study acknowledged the value of the teacher annotations on the course video, as they perceived that their retention and comprehension of the video content increased with the support of the annotations. In line with previous studies (Chen, Li, & Chen, 2020), several students pointed out that they really liked the fact that the annotation tool allowed them to view peers’ responses to the teacher’s questions. In addition, this study found that very few students responded to the peers’ responses that they read, possibly because the researcher did not give enough instruction regarding how to respond to peer feedback on the annotation. Future studies may explore how video annotation tools and teacher annotations can support collaborative learning while students watch the course video and investigate the interactive patterns among students that lead to successful performance.

There are a few limitations to this study. First, a novelty effect often occurs with the introduction of a new technology (Tsay, Kofinas, Trivedi, & Yang, 2020), and the introduction of teacher annotations might have produced a novelty effect on student learning engagement. To address this, this study utilized qualitative data, including students’ video watching behaviors and student interviews, to examine and reveal students’ motivations to engage with the teacher annotations, as addressed in the second and third research questions. Future studies are also encouraged to conduct longitudinal research to examine the novelty effect on student learning engagement with teacher annotations. Second, this study could only recruit 42 students due to the policy of maintaining small class sizes to improve student learning. The small sample size may limit the generalizability of the quantitative results; however, this study collected students’ video watching behaviors and student interviews to validate the quantitative results and offer insights not visible in the quantitative results alone. Third, this study did not investigate how teacher annotations help beginners. Future studies can consider students’ proficiency levels to explore the impact of teacher annotations on student learning engagement and exam performance.

Availability of data and materials

The data of this study are not open to the public due to participant privacy.

References

  • Aldera, A. S., & Mohsen, M. A. (2013). Annotations in captioned animation: Effects on vocabulary learning and listening skills. Computers & Education, 68, 60–75.


  • Anderson, J. R. (2010). Cognitive psychology and its implications (7th ed.). New York: Worth.


  • Appleton, J. J., Christenson, S. L., & Furlong, M. J. (2008). Student engagement with school: Critical conceptual and methodological issues of the construct. Psychology in the Schools, 45(5), 369–386.


  • Attride-Stirling, J. (2001). Thematic networks: An analytic tool for qualitative research. Qualitative Research, 1(3), 385–405.


  • Bandura, A. (1986). Social foundations of thought and actions: A social cognitive theory. Englewood Cliffs: Prentice Hall.


  • Barak, M., Herscoviz, O., Kaberman, Z., & Dori, Y. J. (2009). MOSAICA: A web-2.0 based system for the preservation and presentation of cultural heritage. Computers & Education, 53(3), 841–852.


  • Bargeron, D., Gupta, A., Grudin, J., & Sanocki, E. (1999). Annotations for streaming video on the Web: System design and usage studies. Computer Networks, 31(11), 1139–1153.


  • Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101.


  • Chen, C. H., & Chiu, C. H. (2016). Employing intergroup competition in multitouch design-based learning to foster student engagement, learning achievement, and creativity. Computers & Education, 103, 99–113.


  • Chen, C. M., Li, M. C., & Chen, T. C. (2020). A web-based collaborative reading annotation system with gamification mechanisms to improve reading performance. Computers & Education, 144, 1–14.


  • Chen, P. S. D., Lambert, A. D., & Guidry, K. R. (2010). Engaging online learners: the impact of web-based learning technology on college student engagement. Computers & Education, 54, 1222–1232.


  • Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155–159.


  • Colasante, M., & Douglas, K. (2016). Prepare–Participate–Connect: Active learning with video annotation. Australasian Journal of Educational Technology, 32(4), 68–91.


  • Colasante, M. & Lang, J. (2012). Can a media annotation tool enhance online engagement with learning? A multi-case work-in-progress report. In J. Cordeiro, M. Helfert and M. J. Martins (Eds.), Proceedings of the 4th International Conference of Computer Supported Education (vol. 2, p. 455–464). Porto, Portugal: The Institute for Systems and Technologies of Information, Control and Communication.

  • Cross, A., Bayyapunedi, M., Ravindran, D., Cutrell, E., & Thies, W. (2014). VidWiki: enabling the crowd to improve the legibility of online educational videos. In Proceedings of the 17th ACM conference on computer supported cooperative work & social computing (pp. 1167–1175). ACM.

  • Ding, L., Kim, C., & Orey, M. (2017). Studies of student engagement in gamified online discussions. Computers & Education, 115, 126–142.


  • Dufour, J. C., Cuggia, M., Soula, G., Spector, M., & Kohler, F. (2007). An integrated approach to distance learning with digital video in the French-speaking Virtual Medical University. International Journal of Medical Informatics, 76(5–6), 369–376.


  • Finn, J. D., & Zimmer, K. S. (2012). Student engagement: What is it? Why does it matter? Handbook of research on student engagement (pp. 97–131). Boston: Springer.


  • Fredricks, J. A., Blumenfeld, P., Friedel, J., & Paris, A. (2005). School engagement. What do children need to flourish? (pp. 305–321). Boston: Springer.


  • Goggins, S., & Xing, W. (2016). Building models explaining student participation behavior in asynchronous online discussion. Computers & Education, 94, 241–251.


  • Göksün, D. O., & Gürsoy, G. (2019). Comparing success and engagement in gamified learning experiences via Kahoot and Quizizz. Computers & Education, 135, 15–29.


  • Gonyea, R. M. (2005). Self-reported data in institutional research: Review and recommendations. New Directions for Institutional Research, 2005(127), 73–89.


  • Grossman, R., Salas, E., Pavlas, D., & Rosen, M. A. (2013). Using instructional features to enhance demonstration-based training in management education. Academy of Management Learning & Education, 12(2), 219–243.


  • Heflin, H., Shewmaker, J., & Nguyen, J. (2017). Impact of mobile technology on student attitudes, engagement, and learning. Computers & Education, 107, 91–99.


  • Homer, B. D., Plass, J. L., & Blake, L. (2008). The effects of video on cognitive load and social presence in multimedia-learning. Computers in Human Behavior, 24(3), 786–797.


  • Hughes, C., Costley, J., & Lange, C. (2019). The effects of multimedia video lectures on extraneous load. Distance Education, 40(1), 54–75.


  • Ibrahim, M., Callaway, R., & Bell, D. (2014). Optimizing instructional video for preservice teachers in an online technology integration course. American Journal of Distance Education, 28(3), 160–169.


  • Jonassen, D. H., Peck, K. L., & Wilson, B. G. (1999). Learning with technology: A constructivist perspective. Columbus: Merrill Prentice Hall.


  • Jones, N., Blackey, H., Fitzgibbon, K., & Chew, E. (2010). Get out of MySpace! Computers & Education, 54(3), 776–782.


  • Jung, Y., & Lee, J. (2018). Learning engagement and persistence in massive open online courses (MOOCS). Computers & Education, 122, 9–22.


  • Kessler, G., & Bikowski, D. (2010). Developing collaborative autonomous learning abilities in computer mediated language learning: Attention to meaning among students in wiki space. Computer Assisted Language Learning, 23(1), 41–58.


  • Lange, C., & Costley, J. (2020). Improving online video lectures: learning challenges created by media. International Journal of Educational Technology in Higher Education, 17, 1–18.


  • Li, N., Kidziński, Ł, Jermann, P., & Dillenbourg, P. (2015). MOOC video interaction patterns: What do they tell us? Design for teaching and learning in a networked world (pp. 197–210). Cham: Springer.


  • Love, B., Hodge, A., Grandgenett, N., & Swift, A. W. (2014). Student learning and perceptions in a flipped linear algebra course. International Journal of Mathematical Education in Science and Technology, 45(3), 317–324.


  • Mayer, R. E. (2001). Multimedia learning. Cambridge and New York: Cambridge University Press.


  • Miller, K., Zyto, S., Karger, D., Yoo, J., & Mazur, E. (2016). Analysis of student engagement in an online annotation system in the context of a flipped introductory physics class. Physical Review Physics Education Research, 12(2), 1–12.


  • Mirriahi, N., Jovanovic, J., Dawson, S., Gašević, D., & Pardo, A. (2018). Identifying engagement patterns with video annotation activities: A case study in professional development. Australasian Journal of Educational Technology, 34(1), 57–72.


  • Mu, X. (2010). Towards effective video annotation: An approach to automatically link notes with video content. Computers & Education, 55(4), 1752–1763.


  • Pardo A, Mirriahi N, Dawson S, Zhao Y, Zhao A, Gašević D (2015) Identifying learning strategies associated with active use of video annotation software. In: Proceedings of the fifth international conference on learning analytics and knowledge (pp 255–259.). New York: ACM.

  • Patton, M. Q. (2002). Qualitative research and evaluation methods (3rd ed.). Thousand Oaks, CA: Sage.


  • Phan, T., McNeil, S. G., & Robin, B. R. (2016). Students’ patterns of engagement and course performance in a Massive Open Online Course. Computers & Education, 95, 36–44.


  • Piolat, A., Olive, T., & Kellogg, R. T. (2005). Cognitive effort during note taking. Applied Cognitive Psychology, 19(3), 291–312.


  • Rashid, T., & Asghar, H. M. (2016). Technology use, self-directed learning, student engagement and academic performance: Examining the interrelations. Computers in Human Behavior, 63, 604–612.


  • Richter, J., Scheiter, K., & Eitel, A. (2015). Signaling text-picture relations in multimedia learning: A comprehensive meta-analysis. Educational Research Review, 17, 19–36.


  • Risko, E. F., Foulsham, T., Dawson, S., & Kingstone, A. (2013). The collaborative lecture annotation system (CLAS): A new TOOL for distributed learning. IEEE Transactions on Learning Technologies, 6(1), 4–13.


  • Ronchetti, M. (2010). Using video lectures to make teaching more interactive. International Journal of Emerging Technologies in Learning (iJET), 5(2), 45–48.


  • Schnotz, W., & Rasch, T. (2005). Enabling, facilitating, and inhibiting effects of animations in multimedia learning: Why reduction of cognitive load can have negative results on learning. Educational Technology Research and Development, 53(3), 47–58.


  • Seidman, I. (2006). Interviewing as qualitative research: A guide for researchers in education and the social sciences (3rd ed.). New York: Teachers College Press.


  • Shen, D., Cho, M. H., Tsai, C. L., & Marra, R. (2013). Unpacking online learning experiences: Online learning self-efficacy and learning satisfaction. The Internet and Higher Education, 19, 10–17.


  • Sheu, F. R., & Shih, M. (2017). Evaluating NTU’s open course ware project with Google analytics: User characteristics, course preferences, and usage patterns. The International Review of Research in Open and Distributed Learning, 18(4), 102–122.


  • Storch, N. (2005). Collaborative writing: Product, process, and students’ reflections. Journal of Second Language Writing, 14(3), 153–173.


  • Topu, F. B., & Goktas, Y. (2019). The effects of guided-unguided learning in 3d virtual environment on students’ engagement and achievement. Computers in Human Behavior, 92, 1–10.


  • Tsai, Y. H., Lin, C. H., Hong, J. C., & Tai, K. H. (2018). The effects of metacognition on online learning interest and continuance to learn with MOOCs. Computers & Education, 121, 18–29.


  • Tsay, C. H. H., Kofinas, A. K., Trivedi, S. K., & Yang, Y. (2020). Overcoming the novelty effect in online gamified learning systems: An empirical evaluation of student engagement and performance. Journal of Computer Assisted Learning, 36(2), 128–146.


  • Zhang, D., Zhou, L., Briggs, R. O., & Nunamaker, J. F., Jr. (2006). Instructional video in e-learning: Assessing the impact of interactive video on learning effectiveness. Information & Management, 43(1), 15–27.



Acknowledgements

The author wishes to thank the students who participated in this study.

Funding

This research was supported by the Ministry of Science and Technology of Taiwan under the grant numbers MOST 108-2511-H-032-001 and MOST 109-2511-H-032-002-MY2.

Author information

Contributions

SST collected, analyzed, and interpreted the data, and was a major contributor in writing the manuscript. SST read and approved the final manuscript.

Corresponding author

Correspondence to Sheng-Shiang Tseng.

Ethics declarations

Competing interests

The author declares no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Tseng, SS. The influence of teacher annotations on student learning engagement and video watching behaviors. Int J Educ Technol High Educ 18, 7 (2021). https://doi.org/10.1186/s41239-021-00242-5
