
Investigating feedback implemented by instructors to support online competency-based learning (CBL): a multiple case study

Abstract

Instructional feedback has the power to enhance learning. However, learners do not always feel satisfied with their feedback experience. At the same time, little attention has been paid to how feedback is implemented in online competency-based learning (CBL). CBL is an approach in which learning activities are organized in a non-linear manner to help learners achieve pre-defined competencies. This study applied a multiple case study method: 17,266 coded references drawn from the feedback texts that instructors gave on three learning tasks in a blended undergraduate course were analyzed. The results showed that instructors implemented 11 types of feedback. Feedback used to give praise is among the least effective types, yet it was frequently used. Regulative feedback and emotional feedback can be very effective, but they were rarely used. Feedback for diagnosis, suggesting improvements, and praise was used frequently and consistently across tasks of differing complexity. In contrast, feedback for complementary teaching and time management, as well as emotional feedback, was rarely used. Based on these findings, potential causes and suggestions for improving feedback implementation are discussed.

Highlights

  • Under competency-based learning, the instructors implemented 11 types of feedback to support students in developing learning competencies.

  • Results indicated that instructors used feedback to facilitate the learning process, which is effective, but a substantial share of the feedback was at the self-level, which is the least effective.

  • The feedback that can help students regulate their learning process and make emotional connections was rarely used by instructors, even though such feedback is potentially very effective in facilitating learning.

  • The patterns of feedback use were relatively consistent across learning tasks with different complexity.

Introduction

With the rapid growth of online learning in higher education (Seaman et al. 2018), concerns such as mixed effects on learning outcomes (Nguyen 2015), poor retention rates (Bawa 2016), and insufficient learner feedback (Sunar et al. 2015) have emerged. To enhance online learning, many universities have adopted competency-based learning (CBL) (Besser and Newby 2019). In CBL, instructors offer learners a variety of supports, including feedback. High-quality feedback, combined with repeated practice, can support competency development (Eppich et al. 2015) and increase interaction between learners and instructors.

However, while feedback is used intensively in online CBL, several studies have reported that learners are not satisfied with the feedback they receive (HEA 2019; Mulliner and Tucker 2017; Williams et al. 2008). Consistently, the literature reports that feedback can have widely differing effects on learning, ranging from strong to weak positive effects (Ivers et al. 2012) to neutral (Fong et al. 2019) or even negative effects (Hu et al. 2016). Feedback is crucial to competency development and is used extensively in online CBL, so the lack of consistently positive effects on learning is a concern. Additionally, feedback in the context of CBL is not well understood (Tekian et al. 2017). It is therefore crucial to investigate how feedback is actually used and how the practice of giving feedback may be improved.

Given the rising adoption of online learning, digital feedback data is more accessible than ever, making it possible to analyze what feedback is implemented and how, and to identify opportunities for improvement. This study examined instructors' practice of providing feedback on learners' submitted assignments in support of online CBL. The findings may help identify and share useful feedback strategies, reveal opportunities for improving this practice, and mitigate learners' dissatisfaction with the feedback they receive.

Literature review

Competency-based learning

Competency-based learning (CBL) is conceptualized as an instructional approach that organizes learning activities in a non-linear manner so that learners can study each component of the instruction to achieve pre-defined competencies (Chang 2006). As one of the central elements of CBL, competency refers to the ability to apply learned knowledge or skills in practical situations (Henri et al. 2017), and each competency has clear learning objectives (Gervais 2016).

Under the competency-based approach, instructors define the competencies to be learned and the associated assessment methods. Learners are required to learn, perform the assigned tasks, and then demonstrate specific knowledge or skills. Instructors then assess learners' completed work to determine whether they have mastered that particular knowledge or those skills (O'Sullivan and Bruce 2014). Grades can be one of the assessment results that reflect the degree to which specific competencies have been mastered (Gervais 2016). Throughout the learning process, learners receive various supports from instructors, such as feedback, hints, and prompts. This study focused specifically on the feedback instructors give to guide learners in working on specific learning tasks and developing specific competencies.

Instructional feedback in CBL

Feedback is conceptualized as information given regarding learning performance, intended to adjust the learner's cognition, motivation, and/or behavior in order to improve performance (Duijnhouwer et al. 2012). It is one of the most powerful factors affecting learning in a variety of instructional environments (Hattie and Timperley 2007; Hattie and Gan 2011). Feedback can not only make learners aware of the gaps between their desired and current knowledge, understanding, or competencies, but also support them in acquiring knowledge and competencies (Narciss 2013) and in regulating their own learning process (Chou and Zou 2020). Nicol and Macfarlane-Dick (2006) also consider feedback a tool that can motivate learners.

Feedback takes two forms: formative and summative. Formative feedback refers to information communicated to learners to adjust their thinking or behavior (Shute 2007). It can support assessment and instruction, help close gaps in learners' understanding, and motivate learners (Brookhart 2008). Summative feedback, on the other hand, is usually provided after summative assessments at the end of a learning module (Harrison et al. 2013). It assesses how well a student ultimately completed a learning task, for grading purposes (White and Weight 2000). Summative feedback is essential for learners to understand the gaps between their performance and the ultimate learning objectives, and what they need to work on further to address their weaknesses (Harrison et al. 2013).

In CBL, feedback is one of the essential elements of mastery learning, as competencies are developed on the basis of the feedback given (Guskey 2007). Learners perceive feedback as important because it intertwines with instruction (Hattie and Timperley 2007) and confirms their understanding and learning. Feedback can also help learners master specific skills, apply what they have learned, develop competencies, extend their thinking, and demonstrate achievements (Besser and Newby 2019). Under CBL, learners receive iterative formative assessments and feedback and are given opportunities to try again when practicing skills (Besser and Newby 2019). Such feedback can reinforce what learners were expected to learn, confirm what they learned well, and point out where they need to invest more effort (Guskey 2007).

Varying effects of feedback and task complexity

Although feedback is crucial for supporting competency development, learners are not always satisfied with the feedback they receive, especially in higher education (Radloff 2010; Mulliner and Tucker 2017). Previous studies have also reported that feedback can have both positive and negative effects on learning (Hattie and Timperley 2007; Kluger and DeNisi 1996). Some types of feedback are powerful because they provide information about tasks and help learners perform more effectively. In contrast, feedback such as praise and rewards is less effective because it carries little useful information (Hattie and Timperley 2007).

The impact of feedback can be influenced by task complexity, which refers to the extent to which a task is easy or difficult to perform (van de Ridder et al. 2015). Task complexity interacts with feedback to affect decision accuracy (Zhai and Gao 2018) and outcomes. For tasks with low complexity, feedback may increase performance and be more effective (Hattie and Timperley 2007; van de Ridder et al. 2015). For complex tasks, feedback tends to deplete the resources needed for task performance (Kluger and DeNisi 1996). Zhai and Gao (2018) further found that the quantity of feedback affects outcomes differently depending on task complexity: for complex tasks, giving some but not too much feedback is helpful, whereas for non-complex tasks, giving more feedback is helpful (Mascha and Smedley 2007).

Research gaps and questions

Although several studies have validated the potential positive effects of feedback, the 2019 National Student Survey conducted in the UK indicated that learners are not always satisfied with the feedback they receive. This might be because the feedback is not specific enough to be helpful or is not delivered in a timely manner (OfS 2019), reflecting poor quality in how feedback is implemented. Examining the practice of feedback implementation can therefore help identify ways to mitigate learners' dissatisfaction. However, few studies have investigated this practice in the context of online CBL. In particular, exploration of the types and effectiveness-related features of the feedback implemented in online CBL is still in its infancy. Recently, Besser and Newby (2019, 2020) identified several types of feedback used in online CBL. For example, they reported that feedback was used to confirm or deny learners' performance, describe the requirements and criteria of learning tasks, foster interaction between instructors and learners, point out gaps between performance and goals, help learners self-regulate their learning, and help learners transfer skills. However, their work did not provide an in-depth analysis of feedback's effectiveness-related features or of how feedback was implemented across different learning tasks, calling for further investigation.

Based on the research gaps described above, this study examined how instructors implemented feedback in online CBL. Such an investigation is worthwhile given the potentially powerful effects of feedback on learning and the learner dissatisfaction reported across studies. Targeting these gaps, this study sought to answer the following research questions.

  • RQ1. What types of feedback are implemented by instructors to support online CBL?

  • RQ2. What are the effectiveness-related features of the feedback implemented by instructors to support online CBL?

  • RQ3. How does feedback implementation vary across learning tasks with different complexity?

Theoretical framework used to examine feedback implementation

Hattie and Timperley (2007) proposed a feedback model that aims to identify the properties and circumstances that make feedback effective for better learning outcomes. This model is appropriate for analyzing the effectiveness of the feedback implemented in CBL, the context of this study, because feedback is usually used intensively in CBL to promote mastery learning (Besser and Newby 2019), and the effectiveness of feedback is particularly critical to the success of mastery learning (Hattie and Timperley 2007). Therefore, this model was used as the analysis framework of this study. According to the model, feedback at the task level indicates how well a task is being accomplished; it is effective when the task information is subsequently useful for processing information or enhancing self-regulation, and it is therefore conditionally effective. Feedback at the process level focuses on the process of completing a task and can be effective by helping learners process information. Feedback at the regulation level focuses on helping learners develop self-evaluation skills or confidence; it can be effective in encouraging learners to continue learning by affecting their self-efficacy, self-regulatory proficiency, and self-beliefs. Finally, feedback at the self level provides a personal evaluation of the learner and is the least effective, since its content is unrelated to the learning task.

Methodology

Study design

An exploratory multiple case study approach (Yin 2017) was used. This approach makes it possible to explore the types and features of the feedback implemented to support learners by analyzing feedback texts. Comparing cases of feedback given for different learning tasks can also show how feedback is used across tasks with different levels of complexity.

Study context and data source

An undergraduate course titled "Introduction to educational technology," offered by a large R1 university in the U.S. during the 2019 spring semester, was chosen as the study context. Fifty learners participated in this course and were divided into four groups, each guided by one instructor. The course used a blended learning approach, which combines in-person and online instructional activities so that the mode of instruction can be flexible (Boelens et al. 2018). It included face-to-face lectures, labs, and online modules delivered via a digital badge-based learning management system (Badge LMS). The Badge LMS was a comprehensive digital learning platform with functions such as delivering learning resources (e.g., filmed presentations, video tutorials, textual reading materials, multimedia courseware), submitting assignments, and receiving feedback. Learners could access the different parts of a learning task and submit their completed assignments. Instructors could then view the submissions and provide feedback via the Badge LMS, which allowed learners to revise and resubmit their assignments accordingly. Once a learner's submitted assignment met the final learning criteria specified by the instructors, the learner received a digital badge.

A competency-based approach was followed in this course. Learners were required to complete a set of learning tasks on the Badge LMS throughout the semester. For each learning task, learners first obtained basic instructions from in-person lectures. They then studied the online tutorials, completed the assigned weekly work, and submitted it via the Badge LMS. The instructor reviewed the submitted work and evaluated it against assessment rubrics. Depending on the quality of the submission, the instructor provided individualized feedback to help learners improve their work until they demonstrated mastery of the competencies associated with the task. Learners were rewarded with a digital badge once their assessment results reached the specified learning criterion. Four lab instructors posted their feedback to learners via the Badge LMS. The feedback texts were downloaded from the Badge LMS as an MS Excel file and served as the raw data.

Case description and sampling method

The Excel file contained feedback for a set of learning tasks (i.e., challenges) designed to develop 11 competencies, which focused on lesson planning and integrating technologies into teaching and learning. These competencies and tasks are detailed in Table 1, and the structure of the dataset is described in Table 2. Tasks in the format of quizzes focused on remembering information, unlike the competency-focused learning tasks in which learners had to complete the given task and demonstrate specific competencies (see Table 1). Since single- and multiple-choice quizzes were graded automatically by the platform (i.e., limited feedback was provided), they were excluded from this study. As a result, 36 regular tasks remained, each of which was viewed as a case. Given the large quantity of feedback text to analyze, we selected and analyzed specific cases that together represented the whole set of learning tasks.

Table 1 The complexity of learning tasks associated with competencies
Table 2 The structure of the raw dataset

A critical case sampling method was used, that is, a process of selecting a small number of critical cases that can provide rich information and have the greatest impact on discovering knowledge (Patton 2015). This method allows cases with particular features of a population of interest to be selected with limited resources (Patton 2015). In applying this method, we considered the complexity and nature of the learning tasks, which influence how difficult it is to develop the associated competencies. Task complexity was operationalized as the maximum points assigned to each task in the syllabus: 2, 3, 4, 5, 7, 10, or 30 points. The task complexity in terms of the assigned points is listed in Table 1.

The learning tasks fell into three broad types based on their complexity. The first type was relatively simple and required relatively little effort to complete; tasks worth 2, 3, and 4 points fell into this category. The second type had an intermediate level of complexity and included tasks worth 5 and 7 points. The third type was the most complex, requiring learners to spend considerable effort; it included tasks worth 10 and 30 points. The core competency in the course was to design an online learning module. In this study, we selected tasks that were crucial steps in the process of course design (the course objective) and that generated rich feedback data.
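As a concrete illustration, the point-to-complexity grouping described above can be expressed as a small function. This is a minimal sketch: the function name and structure are our own, with only the point values taken from the syllabus categories in Table 1.

```python
# Illustrative sketch of the task-complexity grouping described above.
# The point thresholds come from the three syllabus categories; the
# function itself is an assumption for demonstration, not the study's code.

def complexity_level(points: int) -> str:
    """Map a task's maximum assigned points to a complexity category."""
    if points in (2, 3, 4):
        return "low"           # simple tasks needing relatively little effort
    if points in (5, 7):
        return "intermediate"  # moderate effort
    if points in (10, 30):
        return "high"          # tasks needing substantial effort
    raise ValueError(f"unexpected point value: {points}")

# The three selected critical cases and their syllabus points:
for task, pts in [("Defining Performance", 3),
                  ("Creating your video", 7),
                  ("IM drafting and feedback", 10)]:
    print(f"{task}: {complexity_level(pts)}")
```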

Within the category of low-complexity tasks, the one titled "Defining Performance (3 points)" was selected because it is the first task on writing learning objectives and one of the core components of the early design process. To complete it, learners needed to write the performance component of a learning objective. Within the second category, of intermediate complexity, we chose "Creating your video (7 points)" because past teaching experience indicated that this task was vital for course content development; learners received a great deal of feedback and made revisions to complete the work accordingly. In this task, learners needed to use video production tools to produce a video on an instructional theme. Within the third category, of high complexity, we selected "IM drafting and feedback (10 points)" because it was the last task and significantly shaped the learners' final product based on the feedback given. This task required learners to create a lesson plan for an online learning module.

In sum, three tasks, "Defining Performance," "Creating your video," and "IM drafting and feedback," were selected as the three cases for analysis. These learning tasks represented critical cases in terms of task complexity and of the vital steps toward the primary goal of the course. All 1551 pieces of feedback text associated with these tasks (884 for "Defining Performance," 307 for "Creating your video," and 360 for "IM drafting and feedback") were coded, generating 17,266 coded references for analysis.

Data analysis

The sample dataset was split into three files, one per chosen learning task, and these files were imported into QSR NVivo 12. Coding was conducted at the thematic level: each feedback message was broken down into meaningful chunks. Two experienced coders coded the data simultaneously, applying a constant comparison analysis technique with three consecutive phases. First, the feedback text was coded into small chunks, each of which was given one or more descriptors (codes) indicating the basic feature of the feedback. Second, the generated codes were grouped into 11 clusters based on the similarity between the types of feedback. Third, the clusters were further integrated so that the effectiveness-related features of feedback were summarized into five levels (see the coding schema in Appendix 1). Two researchers used a mixed card sorting method (Wood and Wood 2008) to classify the feedback clusters according to Hattie and Timperley's (2007) feedback model. Through discussion, the coders reached full agreement (100%) on the feedback classification.

The quantity of each type of feedback was calculated as the total number of coded references. Qualitative and quantitative comparisons of the implemented feedback across the three tasks were then conducted on the types, features, and quantity of the feedback.
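To make this quantification step concrete, the sketch below tallies coded references per feedback type and per task. It assumes the coding can be exported as (task, feedback type) rows; the data layout and sample rows are illustrative assumptions, not the study's actual NVivo export.

```python
# Illustrative sketch of counting coded references, assuming the coding
# is available as (task, feedback_type) rows. The sample rows are made up.
from collections import Counter

coded_refs = [
    ("Defining Performance", "F1"),
    ("Defining Performance", "F6"),
    ("Creating your video", "F3"),
    # ... one row per coded reference (17,266 rows in the full dataset)
]

# Quantity of each feedback type overall, and per (task, type) pair
totals_by_type = Counter(ftype for _, ftype in coded_refs)
totals_by_task_and_type = Counter(coded_refs)

print(totals_by_type)           # overall counts per feedback type
print(totals_by_task_and_type)  # supports the per-task comparisons below
```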

Findings

For each of the three selected learning tasks (Task), Table 3 summarizes the level of task complexity (Complexity), the total number of times learners submitted responses (submission number), the total number of times instructors provided feedback (feedback number), the ratio of the quantity of feedback to the quantity of submissions (Ratio), and the quantity of coded references from the feedback texts (CR). In total, 1551 pieces of feedback, provided in response to 1830 submissions, were coded, generating 17,266 coded references for analysis. Among the coded feedback, 1341 pieces were formative feedback (CR_Form) and 517 pieces were summative feedback (CR_Sum).

Table 3 The basic statistics of the feedback texts

What types of feedback are implemented by instructors to support online CBL?

The results indicated that 11 types of feedback were implemented (see Fig. 1). Each type of feedback is described below.

Fig. 1. 11 types of feedback used by instructors under online competency-based learning in this study

Diagnostic feedback (F1)

When learners' submissions did not reach the learning criterion specified by the instructors, diagnostic feedback was used to provide assessment results or a diagnosis and to illustrate the gaps between current and desired performance. For instance: "… right now, the points for the current cards will be 9.5/10." Such gaps were further explained by pointing out problems identified in the submitted work. Examples of these problems are given in Table 4.

Table 4 Types of problems identified in diagnostic feedback

Feedback for justification (F2)

This type of feedback was used to explain the instructors' judgments. For example, telling learners why some submissions were judged as erroneous, "this lesson (you designed) sounds really good for a face-to-face class, but this should be purely an online interactive module." Some instructors also explained why a specific requirement was provided, such as "can you give me only one theme-related statement? The reason I ask is because some of these statements are good, while others need some work…" Instructors' personal understanding of the submission was also shared in the feedback before giving a suggestion, "I think your performance statement includes two domains, comprehension, and analysis. Am I correct?" Then, this instructor suggested, "… consider breaking your performance statement into two objectives for your learners."

Feedback for improvement (F3)

This type of feedback provided suggestions to help learners improve their work. These suggestions included correct examples demonstrating how the work should be done, negative examples to be avoided, specific steps guiding learners to complete the work, and directions to learning resources. Indirect suggestions were also provided, such as cueing and recommending more effective information searches. Feedback was also used to guide learners to review the task requirements so as to orient their efforts in the right direction. Quoted examples of suggestions are presented in Table 5.

Table 5 Types of suggestions for improving the work

Feedback as complementary teaching (F4)

Instructors used this type of feedback to explain key concepts and clarify learning tasks. This can help learners to complete the learning tasks. For example, "For a performance statement, it's what you want your learners to do by the end of the lesson." In some of this type of feedback, directions were also provided to learners to apply the learned skills. For example, "this is an excellent video … For the website [an upcoming learning task], if you want to use this, you could clip a part of the video."

Motivational feedback (F5)

This type of feedback was used to motivate learners. Sometimes instructors directly encouraged learners to work; at other times they motivated learners indirectly, for example, by highlighting the value of the learning tasks, providing normative reference information about the assessment results, showing positive expectations toward learners' upcoming work, or clarifying the goals of tasks. Quoted examples of the ways of motivating learners are presented in Table 6.

Table 6 Types of ways to motivate learners

Feedback as praise (F6)

This type of feedback was used as praise, which provides little useful information related to specific learning tasks, for example, "Good," "perfect," or "well done." Although this type of feedback is not viewed as effective, it was frequently used in this course.

Feedback for enhancing time management (F7)

This type of feedback was used to help learners with time management, for example, prompting timing issues related to submission: "Because this is so late getting submitted, I can't accept it. If you had gotten these in last week when you said you were going to, I could have made an exception, but there are always going to be deadlines in life, so it's important to try and stick with them as much as possible." Instructors also used this type of feedback to push learners to submit their work as soon as possible, "Make sure you get this badge finished by midnight tonight…" or to remind learners that they still had time to improve their work further, "thanks for the early submission… However, it is not the required thing. We will talk more about this later this semester!"

Connective feedback (F8)

Instructors made diverse connections between different learning tasks and instructions through feedback. For example, connecting current tasks with upcoming tasks, connecting current tasks with the knowledge and skills learned previously, and connecting current learning to the future scenarios where the learned skills can be applied. The identified connections built via feedback are provided in Table 7.

Table 7 Types of connections built using feedback

Feedback to encourage the use of feedback (F9)

Instructors used this type of feedback to encourage learners to use feedback, for example, by explaining the feedback: "sorry that my earlier feedback was not very clear. What I hoped to emphasize is the potential for video to 'show' learners about something …" This feedback could also encourage learners to make use of feedback and iteratively improve their work, for example, "we can go back and forth (revise, resubmit, evaluate, and give feedback) a couple times until it's perfect!" Finally, feedback was used to direct learners to view attached feedback in the form of documents or images, for example, "I indicated this missing part in the attached image."

Feedback to foster communication and help-seeking (F10)

This type of feedback was used to encourage learners to communicate with instructors and actively seek help. For example, "Email me [the instructor's email] directly if you have any questions…" or "…if you want to talk about it more in [the] lab on Wednesday, we can certainly do that. And if you have any other questions or concerns, please let me know." By using these methods, instructors explicitly reminded learners to seek help actively.

Emotional feedback (F11)

Instructors also used emotional feedback to express different emotions and thus strengthen the connections between learners and instructors. The emotions included appreciation of what the learners had done, sympathy, and humor to help learners mitigate anxiety; instructors also used emojis to express positive feelings, as shown in Table 8.

Table 8 Ways of expressing emotions in the feedback

What are the effectiveness-related features of the feedback implemented by instructors to support online CBL?

The effectiveness-related features of the feedback were identified by classifying the types of feedback into different levels according to Hattie and Timperley's (2007) feedback model. Based on the total number of coded references of each type, approximately 36% of all the implemented feedback, comprising feedback for improvement (F3), feedback for complementary teaching (F4), connective feedback (F8), and feedback to encourage the use of feedback (F9), worked at the process level and focused on facilitating the process of completing learning tasks. This largest cluster of feedback was effective in helping learners process task information.

The second-largest cluster, about 29% of all the implemented feedback, included feedback for diagnosis (F1) and justification (F2) of the given evaluation. Working at the task level, this cluster was conditionally effective: its power depends on whether the subsequent task information is useful for improving strategy processing or enhancing self-regulation.

The third-largest group, about 28% of all the implemented feedback, worked at the self level and consisted mainly of feedback as praise (F6). These feedback messages did not provide useful information related to the learning tasks and were therefore viewed as the least effective.

About 7% of the implemented feedback focused on enhancing learners' self-regulation, including feedback to enhance motivation (F5) and time management (F7) and to foster communication and help-seeking (F10). Finally, the instructors spent minimal effort on emotional feedback (F11), which accounted for less than 1% of all the implemented feedback. Although feedback at the regulation and emotional levels can be very effective, the instructors made little use of it in this course.
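The level classification above can be summarized computationally. The sketch below rolls the 11 feedback types up to the levels of Hattie and Timperley's (2007) model using the mapping from Appendix 1; the example counts passed to the function are made up for illustration.

```python
# Illustrative sketch: aggregate per-type coded-reference counts into
# per-level shares. The type-to-level mapping follows Appendix 1; the
# example counts are hypothetical, not the study's data.
from collections import defaultdict

LEVEL_OF = {
    "F1": "task", "F2": "task",
    "F3": "process", "F4": "process", "F8": "process", "F9": "process",
    "F5": "regulation", "F7": "regulation", "F10": "regulation",
    "F6": "self",
    "F11": "emotional",
}

def share_by_level(counts: dict) -> dict:
    """Aggregate coded-reference counts per feedback type into per-level shares."""
    total = sum(counts.values())
    shares = defaultdict(float)
    for ftype, n in counts.items():
        shares[LEVEL_OF[ftype]] += n / total
    return dict(shares)

# Hypothetical counts; with the study's data the shares are approximately
# process 36%, task 29%, self 28%, regulation 7%, emotional <1%.
print(share_by_level({"F1": 290, "F3": 360, "F6": 280, "F5": 70, "F11": 5}))
```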

How does feedback implementation vary across learning tasks with different levels of complexity?

For each learning task, the quantity of coded references (QCR) for each type of feedback was calculated and organized in Table 9. We also calculated the ratio (type-all-ratio) of the coded references of each type of feedback to the summed coded references of all types. For each learning task, the three most used types of feedback are marked with "↑" and the three least used types with "↓" in Table 9; a computational sketch of this calculation follows the table.

Table 9 The quantity and percentage of each type of feedback grouped by learning tasks
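A minimal sketch of the type-all-ratio and the top/bottom-three markers used in Table 9 is given below; the counts passed to the function are hypothetical, not values from the study.

```python
# Illustrative sketch of the per-task comparison: the type-all-ratio for
# each feedback type and the "↑"/"↓" markers used in Table 9.

def mark_usage(qcr_by_type: dict) -> dict:
    """For one learning task, return 'ratio marker' strings per feedback type."""
    total = sum(qcr_by_type.values())
    ranked = sorted(qcr_by_type, key=qcr_by_type.get, reverse=True)
    result = {}
    for ftype, qcr in qcr_by_type.items():
        ratio = qcr / total  # type-all-ratio
        marker = "↑" if ftype in ranked[:3] else ("↓" if ftype in ranked[-3:] else "")
        result[ftype] = f"{ratio:.1%} {marker}".strip()
    return result

# Hypothetical QCR values for one task, keyed by feedback type:
print(mark_usage({"F1": 120, "F3": 95, "F6": 80, "F2": 40,
                  "F4": 6, "F7": 3, "F11": 1}))
```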

Consistent patterns of feedback use were observed across the learning tasks. Feedback for diagnosis (F1), improvement suggestions (F3), and praise (F6) were used frequently and consistently across all three learning tasks, which had low, medium, and high levels of complexity respectively. In contrast, feedback for complementary teaching (F4), time management (F7), and emotional feedback (F11) were used considerably less across the three tasks.

Discussion and recommendations

This study aimed to examine the practice of feedback implementation in online CBL. The results indicated that the instructors implemented 11 types of feedback, showing that feedback can be used to facilitate online CBL along a variety of dimensions and help close learning achievement gaps (Guskey 2007). This finding is consistent with Besser and Newby's (2020) report that instructors used different types of feedback to support online CBL, such as feedback focused on outcomes, motivation, interaction, clarification, extension, closing learning gaps, learning transfer, and regulation. Such diverse feedback can help mitigate learner-perceived dissatisfaction with the feedback experience, since learners prefer receiving all types of feedback (Besser and Newby 2020).

Some types of feedback rarely mentioned in the previous literature were observed in this study, including feedback connecting current learning tasks with other learning tasks, instructional modules, and assessment (F8), and feedback guiding learners to use feedback (F9). Connective feedback may have been implemented because the dominant instructional strategy in this course was CBL, which required learners to complete a set of interrelated learning tasks. In a typical CBL model, learning is structured horizontally to integrate the learned content across the curriculum and vertically to help learners master the course content in depth (Gervais 2016). Making broad connections among learning tasks, instruction, and assessments via feedback thus supports meaningful learning. Connective feedback can also help learners navigate the blended learning format by linking current learning tasks to the different instructional modules (e.g., the online module and the face-to-face module) and by helping learners make sense of assessment results.

Feedback that encourages learners to use feedback may also be a distinctive feature of online CBL. CBL promotes persistence, encouraging learners to keep trying until they succeed (Bloom 1980). Instructors may therefore give several iterations of feedback and supervise learners in making use of prior assessments and feedback to facilitate their learning. The online learning context might be another reason for this type of feedback: online learning environments are largely self-driven, and learners who are not comfortable with self-directed learning can find them overwhelming (Bawa 2016). Instructors therefore tried to help learners by highlighting essential elements, such as reminding them to use important feedback.

Feedback at the process level can be effective, and it was indeed used extensively in this course. However, self-level feedback (i.e., praise), which is viewed as the least effective (Hattie and Timperley 2007), was also implemented extensively. Praise can have a mixed pattern of effects on learning. For example, Cimpian et al. (2007) found that praise worded in terms of the person (e.g., "you are a good drawer") can cause learners to denigrate their skills, feel unhappy, avoid their mistakes, and quit the task, whereas learners who were told that they did a good job tended to use strategies to correct their mistakes and persist with the task. Online instructors must therefore be cautious when using feedback for praise. Feedback for diagnosis and justification was frequently used, but such feedback is only conditionally effective; determining its effectiveness requires further investigation linking this feedback to the task information provided subsequently. Online instructors should be aware that giving such feedback alone is not enough; more effort should be made to connect it with subsequent task information.

Finally, instructors gave little feedback for enhancing self-regulation and little emotional feedback. These types of feedback can nevertheless be very beneficial, since they help mitigate issues associated with the online context, such as learner-perceived isolation and instructors' limited understanding of learners (Bawa 2016; Hung and Chou 2015). The limited amount of such feedback may reflect the heavy workload of grading and delivering personalized feedback to a large group of learners, or simply a lack of awareness of its value. Given its potential benefits, it may be helpful to increase instructors' awareness of the value of regulative and emotional feedback so that they apply it more often.

The case comparisons further indicate consistent patterns in the types of feedback used. Feedback for diagnosis (F1), suggestions for improvement (F3), and praise (F6) were used consistently and frequently. This may be because a comprehensive assessment system is an essential feature of CBL (Gervais 2016), and diagnostic feedback (F1) and feedback for improvement (F3) align well with this feature by providing detailed assessment information. In contrast, feedback for complementary teaching (F4) and emotional feedback (F11) were consistently less used across the learning tasks. Previous studies have indicated, however, that feedback has differing effects for tasks of different complexity (Kluger and DeNisi 1996; Zhai and Gao 2018), suggesting that feedback should be tailored to task complexity; yet we did not observe notable differences in feedback use across tasks. The difference between this study's findings and previous studies might be explained by the competency-based learning approach or by instructors not having enough time to personalize their feedback. We suggest that online instructors consider using feedback in a more customized way for tasks of different complexity, since information processing differs across such tasks (Zandvakili et al. 2018). For example, for complex tasks instructors could provide more regulative feedback to help learners manage the process, while controlling the overall quantity of feedback given.

Conclusions, implications, and limitations

This study investigated the practice of feedback implementation in online CBL. We found that instructors used eleven types of feedback to support online learners. The effectiveness-related features and the quantity of the implemented feedback were identified, which revealed strengths and limitations in the practice of implementing feedback and informed suggestions for improvement. The case comparison further highlighted the most basic feedback needed in online CBL. While regulative feedback and emotional feedback can be very effective, they were rarely used. Feedback should also be used in a customized way, but this was not the case in this study, which calls for more attention.

The findings of this study can enhance competency-based learning (CBL) by providing suggestions and recommendations for instructors about how to present feedback and which types of feedback to provide. This can lead to better learning experiences and outcomes, and mitigate learners' dissatisfaction with the feedback they receive. Specifically, this study suggests that the effectiveness-related features of feedback must be considered when providing feedback. In addition to feedback at the task level, feedback at the regulation and emotional levels should also be used, since it can be very effective. Moreover, the nature and complexity of learning tasks are critical for feedback implementation. Instructors should therefore provide customized feedback based on a comprehensive consideration of learning task attributes and real-time learning performance. For example, for learning tasks with high complexity, it is necessary to control the quantity of feedback delivered when learning performance is low, to avoid potential cognitive overload.

Building on this qualitative coding work, the processed feedback texts with feature labels can be used for text classification and feature identification, enabling feedback strategies to be mined from large-scale feedback datasets in the future. Such future work may help mitigate the limitation that only a small number of cases were investigated in this study.

Appendix 1. Coding schema

Each cluster is listed under its feedback level; entries give the initial descriptor (code) followed by its explanation.

Task-level

Diagnostic feedback (F1)

  • Gaps: Indicate the gaps between current performance and the desired performance
  • Confirm_Good: Confirm students' good work
  • Concern: Show concerns about the work
  • Misunderstanding: Point out misunderstandings students have about the tasks
  • Error: Point out erroneous elements under specific conditions
  • Missing: Point out missing parts or components
  • Unnecessary: Point out unnecessary parts
  • Technical_issue: Point out technical issues that impeded evaluation
  • Revision: Point out the parts that need revision
  • Confusing: Clarify confusing work

Feedback for justification (F2)

  • Reasons: Provide reasons for the judgment or requirements
  • Explain_Err: Explain why the response is erroneous
  • Personal_understanding: Share the instructor's personal understanding of specific learning tasks

Process-level

Feedback for improvement (F3)

  • Correct_Err: Correct the erroneous elements
  • Suggest_improve: Provide suggestions about how to improve the work
  • Tools_improve: Suggest tools for students to improve the work
  • Evaluate_suggest: Evaluate suggestions
  • Task_requirement: Paraphrase or directly show the task requirements
  • Correct_example: Provide correct examples or demonstrations
  • Negative_example: Provide a negative example that should be avoided
  • Steps: Show students the next steps to finish the work
  • Principle: Prompt the principles
  • Resources: Direct students to find or use supportive learning resources
  • Cueing: Cue and lead to more effective information search
  • Example_improvement: Provide examples of how to improve the work further

Feedback for teaching (F4)

  • Explain: Explain the key terms, key concepts, or critical parts of the learning tasks
  • Reinforce: Reinforce the key concepts learned
  • Apply: Direct students to apply the learned skills

Feedback promoting feedback (F9)

  • Promote_FB: Encourage the use of feedback and revision
  • Explain_FB: Explain the feedback
  • FB_attach: Provide feedback in an attached document
  • FB_image: Provide feedback in an image
  • Other_support: Offer other types of support

Connective feedback (F8)

  • Connect_other_task: Connect current learning tasks to other learning tasks
  • Connect_previous: Connect feedback to previously learned content
  • Connect_future_work: Connect feedback to future work
  • Connect_future_instruct: Connect feedback to future instruction
  • Connect_assess: Connect feedback with assessments

Regulation level

Motivational feedback (F5)

  • Encourage_try: Encourage students to make additional tries
  • Further_learning: Promote further learning for additional growth
  • Keep_up: Encourage students to keep up the good work
  • Stimulate_thinking: Stimulate students to think more about the work
  • Questioning: Question students' work
  • Purpose: Describe the purpose or goal of the learning tasks
  • Value: Highlight the value of the task
  • Normative_reference: Provide normative reference information
  • Prompting: Prompt students to take action to move forward
  • Self_efficacy: Enhance self-efficacy
  • Expectation: Show expectations of students' progress

Feedback for time management (F7)

  • Timing: Prompt submission timing issues

Feedback fostering communication and help-seeking (F10)

  • Communication: Encourage communication
  • Help_seeking: Promote help-seeking
  • Other_support: Offer other types of support

Self-level

Feedback as praise (F6)

  • Appreciation: Show appreciation of students' work
  • Self_positive: Show positive personal affection toward students

Emotional level

Emotional feedback (F11)

  • Emoji: Use emojis
  • Emotional_connect: Make emotional connections
  • Tolerance: Show tolerance
  • Sympathy: Show sympathy
  • Humor: Show humor to help students mitigate anxiety

Availability of data and materials

The datasets used during the current study are available from the corresponding author on reasonable request.

References

  • Bawa, P. (2016). Retention in online courses: Exploring issues and solutions. SAGE Open, 6(1), 1–11.

  • Besser, E. D., & Newby, T. J. (2019). Exploring the role of feedback and its impact within a digital badge system from student perspectives. Tech Trends, 63(4), 485–495.

  • Besser, E. D., & Newby, T. J. (2020). Feedback in a digital badge learning experience: Considering the instructor’s perspective. Tech Trends, 64, 484–549.

  • Bloom, B. S. (1980). All our children learning. New York: McGraw-Hill.

  • Boelens, R., Voet, M., & Bram, D. W. (2018). The design of blended learning in response to student diversity in higher education: Instructors’ views and use of differentiated instruction in blended learning. Computers and Education, 120, 197–212.

  • Brookhart, S. (2008). How to give effective feedback to your students. Alexandria: ASCD.

  • Chang, C. (2006). Development of competency-based web learning material and effect evaluation of self-directed learning aptitudes on learning achievements. Interactive Learning Environments, 14(3), 265–286.

  • Chou, C. Y., & Zou, N. B. (2020). An analysis of internal and external feedback in self-regulated learning activities mediated by self-regulated learning tools and open learner models. Int J Educ Technol High Educ, 17(55), 1–27. https://doi.org/10.1186/s41239-020-00233-y.

  • Cimpian, A., Arce, H. C., Markman, E. M., & Dweck, C. S. (2007). Subtle linguistic cues affect children’s motivation. Psychological Science, 18, 314–316. https://doi.org/10.1111/j.1467-9280.2007.01896.x.

  • Duijnhouwer, H., Prins, F. J., & Stokking, K. M. (2012). Feedback providing improvement strategies and reflection on feedback use: Effects on students’ writing motivation, process, and performance. Learning and Instruction, 22(3), 171–184. https://doi.org/10.1016/j.learninstruc.2011.10.003.

  • Eppich, W. J., Hunt, E. A., Duval-Arnould, J. M., Siddall, V. J., & Cheng, A. (2015). Structuring feedback and debriefing to achieve mastery learning goals. Academic Medicine, 90(11), 1501–1508.

  • Fong, C. J., Patall, E. A., Vasquez, A. C., & Stautberg, S. (2019). A meta-analysis of negative feedback on intrinsic motivation. Educational Psychology Review, 31, 121–162.

  • Gervais, J. (2016). The operational definition of competency-based education. The Journal of Competency-Based Education, 1(2), 98–106.

  • Guskey, T. R. (2007). Closing achievement gaps: Revisiting Benjamin S. Bloom’s, “Learning for Mastery.” Journal of Advanced Academics, 19(1), 8–31.

  • Harrison, C. J., Könings, K. D., Molyneux, A., Schuwirth, L. T., Wass, V., & Van der Vleuten, C. M. (2013). Web-based feedback after summative assessment: How do students engage? Medical Education, 47(7), 734–744.

  • Hattie, J. A., & Gan, M. (2011). Instruction based on Feedback. In R. Mayer & P. Alexander (Eds.), Handbook of research on learning and instruction (pp. 249–271). New York: Routledge.

  • Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.

  • HEA. (2019). HEA feedback toolkit. York: Higher Education Academy.

  • Henri, M., Johnson, M. D., & Nepal, B. (2017). A review of competency-based learning: Tools, assessments, and recommendations. Journal of Engineering Education, 106(4), 607–638.

  • Hu, X., Chen, Y., & Tian, B. (2016). Feeling better about self after receiving negative feedback: When the sense that ability can be improved is activated. The Journal of Psychology, 150(1), 72–87. https://doi.org/10.1080/00223980.2015.1004299.

  • Hung, M., & Chou, C. (2015). Students’ perceptions of instructors’ roles in blended and online learning environments: A comparative study. Computers & Education, 81, 315–325.

  • Ivers, N., Jamtvedt, G., Flottorp, S., Young, J. M., Odgaard-Jensen, J., French, S. D., O'Brien, M. A., Johansen, M., Grimshaw, J., & Oxman, A. D. (2012). Audit and feedback: Effects on professional practice and healthcare outcomes. The Cochrane Database of Systematic Reviews, (6), CD000259.

  • Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254.

  • Mascha, M. F., & Smedley, G. (2007). Can computerized decision aids do “damage”? A case for tailoring feedback and task complexity based on task experience. International Journal of Accounting Information Systems, 8(2), 73–91.

  • Mulliner, E., & Tucker, M. (2017). Feedback on feedback practice: Perceptions of students and academics. Assessment & Evaluation in Higher Education, 42(2), 266–288.

  • Narciss, S. (2013). Designing and evaluating tutoring feedback strategies for digital learning environments on the basis of the interactive tutoring feedback model. Digital Education Review, 23(1), 7–26.

  • Nguyen, T. (2015). The effectiveness of online learning: Beyond no significant difference and future horizons. Journal of Online Learning and Teaching, 11(2), 309–319.

  • Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218.

  • OfS. (2019). National Student Survey. Retrieved from https://www.officeforstudents.org.uk/advice-and-guidance/student-information-and-data/national-student-survey-nss/.

  • O'Sullivan, N., & Bruce, A. (2014). Competency-based education: Learning at a time of change. Proceedings of "European/National Initiatives to Foster Competency-Based Teaching and Learning" Europe Conference, 37–44.

  • Patton, M. Q. (2015). Qualitative research & evaluation methods: Integrating theory and practice. Thousand Oaks: SAGE.

  • Radloff, A. (2010). Doing more for learning: Enhancing engagement and outcomes: Australasian student engagement report. Camberwell: Australian Council for Educational Research.

  • Seaman, J., Allen, I. E., & Seaman, J. (2018). Grade increase: Tracking distance education in the United States. Wellesley: Babson Survey Research Group.

  • Shute, V. (2007). Focus on formative feedback. ETS Research Report. http://www.ets.org/research/contact.html.

  • Sunar, A. S., Abdullah, N. A., White, S., & Davis, H. (2015). Personalisation of MOOCs: The state of the art. Proceedings of the 7th International Conference on Computer Supported Education (pp. 88–97), Lisbon, Portugal.

  • Tekian, A., Watling, C. J., Roberts, T. E., Steinert, Y., & Norcini, J. (2017). Qualitative and quantitative feedback in the context of competency-based education. Medical Teacher, 39(12), 1245–1249.

  • van de Ridder, J. M., McGaghie, W. C., Stokking, K. M., & ten Cate, O. T. (2015). Variables that affect the process and outcome of feedback, relevant for medical training: A meta-review. Medical Education, 49(7), 658–673.

  • White, K. W., & Weight, B. H. (2000). The online teaching guide: A handbook of attitudes, strategies and techniques for the virtual classroom. Boston: Allyn and Bacon.

  • Williams, J., Kane, D., & Sagu, S. (2008). Exploring the national student survey: Assessment and feedback issues. York: Higher Education Academy.

  • Wood, J. R., & Wood, L. E. (2008). Card sorting: Current practices and beyond. Journal of Usability Studies, 4(1), 1–6.

  • Yin, R. K. (2017). Case study research and applications: Design and methods. New York: Sage Publications.

  • Zandvakili, E., Washington, E., Gordon, E., & Wells, C. (2018). Mastery learning in the classroom: Concept maps, critical thinking, collaborative assessment (M3CA) using multiple choice items (MCIs). Journal of Education and Learning, 7(6), 45–56.

  • Zhai, K., & Gao, X. (2018). Effects of corrective feedback on EFL speaking task complexity in China's university classroom. Cogent Education, 5(1).

Acknowledgements

Not applicable.

Funding

There was no funding that supported this study.

Author information

Contributions

HW made substantial contributions to the design of the work, the acquisition, analysis, and the interpretation of data, and drafted the work and revised it substantively. AT made substantial contributions to the analysis, interpretation of data, and revised it substantively. JL made substantial contributions to the design of the work, and the interpretation of data, and substantively revised it. HL and RH made substantial contributions to the interpretation of data, and substantively revised it. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Huanhuan Wang.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Wang, H., Tlili, A., Lehman, J.D. et al. Investigating feedback implemented by instructors to support online competency-based learning (CBL): a multiple case study. Int J Educ Technol High Educ 18, 5 (2021). https://doi.org/10.1186/s41239-021-00241-6
