
An analysis of internal and external feedback in self-regulated learning activities mediated by self-regulated learning tools and open learner models

Abstract

In self-regulated learning (SRL), students organize, monitor, direct, and regulate their learning. In SRL, monitoring plays a critical role in generating internal feedback and thus adopting appropriate regulations. However, students may have poor SRL processes and performance due to poor monitoring. Researchers have suggested providing external feedback to facilitate better student SRL. However, SRL involves many meta-cognitive internal processes that are hidden and difficult to observe and measure. This study proposed an SRL model to illustrate the relationship among external SRL tools, internal SRL processes, internal feedback, and external feedback. Based on the model, this study designed a system with SRL tools and open learner models (OLMs) to assist students in conducting SRL, including self-assessing their initial learning performance (i.e., perceived initial performance and monitoring of learning performance) after listening to a teacher’s lecture, being assessed by and receiving external feedback from the OLM (i.e., actual performance) in the system, setting target goals (i.e., desired performance) of follow-up learning, conducting follow-up learning (i.e., strategy implementation), and evaluating their follow-up learning performance (i.e., perceived outcome performance and strategy outcome monitoring). These SRL tools also externalize students’ internal SRL processes and feedback, including perceived initial, desired, and perceived outcome performances, for investigation. In addition, this study explores the impact of external feedback from the OLM on students’ internal SRL processes and feedback. An evaluation was conducted to record and analyze students’ SRL processes and performance, and a questionnaire was administered to ask students about their SRL processes. There are three main findings. First, the results showed that students often have poor internal SRL processes and poor internal feedback, including poor self-assessment, inappropriate target goals, a failure to conduct follow-up learning, and a failure to achieve their goals. Second, the results revealed that the SRL tools and external feedback from the OLM assisted most students in SRL, including monitoring their learning performance, goal-setting, strategy implementation and monitoring, and strategy outcome monitoring. Third, some students still required further support for SRL.

Introduction

SRL and feedback

In self-regulated learning (SRL), students organize, monitor, direct, and regulate their learning. SRL involves many meta-cognitive strategies, processes, and skills, such as self-assessment, goal-setting, strategic planning, monitoring, and help-seeking (Lee et al. 2019; Panadero 2017; Winne 2011; Zimmerman 1990, 2001, 2002; Zimmerman and Schunk 1989). Figure 1 displays the SRL model proposed by Butler and Winne (1995) and re-interpreted by Nicol and Macfarlane-Dick (2006). In the model, SRL involves a self-oriented feedback cycle in which students set goals and strategies; monitor their goals, strategies, and performance; and generate internal feedback to regulate their knowledge, beliefs, goals, and strategies. A student is a cognitive system in which an academic task, set by a teacher or the student, triggers the SRL. Based on the student’s domain knowledge, strategy knowledge, and motivational beliefs, the student interprets the meaning and requirements of the task, sets goals for the desired performance, chooses and applies learning tactics and strategies to reach the goals, evaluates the outcome learning performance (i.e., perceived performance) after applying the tactics and strategies, and generates internal feedback to conduct appropriate regulation (see parts 1 to 5 in Fig. 1) (Nicol and Macfarlane-Dick 2006). Students may have individual differences in domain knowledge, strategy knowledge, and motivational beliefs that affect SRL behaviors and performance (Musso et al. 2019; Winne 1996). For example, students need domain knowledge to correctly judge the difficulty of a task and the effort needed to learn it; some students may lack confidence or motivation and thus set a goal of merely passing the exam; and students may lack knowledge of how to choose, apply, and regulate tactics and strategies, leading to poor SRL behaviors. Studies have found that high-achieving students have good SRL behaviors, frequently apply SRL strategies, and develop good SRL processes, whereas low-achieving students have poor SRL behaviors, rarely apply SRL strategies, and develop poor SRL processes (Burns et al. 2018; Nota et al. 2004; Winne 1996; Zimmerman and Schunk 1989).

Fig. 1 An SRL and feedback model

The monitoring of tactics and strategies and of the discrepancies between the goals and the outcome plays an important role in SRL (Griffin et al. 2013; Winne 1996). Monitoring is a metacognitive process that generates internal feedback for the student to determine whether the regulation of knowledge, beliefs, goals, tactics, and strategies is required (see Fig. 1). When a student identifies a discrepancy between the goals and the outcome through monitoring, the student may generate internal feedback (i.e., self-regulating metacognitive control or actions) to reduce the discrepancy (Butler and Winne 1995; Hattie and Timperley 2007; Winne 1996). For example, if a student recognizes that the outcome performance is lower than the goals, the student may change tactics and strategies to reach the goals (i.e., internal feedback). However, students may generate poor internal feedback that does not benefit learning. For example, students may lack strategy knowledge and thus switch to ineffective tactics and strategies (i.e., poor internal feedback); students may also lower their goals without further learning (i.e., poor internal feedback) due to a lack of motivation or limits on time and learning resources. In addition, poor internal feedback may stem from inaccurate monitoring, which prevents appropriate regulation from being adopted. For example, students may have poor self-assessment (i.e., calibration, judgment of learning, perceived performance, monitoring of learning performance) and may over-estimate their learning performance so that they stop learning (i.e., poor internal feedback) before they master the task (Chou et al. 2015).

Researchers have proposed providing students with external feedback from teachers, teaching assistants, peers, or systems to facilitate better student SRL behaviors and performance (Azevedo et al. 2007; Azevedo and Hadwin 2005; Butler and Winne 1995; Chou et al. 2015, 2018; Lai and Hwang 2016; Lin et al. 2016; Müller and Seufert 2018; Panadero et al. 2019; Schraw 2007; Shyr and Chen 2018). In general, external feedback is intended to facilitate students’ monitoring, deliver information to students about their learning (i.e., externally observable outcome performance), and provide opportunities for students to generate internal feedback to close the gap between their goals and current performance (see parts 6 and 7 in Fig. 1) (Nicol and Macfarlane-Dick 2006). That is, external feedback forms a scaffold, or external regulation, that assists students in reflecting on and monitoring whether a discrepancy exists between the current performance and a goal of desired performance and in regulating their learning to reduce this discrepancy if it exists (Azevedo et al. 2007; Devolder et al. 2012; Hattie and Timperley 2007; Roll et al. 2014). Researchers have proposed four levels of external feedback, namely, task level, process level, self-regulation level, and self-level feedback (Hattie and Timperley 2007). Task and process level feedback are domain-related and address students’ tasks, solutions, and understanding of domain knowledge. Task level feedback informs how well tasks are understood or performed. For example, outcome feedback states whether or not the tasks are correct; it reports the current performance but does not indicate how to self-regulate (Zhou 2012). Process level feedback cues the process of finishing tasks or correcting errors. For instance, hint feedback prompts students on how to finish tasks, and corrective feedback guides students to find and correct their errors; that is, process level feedback provides regulation information. Self-regulation level feedback addresses how students monitor, direct, and regulate their learning, such as through self-assessments, goal-setting, and regulation actions. For example, feedback on self-assessment assists students in reflecting on whether their self-assessments are accurate and whether to regulate them. Self-level feedback expresses a personal evaluation, such as “good effort”. However, self-level feedback contains little task-related or meta-cognitive information and yields smaller learning gains, although it affects students’ motivation and beliefs (Hattie and Timperley 2007).

Recently, intelligent tutoring systems have been designed to detect students’ learning and SRL behaviors and then provide adaptive external feedback as scaffolding to assist students in conducting SRL, such as self-assessment, goal-setting, planning, and help-seeking (Aleven et al. 2006, 2016; Azevedo et al. 2010; Chen 2009; Chen et al. 2019; Chou et al. 2015, 2018, 2019; Harley et al. 2017; Nussbaumer et al. 2015; Roll et al. 2011a, b; Su 2020). However, there are various combinations of types of external feedback and scaffolds (such as prompts, advice, information, tools, training, and guiding questions), SRL processes (such as self-assessment, goal-setting, planning, and help-seeking), content domains, and contexts (such as traditional classrooms, online learning, and blended learning); therefore, the effectiveness of different types of external feedback and scaffolds in different SRL processes and contexts requires further investigation (Devolder et al. 2012).

Studies have found that external feedback from the open learner model (OLM) promotes better student self-assessment (Chou et al. 2015; Mitrovic and Martin 2007). An OLM opens the learner model (also called the student model), which intelligent tutoring systems build to provide adaptive tutoring and usually hide from students (Brusilovskiy 1994; Conati and Kardan 2013; Desmarais and Baker 2012; Holt et al. 1994; Woolf 2008), to the student in order to increase the accuracy of the learner model and the interaction between the system and students (Bull 2004; Self 1988). OLMs can be built from the perspectives of different agents, such as systems, teachers, peers, and students themselves, and can be designed to enable students to view, self-assess, and edit their OLMs or to collaborate and negotiate with systems or teachers to build their OLMs (Bull 2004, 2016; Bull and Kay 2016; Bull et al. 1995, 2013; Chou et al. 2015). OLMs are usually designed to promote students’ meta-cognitive processes, such as self-assessment, self-monitoring, reflection, and planning (Bull et al. 2006; Bull and Kay 2013; Chou et al. 2015, 2017; Long and Aleven 2017; Mitrovic and Martin 2007). In other words, OLMs assist students in reflecting on their skills and knowledge, identifying their weaknesses, and planning their future learning through different visualizations, such as tables, charts, radar charts, skill meters, concept maps, word clouds, or networks (Bull et al. 2016; Demmans and Bull 2015). For example, if students over-estimate their outcome learning performance, external feedback from the OLM will highlight discrepancies between the students’ perceived performance and actual performance to assist students in reflecting on their monitoring of outcome learning performance and generating internal feedback to modify their perceived performance. Students may ignore, modify, or accept the external feedback from the OLM (Chou et al. 2015). In particular, how students’ internal SRL processes and feedback operate and whether and how external feedback on self-assessment from the OLM affects these internal processes and feedback remain unclear. Therefore, this study aims to investigate students’ internal SRL processes and feedback and how they are impacted by the external feedback from the OLM.

Measurements of SRL

SRL involves many meta-cognitive processes that are hidden inside students and thus are difficult to observe and measure. Researchers have proposed different SRL measurements based on different SRL models, such as models that characterize and measure SRL as an attitude or an event (Winne and Perry 2000). Some self-reported questionnaires, such as the Motivated Strategies for Learning Questionnaire (MSLQ, Pintrich et al. 1993), and interviews are designed for measuring students’ SRL attitudes, such as value, expectancy, cognitive and metacognitive strategies, and resource management strategies, by asking them about their meta-cognition, motivation, and actions when facing different situations (Winne and Perry 2000). However, these self-reported SRL measurements are characterized as more static and large-grained assessments and rely heavily on students’ perspectives and beliefs (Panadero et al. 2016; Rovers et al. 2019; Winne and Perry 2000). On the other hand, some SRL measurements, such as thinking aloud and traces of students’ observable SRL indicators, focus on students’ small-grained SRL events and dynamic processes, particularly meta-cognitive monitoring (conditions) and control (actions) (Winne and Perry 2000). The thinking-aloud measurement asks students to self-report their internal cognitive and meta-cognitive processes while they are learning. Students’ observable SRL indicators in computer-based learning environments can be recorded as traces for measurement (Winne 2010). Recently, many SRL studies in computer-based learning environments have applied traces to measure students’ SRL behaviors (Azevedo et al. 2010; Chen 2009; Chou et al. 2015, 2018, 2019; Harley et al. 2017; Jansen et al. 2020; Long and Aleven 2017; Nussbaumer et al. 2015; Roll et al. 2011a, b). However, most studies focus on traces of students’ external operations on the system, and hence students’ internal SRL processes remain unclear. Researchers have suggested designing tools as scaffolding to assist students in conducting SRL and to externalize their internal SRL processes for observation and measurement (Garcia et al. 2018; Järvelä et al. 2015; Manlove et al. 2007; Panadero et al. 2016; Pérez-Álvarez et al. 2018; Roll et al. 2014; Winne and Hadwin 2013). In general, tools are designed to facilitate students’ cognitive and meta-cognitive processes and reduce their cognitive load, such as memory or computation (Azevedo et al. 2010; Lajoie 1993; Winne and Nesbit 2009). Researchers have found that students scaffolded by computer SRL tools develop better SRL processes and performance than students without such tools (Manlove et al. 2007). In addition, researchers have suggested that tools can be designed to make students’ metacognition more explicit and to expand their capacity by providing external representation, interactivity, and distributed cognition (Pakdaman-Savoji et al. 2019). Therefore, this study proposes SRL tools to assist students in conducting SRL and to externalize their internal SRL processes for observation and measurement.

Purposes of this study

This study has two purposes:

#1: Investigating students’ internal SRL processes and feedback by designing SRL tools to assist students in conducting SRL and to externalize their internal SRL processes for observation and measurement. This study proposed an SRL internal and external model (named SRL-IE) to illustrate the relationship among external SRL tools, internal SRL processes, internal feedback, and external feedback (Fig. 2). The SRL-IE model is based on the SRL model proposed by Butler and Winne (1995) and re-interpreted by Nicol and Macfarlane-Dick (2006) to illustrate the role of internal and external feedback in SRL (Fig. 1). The SRL-IE model also integrates the self-regulatory cycle model, which was proposed to assist students in developing SRL through the following four phases of SRL activities: self-evaluation and monitoring (i.e., initial self-assessment); goal setting and strategic planning; strategy implementation and monitoring (i.e., follow-up learning); and strategic outcome monitoring (i.e., follow-up self-assessment) (Zimmerman et al. 1996). Most SRL studies in computer-based learning environments have been conducted in the context of online learning or intelligent tutoring systems and have covered all learning activities. In contrast, this study was conducted to assist students in conducting SRL after listening to a teacher’s lecture. This study developed an intelligent computer assisted learning system to provide SRL tools, based on the SRL-IE model, that assist students in conducting SRL by listening to a teacher’s lecture, self-assessing their initial learning performance (i.e., perceived initial performance) based on their domain knowledge, strategy knowledge, and motivational beliefs, setting target goals of desired performance, conducting follow-up learning by applying tactics and strategies, and self-assessing their outcome performance (i.e., perceived outcome performance) after follow-up learning (see parts 1 to 6 in Fig. 2). In addition, these SRL tools were designed to externalize students’ internal processes and feedback for measurement and investigation (parts 3 to 6 in Fig. 2). Furthermore, the externalization of students’ internal processes and feedback can be further applied to detect students’ poor SRL processes and internal feedback and to provide appropriate external feedback and intervention in future studies.

Fig. 2 The SRL and feedback model (SRL-IE) adopted in this study

#2: Investigating whether and how SRL tools and external feedback from the OLM promote students’ internal SRL processes and feedback. This study designed SRL tools not only to externalize students’ internal SRL processes for observation and measurement but also to assist students in SRL by facilitating their SRL processes (parts 3 to 6 in Fig. 2) and by reducing their cognitive load through external representation, interactivity, and distributed cognition (Azevedo et al. 2010; Lajoie 1993; Pakdaman-Savoji et al. 2019; Winne and Nesbit 2009). There are different levels of external feedback, specifically, task, process, self-regulation, and self-level feedback (Hattie and Timperley 2007). This study focused on providing OLMs as external feedback on self-assessment (i.e., a type of self-regulation level feedback) to assist students in SRL (see parts 7 and 8 in Fig. 2). The OLMs were integrated with the SRL tools so that the actual performance from the OLM, the perceived initial performance from the initial self-assessment, the desired performance of the target goals, and the perceived outcome performance from the follow-up self-assessment were recorded, presented, and compared in the same format. The OLMs are designed as formative feedback to elicit appropriate internal feedback from students to regulate their SRL processes and thus improve their learning performance. In particular, after receiving the OLMs, students were allowed to modify their self-assessments (i.e., perceived performance), which made it possible to investigate whether and how the external feedback from the OLMs triggers internal feedback to modify their perceived performance.

The rest of the paper is organized as follows. The second section presents an intelligent computer assisted learning system with SRL tools and external feedback from the OLMs to support SRL. The third section reports the results of an evaluation of the system. The fourth section discusses the evaluative results. The final section presents the conclusions.

A system with SRL tools and OLMs to support SRL

An intelligent computer assisted learning system, called Open Learner Models for Self-Regulated Learning (OLM-SRL), was designed and implemented based on the SRL-IE model to support SRL. The system provides SRL tools that enable students to engage in SRL through nine stages of learning activities across two classes in each round. The system also offers OLMs as a form of external feedback to promote student SRL. The nine stages of learning activities and the related system support were designed based on the four phases of the self-regulatory cycle model, namely, self-evaluation and monitoring, goal setting and strategic planning, strategy implementation and monitoring, and strategic outcome monitoring (Zimmerman et al. 1996) (Table 1), which match parts 3 to 6 of the SRL-IE model shown in Fig. 2.

Table 1 Stages of learning activities and system support based on self-regulatory cycle

The details of the system support of the nine stages are presented as follows:

Stage 1: Teacher lecture.

Teachers lecture to instruct students on the concepts.

After listening to the teacher lecture, the students self-evaluate and monitor their initial learning performance. The self-evaluation and monitoring phase covers stages 2, 3, and 4. First, students self-assess their mastery levels to generate perceived initial performance after the teacher’s lecture (stage 2). Researchers have suggested providing students with opportunities to regularly and conveniently record their perceived performance to facilitate their metacognitive monitoring (Winne and Nesbit 2009). The OLM-SRL system provides a self-assessment tool to facilitate students’ metacognitive monitoring in generating internal feedback on perceived performance (i.e., judgments of learning, JOLs). In addition, the self-assessment tool externalizes and records students’ perceived performance. Then, students participate in an initial system assessment that assesses their mastery levels (stage 3). After the initial system assessment, the OLM of the initial system assessment is provided as a form of external feedback on self-assessment to assist students in reflecting on their mastery levels. Students can compare their OLMs from the self-assessment (i.e., perceived performance) and system assessment (i.e., actual performance) to enhance the calibration of their self-assessments (Butler and Winne 1995). Studies have found that external feedback from OLMs promotes better student self-assessment (Chou et al. 2015; Mitrovic and Martin 2007). After receiving external feedback from the OLM, students can reflect on and modify their self-assessments (stage 4). The modifications of the self-assessments are recorded to analyze the impact of the OLM on the students’ self-assessments, that is, whether the external feedback from the OLM facilitates students’ internal feedback to modify their perceived performance. In this phase, students monitor and reflect on their learning and identify unfamiliar concepts for further learning.

Stage 2: Initial student self-assessment for generating perceived initial performance.

After the teacher lecture, the students are asked to self-assess their mastery levels of the concepts taught in the lecture using the OLM-SRL system. For example, Fig. 3 shows the tool for self-assessment in a course that contains 10 meta-concepts and 36 concepts. Students fill out their mastery levels from 0 to 100% in the concept fields with the white background. The concepts with the gray background have not yet been taught and therefore do not need self-assessment. Students’ mastery levels of the meta-concepts are computed by the weighted accumulation of the mastery levels of the related concepts. The weights of the concepts in the meta-concepts were entered in advance by the teachers according to their expert domain knowledge. For instance, suppose a meta-concept contains two concepts with weights of 0.6 and 0.4. If a student self-assesses his/her mastery levels of the two concepts as 80% and 50%, the system computes the student’s mastery level of the meta-concept as 68% (80% × 0.6 + 50% × 0.4). Similarly, students’ overall mastery levels are computed by the weighted accumulation of the mastery levels of all meta-concepts. The system presents a student’s OLMs, which are built from the student or the system, in radar charts in which each axis indicates the mastery level of a meta-concept or a concept (Fig. 4). Radar charts are effective visualization tools for presenting and comparing multivariate data (Chambers et al. 2018; Saary 2008).
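As an illustration, this weighted accumulation can be sketched in a few lines of code. The following Python snippet is a hypothetical sketch of the computation described above, with function names and data structures of our own choosing, not the OLM-SRL system’s actual implementation.

```python
# A minimal sketch of the weighted accumulation described above; the function
# name and data structures are illustrative, not the OLM-SRL system's code.

def meta_concept_mastery(mastery: dict[str, float], weights: dict[str, float]) -> float:
    """Accumulate concept mastery levels into a meta-concept mastery level.

    mastery: concept -> mastery level (0.0 to 1.0)
    weights: concept -> teacher-assigned weight within the meta-concept
    """
    return sum(weights[c] * mastery[c] for c in weights)

# The example from the text: concepts weighted 0.6 and 0.4, self-assessed
# at 80% and 50%, give a meta-concept mastery level of 68%.
print(round(meta_concept_mastery({"c1": 0.8, "c2": 0.5},
                                 {"c1": 0.6, "c2": 0.4}), 2))  # 0.68
```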

Fig. 3 Self-assessment tool

Fig. 4 An OLM for representing students’ mastery levels of meta-concepts

Stage 3: Initial system assessment for assessing actual initial performance.

Students answer questions in the system to assess their initial mastery levels of the concepts after listening to the lecture (Fig. 5).

Fig. 5 System assessment interface

These questions were designed by teachers to assess the mastery levels of those concepts. The students’ mastery levels of concepts in the OLM from the system assessment are computed based on the correctness of the student answers and the weighted relationship of the questions and concepts (Hwang 2003). The mastery level for student Sk of concept Cj is computed as follows:

$$Mastery\left({S}_{k}, {C}_{j}\right)=\frac{\sum_{i=1}^{n}weight\left({Q}_{i}, {C}_{j}\right)\times answer\left({S}_{k}, {Q}_{i}\right)}{\sum_{i=1}^{n}weight\left({Q}_{i}, {C}_{j}\right)}$$
(1)

where n indicates the number of questions. Students need to master the concepts related to a question to answer the question correctly. The weight(Qi, Cj) value denotes the influence of the mastery level of concept Cj on the correctness of the answer to question Qi. If a student answers a question incorrectly, his/her assessed mastery levels of the related concepts will decrease. The value of answer(Sk, Qi) is either 1 or 0, representing whether student Sk correctly answers question Qi or not, respectively. For example, Table 2 lists a weighted relationship of six questions and five concepts. Students need to master concepts C1 and C2 to answer question Q1 correctly. If students do not master concept C2, they will likely answer question Q1 incorrectly. Assuming that the answer statuses of student Sk for the six questions are 1, 0, 1, 1, 1, and 0, the mastery level for student Sk of concept C1 is computed as follows:

Table 2 Sample of weighted relationships of questions and concepts
$$\frac{0.2\times 1+0.3\times 0+0\times 1+0.1\times 1+0\times 1+0\times 0}{0.2+0.3+0+0.1+0+0}=0.5$$

Therefore, the system can compute the mastery levels for student Sk of the five concepts as 0.5, 0.722, 0.818, 0.625, and 0.588. The system presents the student’s mastery levels of the concepts in a radar chart representing the OLM of the system assessment, in the same format as the OLM of the student self-assessment. Thus, the system enables students to compare their perceived performance and actual performance by comparing their OLMs from the self-assessment and system assessment (Fig. 6). The OLM of the initial system assessment is provided as a form of external feedback on self-assessment to assist students in reflecting on their mastery levels of the concepts and their self-assessment.
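To make Eq. (1) concrete, the following Python sketch implements the weighted ratio and reproduces the worked example for concept C1. It is an illustration under our own naming conventions, not the system’s code.

```python
# Illustrative implementation of Eq. (1); a sketch, not the system's code.

def mastery(answers: list[int], weights: list[float]) -> float:
    """Mastery(Sk, Cj) as the weighted ratio of correctly answered questions.

    answers: answer(Sk, Qi) for each question (1 = correct, 0 = incorrect)
    weights: weight(Qi, Cj), the influence of concept Cj on question Qi
    """
    total = sum(weights)
    if total == 0:  # the concept is unrelated to any of the questions
        return 0.0
    return sum(w * a for w, a in zip(weights, answers)) / total

# Worked example: answer statuses 1, 0, 1, 1, 1, 0 and the Table 2 weights
# of concept C1 (0.2, 0.3, 0, 0.1, 0, 0) yield a mastery level of 0.5.
print(round(mastery([1, 0, 1, 1, 1, 0], [0.2, 0.3, 0, 0.1, 0, 0]), 3))  # 0.5
```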

Fig. 6 OLMs from the self-assessment (orange) and system assessment (blue)

Stage 4: Modified initial student self-assessment for reflecting on the perceived initial performance.

After investigating the results of the initial system assessment, students can reflect on their learning and modify their self-assessed mastery levels in the initial self-assessment. This stage provides students with an opportunity to generate internal feedback to modify their initial self-assessments (i.e., perceived initial performance). The modification record of student self-assessments externalizes whether students modify their self-assessment (i.e., internal feedback) and reveals the impact of the external feedback from the OLM on students’ perceived performance.

Stage 5: Student goal setting for generating desired performance.

During the goal setting and strategic planning phase, students set their target mastery levels of the concepts in the next class as their goals of desired performance for follow-up learning. The goal setting tool is similar to the self-assessment tool. The goal setting tool is designed to facilitate students in generating internal feedback on their desired performance. The tool also externalizes and records students’ goals of desired performance. Students fill out the target mastery levels of the concepts, and the system presents OLMs based on their targets and enables them to compare their OLMs from the self-assessment (i.e., perceived performance), system assessment (i.e., actual performance), and target goals (i.e., desired performance) (Fig. 7). This study focused on investigating perceived, actual, and desired performances; thus, the system did not provide tools for strategic planning. The tools for strategic planning will be implemented in the next version of the system.

Fig. 7 Open learner models from the self-assessment (orange), system assessment (blue), and target (green)

Stage 6: Student follow-up learning after class.

During the strategy implementation and monitoring phase, the system enables students to conduct follow-up learning after class to achieve their target goals for mastery levels by reviewing their OLMs and the questions and answers of the system assessment. Students can click on a meta-concept, such as operators, in the OLM of the initial system assessment (Fig. 8a) to investigate their mastery levels of related concepts (Fig. 8b). Students can further click on a concept to investigate the related questions of the system assessment (Fig. 9). Each color block represents a question. Green indicates a correctly answered question, yellow indicates a not-yet-answered question, and red indicates an incorrectly answered question. Students can click on a color block to investigate the question, their answers, and the correct answer (i.e., outcome feedback at the task level). Accordingly, the OLMs provide external feedback to promote students’ ability to find and refine their imperfect knowledge (i.e., internal feedback) by showing the incorrect and correct answers in the hierarchy of meta-concepts, concepts, and questions. Regarding incorrect answers, students may seek help from peers, teaching assistants, or teachers. Help-seeking is a type of internal feedback in the SRL process to determine whether to seek help and whom to seek help from (Karabenick 2011).

Fig. 8 Open learner model of the mastery levels of meta-concepts and concepts

Fig. 9 Questions related to a concept

The strategic outcome monitoring phase covers stages 7, 8, and 9. Students self-assess their mastery levels of concepts to generate perceived outcome performance (stage 7). Students then participate in a follow-up system assessment (stage 8) after the follow-up learning, and the system provides the OLM of the follow-up system assessment as external feedback on actual outcome performance to assist students in reflecting on their mastery levels. Finally, students can modify their self-assessments of perceived outcome performance (stage 9). The modifications can reveal the impact of external feedback from the OLM of actual outcome performance on students’ perceived outcome performance. In this phase, students also reflect on their execution and the outcome of the follow-up learning. The reflection helps students perform better in the next self-regulatory cycle.

Stage 7: Follow-up student self-assessment for generating perceived outcome performance.

At the beginning of the next class, students again self-assess their mastery levels of the learned concepts after the follow-up learning. The tool is the same as that of the initial student self-assessment. The system presents an OLM from the follow-up self-assessment for displaying students’ perceived outcome performance to help students reflect on their learning after the follow-up learning. In addition, students can compare their perceived outcome performance with their desired performance goals to reflect on whether they achieved their goals or not.

Stage 8: Follow-up system assessment for assessing actual outcome performance.

Students undergo a follow-up system assessment by answering questions in the system for assessing their actual outcome performance. The questions of the follow-up system assessment are similar to those of the initial system assessment. The system presents an OLM from the follow-up system assessment as external feedback on the actual outcome performance to assist students in reflecting on whether they have achieved their target mastery levels.

Stage 9: Modified follow-up student self-assessment for reflecting on perceived outcome performance.

After investigating the OLM of the follow-up system assessment, students can reflect on their learning and modify their self-assessed mastery levels of concepts in the follow-up self-assessment. The modification record of student follow-up self-assessments externalizes whether students modify their perceived outcome performance and reveals the impact of external feedback from the OLM of actual outcome performance on students’ perceived outcome performance.

In sum, the system generates multiple OLMs from different viewpoints of perceived initial performance, actual initial performance, desired performance, perceived outcome performance, and actual outcome performance in each round and enables students to compare these OLMs to reflect on their learning.

Evaluation

Methods

An evaluation was performed to explore the following two research questions.

Research question #1: How do students conduct internal SRL processes and feedback during the four phases of the self-regulatory cycle model?

Research question #2: Do SRL tools and external feedback from the OLM assist students in SRL?

As mentioned in the Introduction, SRL is usually measured as events or attitudes (Winne and Perry 2000). SRL events can be measured by traces of observable SRL indicators, and SRL attitudes can be measured by self-reported questionnaires. This study developed SRL tools to externalize students’ internal SRL processes and feedback during the four phases of the self-regulatory cycle model, including the initial self-assessment (i.e., perceived initial performance), goal setting of the desired performance, follow-up learning by applying tactics and strategies, and follow-up self-assessment (i.e., perceived outcome performance) (parts 3 to 6 in Fig. 2). To address research question #1, students’ internal SRL processes and feedback were recorded through the SRL tools, and their SRL performances were also recorded for analysis. To address research question #2, the students’ reactions, that is, whether they modified their self-assessments after receiving external feedback on them from the OLM, were recorded and analyzed. In addition, a questionnaire was designed to ask students whether the SRL tools and the external feedback from the OLMs assisted them.

The participants were 69 undergraduates who enrolled in a programming course to learn basic programming knowledge and skills. The students used the OLM-SRL system to engage in four rounds of SRL activities over five weeks (Table 3). The class met for three hours per week in a computer classroom in which each student had a computer. The content included 6 meta-concepts and 16 concepts of computer programming. The system assessment adopted program-output-prediction problems that asked the students to predict the output of a program. The students needed syntax and semantic knowledge and program tracing skills to correctly solve the program-output-prediction problems. Previous studies have revealed that novices lack program tracing skills, which leads to poor programming performance (Chou and Sun 2013; Perkins et al. 1986; Vainio and Sajaniemi 2007). The system assessment for each week had six questions. The students’ data in the system were recorded to analyze their SRL behaviors. In the sixth week, the students were asked to take a program-output-prediction examination (Exam 1) and a programming examination (Exam 2) to evaluate their final outcome learning performance. Exam 1 included 12 related program-output-prediction questions. Exam 2 asked the students to write five programs to solve five problems. Because some SRL processes cannot be externalized and recorded by the system, the students were asked to complete a questionnaire that contained six items scored on a 7-point Likert scale and six yes-or-no items to investigate their feelings about and SRL experience of the system.

Table 3 Scheme of the evaluation

Results

Four students missed some activities and their data were excluded; thus, data for 65 students were included in the evaluation.

Research question #1: How do students conduct internal SRL processes and feedback during the four phases of the self-regulatory cycle model?

To address research question #1, this study explored how the students perceived their performance, how they set their target goals of their desired performance, whether they conducted follow-up learning, how they perceived their outcome performance after follow-up learning, whether they improved their performance after follow-up learning, and whether they achieved their target goals.

How did students perceive their performance?

The correlations between the students’ internal SRL processes and feedback and their learning performance were calculated to explore the relationships between them. Table 4 lists the correlations of the overall mastery levels shown in the student self-assessment (i.e., perceived performance), system assessment (i.e., actual performance), and target goals (i.e., desired performance) with the scores on Exam 1 and Exam 2. The results revealed that most of the students’ initial and follow-up system assessments were significantly positively correlated with their scores on Exams 1 and 2. This finding showed that the system assessments actually reflected the students’ outcome learning performance. Interestingly, the correlation between the system assessments (program-output-prediction questions) and Exam 2 (programming problems) was higher than that between the system assessments and Exam 1 (program-output-prediction questions). This finding suggests that program-output prediction is a key ability for programming. In contrast, the students’ self-assessments and target goals had low correlations with their scores on Exam 1 and Exam 2. The students’ self-assessments did not reflect their actual outcome learning performance, indicating that the students’ self-assessments were poor.

Table 4 Correlation of student self-assessment, system assessment, target goals, and performance
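For concreteness, the following sketch shows how correlations of the kind reported in Table 4 could be computed. The data below are hypothetical, and Pearson’s r is assumed as the coefficient; the paper does not state which coefficient was used, so this is an illustration rather than a reproduction of the analysis.

```python
# Sketch of a Table 4-style analysis; the data are hypothetical, and
# Pearson's r is assumed as the correlation coefficient.
from scipy.stats import pearsonr

# Hypothetical overall mastery levels (0-1) and Exam 1 scores (0-100).
self_assessment   = [0.80, 0.65, 0.90, 0.55, 0.70, 0.60]
system_assessment = [0.50, 0.60, 0.85, 0.45, 0.72, 0.40]
exam1             = [55, 62, 88, 40, 75, 45]

for name, values in [("self-assessment", self_assessment),
                     ("system assessment", system_assessment)]:
    r, p = pearsonr(values, exam1)
    print(f"{name} vs. Exam 1: r = {r:.3f}, p = {p:.3f}")
```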

The data on the students’ self-assessments, system assessments, and target goals were recorded to analyze the students’ internal SRL processes and feedback. A comparison of the results of the initial self-assessment and the initial system assessment indicated that the students had higher mastery levels on the self-assessment in 75% of the student records and higher mastery levels on the system assessment in 25% of the student records (Table 5). The results showed that the students often tended to overestimate their mastery levels.

Table 5 Comparison of initial self-assessment and initial system assessment

How did students set their target goals of desired performance?

A comparison of the students’ target goals and modified initial self-assessments showed that the students set their target mastery levels at the same level as their self-assessments in 8% of the student records, higher than their self-assessments in 66% of the student records, and lower than their self-assessments in 28% of the student records (Table 6). The target mastery levels are the goals for follow-up learning; thus, it is inappropriate to set target mastery levels that are lower than the current mastery levels. The results revealed that some students set inappropriate target goals.

Table 6 Comparison of the modified initial self-assessment and target goal

Did students conduct follow-up learning?

The system records revealed that 43% of the students had reviewed the questions and answers of the initial system assessment during follow-up learning, whereas 57% of the students had not done so. The results revealed that more than half of the students had not conducted follow-up learning after the class.

How did students perceive their outcome performance after follow-up learning?

A comparison of the results of the follow-up self-assessment and follow-up system assessment showed that the students had the same mastery levels in the two assessments in 1% of the student records, had higher mastery levels in the self-assessment in 63% of the student records, and had higher mastery levels in the system assessment in 37% of the student records (Table 7). The results indicated that the students often tended to overestimate their mastery levels.

Table 7 Comparison of the follow-up self-assessment and follow-up system assessment

Did students improve their performance after follow-up learning?

A comparison of the initial system assessment and follow-up system assessment revealed that the students retained the same mastery levels in 7% of the student records, increased their mastery levels in 61% of the student records, and decreased their mastery levels in 32% of the student records (Table 8). The results showed that students decreased their mastery levels of concepts in the follow-up system assessment compared with the initial system assessment in approximately one-third of the student records. These students required assistance in follow-up learning.

Table 8 Comparison of the follow-up system assessment and initial system assessment

Did students achieve their target goals?

A comparison of the student target goals and follow-up student self-assessment showed that the students considered that they had achieved their target goals for the mastery levels in 1% of the student records, outperformed their target goals in 31% of the student records, and failed to achieve their target goals in 68% of the student records (Table 9). The results revealed that students considered that they had failed to achieve their target goals for their mastery levels in more than two-thirds of the student records. A comparison of the student target goals and follow-up system assessment showed that the students achieved their target goals for their mastery levels in 1% of the student records, outperformed their target goals in 30% of the student records, and failed to achieve their target goals in 69% of the student records (Table 10). The results indicated that the students often failed to achieve their target goals for their mastery levels.

Table 9 Comparison of the follow-up self-assessment and target goal
Table 10 Comparison of the follow-up system assessment and target goal

Research question #2: Do SRL tools and external feedback from the OLM assist students in SRL?

To address research question #2, this study explored how the students responded to the external feedback from the OLM and administered a questionnaire asking the students about their feelings concerning their SRL experience of the system.

How did the students respond to the external feedback from the OLM?

The results of comparing the initial self-assessment and the modified initial self-assessment indicated that the students retained the same mastery levels in 34% of the student records, increased their mastery levels in 27% of the student records, and decreased their mastery levels in 38% of the student records (Table 11). These results indicate that the external feedback from the OLM prompted the students to generate internal feedback and modify their initial self-assessments (i.e., their perceived initial performances) in most (66%) of the student records.

Table 11 Comparison of the initial self-assessment and modified initial self-assessment

In the analysis of the modified initial self-assessment records of the students who had different mastery levels between the initial self-assessment and the initial system assessment, the results of a chi-squared test revealed that the students tended to modify their initial self-assessment according to the difference between the initial system assessment and the initial self-assessment (χ2 = 10.568, p < 0.01) (Table 12). That is, when the students found that their mastery levels in the initial system assessment were lower than their mastery levels in the initial self-assessment, they tended to decrease their mastery levels in the modified initial self-assessment. Otherwise, they tended to increase their mastery levels in the modified initial self-assessment. The results indicated that the external feedback from the OLM of the initial system assessment assisted the students in reflecting on their mastery levels (i.e., perceived performance). However, in 34% of the student records, the students did not modify their initial self-assessment after investigating the OLMs of the initial self-assessment and initial system assessment, even though their results in the two assessments differed.

Table 12 Distribution of the students’ initial system assessment, initial self-assessment, and modified initial self-assessment
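For readers who wish to replicate this kind of analysis, the following sketch runs a chi-squared test on a contingency table of assessment-difference direction against modification direction. The counts below are hypothetical and do not reproduce Table 12, so the resulting statistic will differ from the reported χ2 = 10.568.

```python
# Sketch of the reported chi-squared test; the counts are hypothetical
# and do not reproduce the study's Table 12.
from scipy.stats import chi2_contingency

# Rows: initial system assessment lower / higher than initial self-assessment.
# Columns: student decreased / increased the modified initial self-assessment.
observed = [[48, 14],
            [9, 21]]
chi2, p, dof, _expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, p = {p:.4f}, dof = {dof}")
```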

Students’ feelings about and SRL experience of the system

This study provided SRL tools as scaffolding to promote students to engage in SRL processes, including initial self-assessment, goal-setting, follow-up learning, and a follow-up self-assessment after follow-up learning. This study also provided OLMs as external feedback to assist the students in SRL. Part 1 of the questionnaire was designed to ask the students whether these SRL processes through SRL tools and external feedback were beneficial to them. Table 13 lists the results of part 1 of the questionnaire. In total, 91% of the students agreed (strongly agree, agree, and somewhat agree) that the initial self-assessment helped them reflect on their learning and understanding of concepts (item #1). In total, 89% of the students considered that the OLM of the initial system assessment helped them understand their learning and understanding of concepts (item #2). In total, 77% of the students expressed that setting target goals for mastery levels of concepts prompted them to study hard (item #3). In total, 82% of the students consulted the results of the initial system assessment to set their target goals for follow-up learning (item #4). In total, 85% of the students agreed that the follow-up self-assessment helped them reflect on their follow-up learning (item #5). In total, 89% of the students considered that the follow-up system assessment helped them reflect on their follow-up learning (item #6).

Table 13 Results of part 1 of the questionnaire

Part 2 of the questionnaire was designed to ask the students about their SRL experience with the system. Table 14 lists the results of part 2 of the questionnaire. Of the students, 91% had identified their unfamiliar concepts by comparing the OLMs of the initial self-assessment and the initial system assessment (item #7), and 69% had modified their initial self-assessment after investigating the OLM of the initial system assessment (item #8). In brief, the external feedback from the OLM of the initial system assessment helped students generate internal feedback to reflect on their mastery levels of concepts and identify unfamiliar concepts. In total, 83% of the students expressed that they had investigated the questions and correct answers of the initial system assessment during follow-up learning after class (i.e., strategy implementation and monitoring, item #9), but the system records revealed that only 43% of the students had actually reviewed the questions and answers of the initial system assessment during follow-up learning. These results revealed that many students poorly monitored their follow-up learning. In total, 78% of the students expressed that they had sought help from the teacher, teaching assistants, or classmates regarding incorrectly answered questions in the initial system assessment during follow-up learning after class (item #10). That is, most students applied help-seeking strategies to improve their learning. In total, 43% of the students found that they had not conducted any follow-up learning when they performed the follow-up self-assessment (i.e., strategy monitoring, item #11). In total, 58% of the students found that they had failed to achieve their target goals for mastery levels when they performed the follow-up self-assessment (i.e., strategy outcome monitoring, item #12). In brief, the follow-up self-assessment helped students reflect on their follow-up learning by monitoring their strategy implementation and outcome. Approximately half of the students had not conducted any follow-up learning and failed to achieve their target goals.

Table 14 Results of part 2 of the questionnaire

Discussion

Students often have poor SRL processes and poor internal feedback

Regarding research question #1, “How do students conduct internal SRL processes and feedback during the four phases of the self-regulatory cycle model?” the results revealed that students often have poor SRL processes and poor internal feedback.

First, the students’ self-assessment was poor; that is, their monitoring of their learning performance was poor. The results showed that the students’ self-assessments failed to reflect their learning performance (Table 4) and that the students often tended to overestimate their mastery levels (Tables 5, 7). These results are consistent with previous studies (Chou et al. 2015; Dunning et al. 2004; Stone 2000). The students’ overestimated self-assessments might deceive them and discourage them from further learning. Second, some students set inappropriate target goals. Ideally, students should set their target mastery levels after follow-up learning higher than their current mastery levels because they conduct follow-up learning to improve their mastery levels. However, the results revealed that some students set inappropriate targets for follow-up learning (Table 6), with their target mastery levels lower than those in their self-assessments. These students might lack the confidence or motivation to improve their mastery levels. Third, students often fail to conduct follow-up learning. Students should conduct follow-up learning to improve their mastery levels, but the system records and questionnaire results (item #11 in Table 14) indicated that approximately half of the students did not conduct follow-up learning. Fourth, students often fail to achieve their target goals for mastery levels. Although the students set goals for their mastery levels after follow-up learning, the results revealed that they often failed to achieve those goals (Table 10).

Accordingly, students often have poor SRL processes and poor internal feedback, and these lead to poor learning performance. The students had poor monitoring of their learning performance (i.e., poor self-assessment) and often overestimated their mastery levels of concepts; therefore, they did not conduct further learning (i.e., poor internal feedback) to improve their mastery levels. In addition, after monitoring their mastery levels of concepts, some students did not set appropriate target goals (i.e., poor internal feedback) to improve their mastery levels. Furthermore, the students often failed to conduct follow-up learning (i.e., poor internal feedback) and to achieve their target goals for the mastery levels. As a result, the students were often unaware of unfamiliar concepts and failed to improve their mastery levels of concepts.

SRL tools and external feedback from the OLM assist most students in SRL

Regarding research question #2, “Do SRL tools and external feedback from the OLM assist students in SRL?” the results showed that SRL tools and external feedback from the OLMs assisted most students in SRL.

First, the SRL tools externalize internal meta-cognitive SRL processes and feedback and engage students in SRL. The questionnaire results revealed that the SRL tools for the initial self-assessment (item #1 in Table 13) and follow-up self-assessment (item #5) helped most students reflect on their learning (i.e., monitoring of learning performance). In addition, setting target goals for mastery levels of concepts prompted most students to study hard (i.e., goal-setting and learning motivation, item #3). Furthermore, the follow-up self-assessment prompted most students to become aware of their lack of follow-up learning (i.e., strategy implementation and monitoring, item #11 in Table 14) and their failure to achieve their target goals (i.e., strategy outcome monitoring, item #12).

Second, the external feedback of the OLM of the system assessment helped most students reflect on their learning, set target goals, and conduct follow-up learning. The questionnaire results showed that the OLM of the system assessment assisted most students in reflecting on their learning and mastery levels of concepts (i.e., monitoring of initial and outcome performance, items #2 and #6). Most students identified unfamiliar concepts by comparing the OLMs of the self-assessment and system assessment (item #7) and modified their initial self-assessment after investigating the OLM of the initial system assessment (item #8 and Tables 11 and 12). In addition, most students consulted the results of the initial system assessment to set their target goals for follow-up learning (item #4). Furthermore, most students had investigated the questions and correct answers of the initial system assessment (i.e., strategy implementation, item #9) and had sought help from the teacher, teaching assistants, or classmates regarding their incorrectly answered questions (i.e., help-seeking strategy, item #10) during follow-up learning after class.

In sum, SRL tools and external feedback from the OLMs assisted most students in SRL, particularly in reflecting on their mastery levels of concepts and unfamiliar concepts (i.e., self-evaluation and monitoring of their learning performance), setting target goals for follow-up learning (i.e., goal-setting), conducting follow-up learning (i.e., strategy implementation), determining whether to seek help (i.e., strategy planning and implementation), and being aware of their execution (i.e., strategy monitoring) and outcome of follow-up learning (i.e., strategy outcome monitoring).

Some students still need further support for SRL

This study offers SRL tools and external feedback from OLMs to assist students in SRL. The evaluation results showed that SRL tools and external feedback from the OLMs are beneficial to most students but are ineffective for some students. Researchers have argued that students’ individual differences influence the effects of SRL support, such as prompts and external feedback (Wong et al. 2019). Some students still need further support for SRL. We suggest providing further support for SRL as follows.

First, adding SRL negotiation mechanisms between students and the system would enhance the impact of external feedback and regulate students’ poor SRL processes and internal feedback. This study provides SRL tools and external feedback and leaves the choices to the students, but some students did not utilize the SRL tools or respond to the external feedback. For example, some students did not generate internal feedback to modify their self-assessment based on the external feedback from the OLM of the initial system assessment, or they even modified their self-assessment in opposition to the external feedback (Table 12). Studies have found that students with poor SRL skills often make poor choices when they have control of learning or system functions, and they thus attain poor learning performance (Clark and Mayer 2008; Scheiter and Gerjets 2007; Vandewaetere and Clarebout 2011; Young 1996). The system could add mechanisms to negotiate with students to reach a consensus on the self-assessment. Such negotiation enables a co-regulation process to support students and to regulate their ineffective SRL behaviors (Chou et al. 2015, 2018; Hadwin et al. 2011). The system can adopt different negotiation strategies with fewer or more concessions to favor the system or the students (Chou et al. 2015). If students have poor internal SRL processes and feedback, the system could adopt a negotiation strategy with fewer concessions. Previous studies have confirmed that negotiation between the system and students promotes student SRL processes, such as self-assessment, choice of the next learning content, goal setting, and help-seeking (Bull and Kay 2013; Chen et al. 2019; Chou et al. 2015, 2018, 2019). If students’ self-assessments vary widely from the system assessment and students do not modify their self-assessments after external feedback from the OLM of the system assessment, the system could remind them to note the substantial difference between the self-assessment and system assessment and suggest that they modify their self-assessments.

Second, adding external feedback and SRL tools for other SRL processes would further assist students. Researchers have proposed four levels of external feedback: task level, process level, self-regulation level, and self-level (Hattie and Timperley 2007). This study focuses on self-regulation level feedback on self-assessments, but SRL involves many other meta-cognitive processes, such as goal-setting and strategy choice and application. The results of this study revealed that some students set inappropriate goals, namely target mastery levels lower than their current mastery levels (Table 6). The system could therefore add self-regulation level feedback on goal-setting, such as SRL prompts, or mechanisms to negotiate with students to set appropriate target goals (Chou et al. 2019); for instance, it could prompt students to set target goals that are higher than their current mastery levels (see the sketch below).

In addition, this study enabled students to review the questions and answers of the initial system assessment during follow-up learning, but the system records and questionnaire results (item #11 in Table 14) revealed that approximately half of the students did not conduct follow-up learning, and over half failed to achieve their goals (Table 10 and item #12). These students might lack motivation or strategies for follow-up learning. The system should therefore add further support in the form of learning strategies, SRL prompts, and SRL tools for follow-up learning. For example, it could add reminder mechanisms, such as emails or messages, that prompt students to conduct follow-up learning; provide further learning materials and recommend reading materials related to unfamiliar concepts; offer similar questions for practice; and provide a dashboard (Matcha et al. 2020) that visualizes students' follow-up learning behaviors.

Finally, the system provides task-level outcome feedback indicating whether each answer is correct, but it does not provide process-level feedback to help students correct their errors. The results showed that many students sought help from teachers, teaching assistants, or classmates about their incorrectly answered questions (item #10 in Table 14). The system could add help-seeking support mechanisms that provide process-level feedback, such as corrective and explanatory feedback, to help students understand why their answers are correct or incorrect (Chou et al. 2011), and online help-seeking mechanisms that let students seek help online from teachers, teaching assistants, or classmates.
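
The goal-setting prompt and follow-up reminder suggested above are straightforward to express as checks over recorded mastery levels and learning logs. The sketch below is illustrative only; the field names, the 0-100 scale, and the message wording are assumptions, and delivery (email or in-system message) is omitted.

```python
# Minimal sketch of self-regulation level feedback on goal-setting plus a
# follow-up learning reminder. Field names, the 0-100 scale, and the message
# wording are hypothetical; delivery (email, in-system message) is omitted.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ConceptState:
    name: str
    current_mastery: int   # from the latest system assessment, 0-100
    target_goal: int       # the student's desired mastery for follow-up learning
    followup_events: int   # number of logged follow-up actions

def goal_prompt(state: ConceptState) -> Optional[str]:
    """Prompt when the target goal does not exceed current mastery (Table 6 pattern)."""
    if state.target_goal <= state.current_mastery:
        return (f"Your target for '{state.name}' ({state.target_goal}) is not above "
                f"your current mastery ({state.current_mastery}); consider raising it.")
    return None

def followup_reminder(state: ConceptState) -> Optional[str]:
    """Remind students who set a goal but have logged no follow-up learning."""
    if state.target_goal > state.current_mastery and state.followup_events == 0:
        return (f"You have not yet revisited '{state.name}'; reviewing the "
                f"assessment questions would help you reach your target.")
    return None

state = ConceptState("recursion", current_mastery=50, target_goal=45, followup_events=0)
print(goal_prompt(state))        # fires: target (45) is below current mastery (50)
print(followup_reminder(state))  # None here; it fires only for an unmet, valid goal
```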

Conclusion

This study proposes an SRL internal and external model (SRL-IE) to illustrate the relationships among external SRL tools, internal SRL processes, internal feedback, and external feedback. The model assists in investigating SRL and in designing SRL tools and external feedback. Based on the model, this study designs an intelligent computer-assisted learning system whose SRL tools enable students to engage in SRL, including self-assessing their mastery levels (i.e., monitoring of learning performance), setting goals for their mastery levels in follow-up learning, conducting follow-up learning to achieve those goals (i.e., strategy implementation and monitoring), and reflecting on and regulating their learning (i.e., strategy outcome monitoring). The SRL tools also externalize students' internal SRL processes and feedback for investigation. The system records and questionnaire results revealed that students often have poor SRL processes and internal feedback, including poor self-assessment, inappropriate target goals, failure to conduct follow-up learning, and failure to achieve their goals. In future studies, this externalization of students' internal processes and feedback could be applied to develop adaptive regulation mechanisms that detect poor SRL processes and internal feedback and provide appropriate external feedback and interventions.

The system provides OLMs as a form of external self-regulation level feedback on self-assessment to assist students in SRL. Multiple OLMs are generated in each round of SRL from different viewpoints (perceived initial, actual initial, desired, perceived outcome, and actual outcome performance), enabling students to compare these OLMs and reflect on their SRL. The results revealed that the SRL tools and external feedback from the OLMs assisted most students in SRL, including monitoring their learning performance, goal-setting, strategy implementation and monitoring, and strategy outcome monitoring. However, some students did not respond effectively to the external feedback and need further support; how best to assist these students warrants further investigation.
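
The five viewpoints can be thought of as one record per concept per round, from which both the reflective comparisons and the poor-SRL patterns reported earlier reduce to simple differences. A minimal sketch under assumed field names and a 0-100 scale:

```python
# Minimal sketch of the five performance viewpoints recorded per concept in one
# round of SRL. Field names and the 0-100 scale are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class SRLRound:
    concept: str
    perceived_initial: int   # self-assessment before external feedback
    actual_initial: int      # initial system assessment
    desired: int             # target goal for follow-up learning
    perceived_outcome: int   # self-assessment after follow-up learning
    actual_outcome: int      # outcome system assessment

    def reflection_flags(self) -> list[str]:
        """Derive the poor-SRL patterns discussed above from viewpoint comparisons."""
        flags = []
        if self.perceived_initial - self.actual_initial > 20:  # assumed threshold
            flags.append("poor self-assessment (overestimation)")
        if self.desired <= self.actual_initial:
            flags.append("inappropriate target goal")
        if self.actual_outcome <= self.actual_initial:
            flags.append("no improvement from follow-up learning")
        if self.actual_outcome < self.desired:
            flags.append("goal not achieved")
        return flags

r = SRLRound("recursion", perceived_initial=80, actual_initial=50,
             desired=70, perceived_outcome=75, actual_outcome=60)
print(r.reflection_flags())
# ['poor self-assessment (overestimation)', 'goal not achieved']
```

Such records are one way the externalized processes mentioned above could feed an adaptive regulation mechanism in future work.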

Availability of data and materials

The data used in this study are confidential.

References

  • Aleven, V., Mclaren, B., Roll, I., & Koedinger, K. (2006). Toward meta-cognitive tutoring: A model of help seeking with a cognitive tutor. International Journal of Artificial Intelligence in Education, 16(2), 101–128.

  • Aleven, V., Roll, I., McLaren, B. M., & Koedinger, K. R. (2016). Help helps, but only so much: Research on help seeking with intelligent tutoring systems. International Journal of Artificial Intelligence in Education, 26, 205–223.

  • Azevedo, R., Greene, J. A., & Moos, D. C. (2007). The effect of a human agent’s external regulation upon college students’ hypermedia learning. Metacognition and learning, 2(2–3), 67–87.

  • Azevedo, R., & Hadwin, A. F. (2005). Scaffolding self-regulated learning and metacognition–Implications for the design of computer-based scaffolds. Instructional Science, 33(5), 367–379.

  • Azevedo, R., Johnson, A., Chauncey, A., & Burkett, C. (2010). Self-regulated learning with MetaTutor: Advancing the science of learning with MetaCognitive tools. In: New science of learning (pp. 225–247). Springer, New York, NY.

  • Brusilovskiy, P. L. (1994). The construction and application of student models in intelligent tutoring systems. Journal of Computer and Systems Sciences International, 32(1), 70–89.

  • Bull, S. (2004). Supporting Learning with Open Learner Models. In Proceedings of 4th Hellenic Conference with International Participation: Information and Communication Technologies in Education, Athens, Greece. Keynote.

  • Bull, S. (2016). Negotiated learner modelling to maintain today’s learner models. Research and Practice in Technology Enhanced Learning, 11(1), 1–29.

  • Bull, S., Ginon, B., Boscolo, C., & Johnson, M. (2016). Introduction of learning visualisations and metacognitive support in a persuadable open learner model. In Proceedings of the sixth international conference on learning analytics & knowledge (pp. 30–39). ACM.

  • Bull, S., & Kay, J. (2013). Open learner models as drivers for metacognitive processes. In International handbook of metacognition and learning technologies (pp. 349–365). New York: Springer.

  • Bull, S., & Kay, J. (2016). SMILI: a Framework for interfaces to learning data in open learner models, learning analytics and related fields. International Journal of Artificial Intelligence in Education, 26(1), 293–331.

  • Bull, S., Johnson, M. D., Alotaibi, M., Byrne, W., & Cierniak, G. (2013). Visualising multiple data sources in an independent open learner model. In H. C. Lane, K. Yacef, J. Mostow, & P. Pavlik (Eds.), Artificial intelligence in education (pp. 199–208). Berlin Heidelberg: Springer.

  • Bull, S., Pain, H., & Brna, P. (1995). Mr. Collins: A collaboratively constructed, inspectable student model for intelligent computer assisted language learning. Instructional Science, 23(1–3), 65–87.

  • Bull, S., Quigley, S., & Mabbott, A. (2006). Computer-based formative assessment to promote reflection and learner autonomy. Engineering Education, 1(1), 8–18.

  • Burns, E. C., Martin, A. J., & Collie, R. J. (2018). Adaptability, personal best (PB) goals setting, and gains in students’ academic outcomes: A longitudinal examination from a social cognitive perspective. Contemporary Educational Psychology, 53, 57–72.

  • Butler, D. L., & Winne, P. H. (1995). Feedback and self-regulated learning: A theoretical synthesis. Review of Educational Research, 65(3), 245–281.

  • Chambers, J. M., Cleveland, W. S., Kleiner, B., & Tukey, P. A. (2018). Graphical methods for data analysis. New York: CRC Press.

  • Chen, C. M. (2009). Personalized E-learning system with self-regulated learning assisted mechanisms for promoting learning performance. Expert Systems with Applications, 36(5), 8816–8829.

  • Chen, Z. H., Lu, H. D., & Chou, C. Y. (2019). Using game-based negotiation mechanism to enhance students’ goal setting and regulation. Computers & Education, 129, 71–81.

  • Chou, C. Y., Chih, W. C., Tseng, S. F. & Chen, Z. H. (2019). Simulatable open learner models of core competencies for setting goals for course performance. In Proceedings of the 27th International Conference on Computers in Education (ICCE 2019), pp. 93–95. Kenting, Taiwan.

  • Chou, C. Y., Huang, B. H., & Lin, C. J. (2011). Complementary machine intelligence and human intelligence in virtual teaching assistant for tutoring program tracing. Computers & Education, 57(4), 2303–2312.

  • Chou, C. Y., Lai, K. R., Chao, P. Y., Lan, C. H., & Chen, T. H. (2015). Negotiation based adaptive learning sequences: Combining adaptivity and adaptability. Computers & Education, 88, 215–226.

  • Chou, C. Y., Lai, K. R., Chao, P. Y., Tseng, S. F., & Liao, T. Y. (2018). A negotiation-based adaptive learning system for regulating help-seeking behaviors. Computers & Education, 126, 115–128.

  • Chou, C. Y., & Sun, P. F. (2013). An educational tool for visualizing students’ program tracing processes. Computer Applications in Engineering Education, 21(3), 432–438.

  • Chou, C. Y., Tseng, S. F., Chih, W. C., Chen, Z. H., Chao, P. Y., Lai, K. R., et al. (2017). Open student models of core competencies at the curriculum level: Using learning analytics for student reflection. IEEE Transactions on Emerging Topics in Computing, 5(1), 32–44.

  • Clark, R. C., & Mayer, R. E. (2008). Who's in Control? Guidelines for e-Learning Navigation. In E-learning and the science of instruction: Proven guidelines for consumers and designers of multimedia learning (pp. 309–338). San Francisco: Pfeiffer.

  • Conati, C., & Kardan, S. (2013). Student modeling: Supporting personalized instruction, from problem solving to exploratory open ended activities. AI Magazine, 34(3), 13–26.

  • Demmans, E. C., & Bull, S. (2015). Uncertainty representation in visualizations of learning analytics for learners: Current approaches and opportunities. IEEE Transactions on Learning Technologies, 8(3), 242–260.

  • Desmarais, M. C., & Baker, R. S. (2012). A review of recent advances in learner and skill modeling in intelligent learning environments. User Modeling and User-Adapted Interaction, 22(1–2), 9–38.

  • Devolder, A., van Braak, J., & Tondeur, J. (2012). Supporting self-regulated learning in computer-based learning environments: Systematic review of effects of scaffolding in the domain of science education. Journal of Computer Assisted Learning, 28(6), 557–573.

  • Dunning, D., Heath, C., & Suls, J. M. (2004). Flawed self-assessment implications for health, education, and the workplace. Psychological Science in the Public Interest, 5(3), 69–106.

  • Garcia, R., Falkner, K., & Vivian, R. (2018). Systematic literature review: Self-regulated learning strategies using e-learning tools for computer science. Computers & Education, 123, 150–163.

  • Griffin, T. D., Wiley, J., & Salas, C. R. (2013). Supporting effective self-regulated learning: The critical role of monitoring. In International handbook of metacognition and learning technologies (pp. 19–34). Springer, New York, NY.

  • Hadwin, A. F., Järvelä, S., & Miller, M. (2011). Self-regulated, co-regulated, and socially shared regulation of learning. In Handbook of self-regulation of learning and performance (pp. 65–84).

  • Harley, J. M., Taub, M., Azevedo, R., & Bouchet, F. (2017). Let’s set up some subgoals: Understanding human-pedagogical agent collaborations and their implications for learning and prompt and feedback compliance. IEEE Transactions on Learning Technologies, 11(1), 54–66.

  • Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.

  • Holt, P., Dubs, S., Jones, M., & Greer, J. (1994). The state of student modeling. In Student Modeling: the Key to Individualized Knowledge-Based Instruction. (Greer, J. & McCalla, G. I. Eds.) (pp. 3–35), Springer, Berlin.

  • Hwang, G. J. (2003). A conceptual map model for developing intelligent tutoring systems. Computers & Education, 40(3), 217–235.

  • Jansen, R. S., van Leeuwen, A., Janssen, J., Conijn, R., & Kester, L. (2020). Supporting learners’ self-regulated learning in Massive Open Online Courses. Computers & Education, 146, 103771.

  • Järvelä, S., Kirschner, P. A., Panadero, E., Malmberg, J., Phielix, C., Jaspers, J., et al. (2015). Enhancing socially shared regulation in collaborative learning groups: Designing for CSCL regulation tools. Educational Technology Research and Development, 63(1), 125–142.

  • Karabenick, S. A. (2011). Methodological and assessment issues in research on help seeking. In Handbook of Self-regulation of Learning and Performance, pp. 267–281.

  • Lai, C.-L., & Hwang, G.-J. (2016). A self-regulated flipped classroom approach to improving students’ learning performance in a mathematics course. Computers & Education, 100, 126–140.

  • Lajoie, S. P. (1993). Computer environments as cognitive tools for enhancing learning. In Computers as cognitive tools, pp. 261–288.

  • Lee, D., Watson, S. L., & Watson, W. R. (2019). Systematic literature review on self-regulated learning in massive open online courses. Australasian Journal of Educational Technology, 35(1), 28–41.

  • Lin, J. W., Lai, Y. C., Lai, Y. C., & Chang, L. C. (2016). Fostering self-regulated learning in a blended environment using group awareness and peer assistance as external scaffolds. Journal of Computer Assisted Learning, 32(1), 77–93.

  • Long, Y., & Aleven, V. (2017). Enhancing learning outcomes through self-regulated learning support with an Open Learner Model. User Modeling and User-Adapted Interaction, 27(1), 55–88.

  • Manlove, S., Lazonder, A. W., & de Jong, T. (2007). Software scaffolds to promote regulation during scientific inquiry learning. Metacognition and Learning, 2(2–3), 141–155.

  • Matcha, W., Gasevic, D., & Pardo, A. (2020). A systematic review of empirical studies on learning analytics dashboards: A self-regulated learning perspective. IEEE Transactions on Learning Technologies, 13(2), 226–245.

  • Mitrovic, A., & Martin, B. (2007). Evaluating the effect of open student models on self-assessment. International Journal of Artificial Intelligence in Education, 17(2), 121–144.

  • Müller, N. M., & Seufert, T. (2018). Effects of self-regulation prompts in hypermedia learning on learning performance and self-efficacy. Learning and Instruction, 58, 1–11.

  • Musso, M. F., Boekaerts, M., Segers, M., & Cascallar, E. C. (2019). Individual differences in basic cognitive processes and self-regulated learning: Their interaction effects on math performance. Learning and Individual Differences, 71, 58–70.

  • Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218.

  • Nota, L., Soresi, S., & Zimmerman, B. J. (2004). Self-regulation and academic achievement and resilience: A longitudinal study. International Journal of Educational Research, 41(3), 198–215.

  • Nussbaumer, A., Hillemann, E. C., Gütl, C., & Albert, D. (2015). A competence-based service for supporting self-regulated learning in virtual environments. Journal of Learning Analytics, 2(1), 101–133.

  • Pakdaman-Savoji, A., Nesbit, J., & Gajdamaschko, N. (2019). The conceptualisation of cognitive tools in learning and technology: A review. Australasian Journal of Educational Technology, 35(2), 1–24.

  • Panadero, E. (2017). A review of self-regulated learning: Six models and four directions for research. Frontiers in Psychology, 8, 422.

  • Panadero, E., Broadbent, J., Boud, D., & Lodge, J. M. (2019). Using formative assessment to influence self-and co-regulated learning: The role of evaluative judgement. European Journal of Psychology of Education, 34(3), 535–557.

  • Panadero, E., Klug, J., & Järvelä, S. (2016). Third wave of measurement in the self-regulated learning field: When measurement and intervention come hand in hand. Scandinavian Journal of Educational Research, 60(6), 723–735.

  • Pérez-Álvarez, R., Maldonado-Mahauad, J., & Pérez-Sanagustín, M. (2018, September). Tools to support self-regulated learning in online environments: literature review. In European Conference on Technology Enhanced Learning (pp. 16–30). Springer, Cham.

  • Perkins, D. N., Hancock, C., Hobbs, R., Martin, F., & Simmons, R. (1986). Conditions of learning in novice programmers. Journal of Educational Computing Research, 2(1), 37–55.

  • Pintrich, P. R., Smith, D. A., Garcia, T., & McKeachie, W. J. (1993). Reliability and predictive validity of the Motivated Strategies for Learning Questionnaire (MSLQ). Educational and Psychological Measurement, 53(3), 801–813.

  • Roll, I., Aleven, V., McLaren, B. M., & Koedinger, K. R. (2011a). Metacognitive practice makes perfect: Improving students’ self-assessment skills with an intelligent tutoring system. In International Conference on Artificial Intelligence in Education (pp. 288–295). Springer, Berlin, Heidelberg.

  • Roll, I., Aleven, V., McLaren, B. M., & Koedinger, K. R. (2011b). Improving students’ help-seeking skills using metacognitive feedback in an intelligent tutoring system. Learning and Instruction, 21(2), 267–280.

  • Roll, I., Wiese, E. S., Long, Y., Aleven, V., & Koedinger, K. R. (2014). Tutoring self-and co-regulation with intelligent tutoring systems to help students acquire better learning skills. Design Recommendations for Intelligent Tutoring Systems, 2, 169–182.

  • Rovers, S. F., Clarebout, G., Savelberg, H. H., de Bruin, A. B., & van Merriënboer, J. J. (2019). Granularity matters: Comparing different ways of measuring self-regulated learning. Metacognition and Learning, 14(1), 1–19.

  • Saary, M. J. (2008). Radar plots: a useful way for presenting multivariate health care data. Journal of Clinical Epidemiology, 61(4), 311–317.

  • Scheiter, K., & Gerjets, P. (2007). Learner control in hypermedia environments. Educational Psychology Review, 19(3), 285–307.

  • Schraw, G. (2007). The use of computer-based environments for understanding and improving self-regulation. Metacognition and Learning, 2(2), 169–176.

  • Self, J. (1988). Bypassing the intractable problem of student modeling. In International Conference of Intelligent Tutoring Systems, Montreal, Canada, pp. 18–24.

  • Shyr, W. J., & Chen, C. H. (2018). Designing a technology-enhanced flipped learning system to facilitate students’ self-regulation and performance. Journal of Computer Assisted Learning, 34(1), 53–62.

  • Stone, N. J. (2000). Exploring the relationship between calibration and self-regulated learning. Educational Psychology Review, 12(4), 437–475.

  • Su, J. M. (2020). A rule-based self-regulated learning assistance scheme to facilitate personalized learning with adaptive scaffoldings: A case study for learning computer software. Computer Applications in Engineering Education, 28(3), 536–555.

  • Vainio, V., & Sajaniemi, J. (2007). Factors in novice programmers’ poor tracing skills. ACM SIGCSE Bulletin, 39(3), 236–240.

  • Vandewaetere, M., & Clarebout, G. (2011). Can instruction as such affect learning? The case of learner control. Computers & Education, 57, 2322–2332.

  • Winne, P. H. (1996). A metacognitive view of individual differences in self-regulated learning. Learning and Individual Differences, 8(4), 327–353.

  • Winne, P. H. (2010). Improving measurements of self-regulated learning. Educational Psychologist, 45(4), 267–276.

  • Winne, P. H. (2011). A cognitive and metacognitive analysis of self-regulated learning. In Handbook of self-regulation of learning and performance (pp. 15–32).

  • Winne, P. H., & Hadwin, A. F. (2013). nStudy: Tracing and supporting self-regulated learning in the Internet. In International handbook of metacognition and learning technologies (pp. 293–308). Springer, New York, NY.

  • Winne, P. H., & Nesbit, J. C. (2009). Supporting Self-Regulated Learning with Cognitive Tools. In Handbook of metacognition in education, p. 259.

  • Winne, P. H., & Perry, N. E. (2000). Measuring self-regulated learning. In Handbook of self-regulation (pp. 531–566). Academic Press.

  • Wong, J., Baars, M., Davis, D., Van Der Zee, T., Houben, G. J., & Paas, F. (2019). Supporting self-regulated learning in online learning environments and MOOCs: A systematic review. International Journal of Human-Computer Interaction, 35(4–5), 356–373.

  • Woolf, B. P. (2008). Building Intelligent Interactive Tutors: Student-centered Strategies for Revolutionizing e-learning. Boston: Morgan Kaufmann Publishers.

  • Young, J. D. (1996). The effect of self-regulated learning strategies on performance in learner controlled computer-based instruction. Educational Technology Research and Development, 44(2), 17–27.

  • Zhou, M. (2012). From “Self-Tested” to “Self-Testing”: a review of self-assessment systems for learning. In S. Graf, F. Lin, & R. McGreal (Eds.), Intelligent and adaptive learning systems: Technology enhanced support for learners and teachers (pp. 119–132). Hershey, PA: Information Science Reference.

  • Zimmerman, B. J. (1990). Self-regulated learning and academic achievement: An overview. Educational Psychologist, 25(1), 3–17.

  • Zimmerman, B. J. (2001). Theories of self-regulated learning and academic achievement: An overview and analysis. Self-Regulated Learning and Academic Achievement: Theoretical Perspectives, 2, 1–37.

  • Zimmerman, B. J. (2002). Becoming a self-regulated learner: An overview. Theory into Practice, 41(2), 64–70.

  • Zimmerman, B. J., Bonner, S., & Kovach, R. (1996). Developing self-regulated learners: Beyond achievement to self-efficacy. New York: American Psychological Association.

  • Zimmerman, B. J., & Schunk, D. H. (1989). Self-regulated learning and academic achievement: Theory, research, and practice. New York: Springer.

Acknowledgements

Not applicable.

Funding

Not applicable.

Author information

Contributions

C-YC was responsible for writing the paper, project administration, conceptualization, methodology design, and investigation. N-BZ was responsible for software development, investigation, and formal analysis. Both authors read and approved the final manuscript.

Corresponding author

Correspondence to Chih-Yueh Chou.

Ethics declarations

Ethical approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the Center for Taiwan Academic Research Ethics Education.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Chou, CY., Zou, NB. An analysis of internal and external feedback in self-regulated learning activities mediated by self-regulated learning tools and open learner models. Int J Educ Technol High Educ 17, 55 (2020). https://doi.org/10.1186/s41239-020-00233-y
