
Promoting knowledge elaboration, socially shared regulation, and group performance in collaborative learning: an automated assessment and feedback approach based on knowledge graphs

Abstract

Online collaborative learning is implemented extensively in higher education. Nevertheless, it remains challenging to help learners achieve high-level group performance, knowledge elaboration, and socially shared regulation in online collaborative learning. To cope with these challenges, this study proposes and evaluates a novel automated assessment and feedback approach based on knowledge graph and artificial intelligence technologies. Following a quasi-experimental design, we assigned a total of 108 college students to two conditions: an experimental group that participated in online collaborative learning and received automated assessment and feedback from the tool, and a control group that participated in the same collaborative learning activities without automated assessment and feedback. Analyses of quantitative and qualitative data indicated that the automated assessment and feedback significantly promoted group performance, knowledge elaboration, and socially shared regulation of collaborative learning. The proposed knowledge graph-based automated assessment and feedback approach shows promise as a valuable tool for researchers and practitioners seeking to support online collaborative learning.

Introduction

Online collaborative learning is recognized as a valuable pedagogical approach in the field of education. In online collaborative learning, learners from different geographical areas come together to complete learning tasks, solve problems, and develop abilities (Reeves et al., 2004). If designed well, online collaborative learning could contribute to improved learning achievements (Wang et al., 2020), collaborative problem-solving skills (Zhang et al., 2022), and higher-order competencies (Fu & Hwang, 2018).

However, productive collaborative learning requires sophisticated conditions. It requires students to work together to elaborate knowledge in order to construct shared representations and solve shared problems (Baker, 2015; Le Bail et al., 2021). To achieve group outcomes, collaborative learning also requires students to go beyond regulating individual learning and to regulate group learning socially (Järvelä et al., 2016). Group performance, knowledge elaboration, and socially shared regulation are therefore major concerns in online collaborative learning (Ding et al., 2011; Järvelä et al., 2016; Nokes-Malach et al., 2015). Group performance can be defined as the amount and quality of group artefacts generated by student peers (Weldon & Weingart, 1993). Knowledge elaboration is conceptualized as the interconnection and integration of prior knowledge with new information (Weinstein & Mayer, 1986). Socially shared regulation is a group-level regulation process in which multiple learners jointly regulate their collaborative learning activity (Hadwin & Oshige, 2011; Järvelä et al., 2016); it includes formulating task perceptions, setting group-level goals and plans, generating strategies, monitoring group learning processes, and making adaptations (Hadwin et al., 2017). Socially shared regulation is considered important for productive collaborative learning (Järvelä et al., 2019).

However, learners often have difficulties achieving high-level group performance and knowledge elaboration, as well as carrying out productive socially shared regulation, in collaborative learning, which has led to intensified efforts to support these areas. Previous studies have adopted different strategies to promote group performance, knowledge elaboration, and socially shared regulation. For example, Chen et al. (2022) proposed a group incentive strategy in a collaborative problem-based system and found that the incentive significantly promoted group performance. Kalyuga (2009) proposed tailoring external instructional guidance to learners' knowledge level to enhance knowledge elaboration. Kielstra et al. (2022) experimented with a collaboration script and found it productive in facilitating socially shared regulation in collaborative learning.

Given the rise of learning analytics as a field interested in using digital trace data to examine and improve learning (Siemens & Baker, 2012), there is growing interest in using learning analytics to understand and support collaboration (Zheng et al., 2022). To contribute to this emerging area, this study proposes and evaluates a novel approach that uses knowledge graphs to support automated assessment and feedback. The knowledge graph-based automated assessment and feedback approach integrates artificial intelligence to automatically transform group discussions into knowledge graphs that characterize group understanding. The approach not only allows real-time automated feedback to each group, but also supports group comparison based on graph algorithms. To evaluate its efficacy, we carried out a quasi-experimental study in a university setting. The research questions guiding the study are as follows:

  1. Do students who learn with the knowledge graph-based assessment and feedback approach differ in group performance from those who learn with traditional online collaborative learning?

  2. Do students who learn with the knowledge graph-based assessment and feedback approach differ in knowledge elaboration from those who learn with traditional online collaborative learning?

  3. Do students who learn with the knowledge graph-based assessment and feedback approach differ in socially shared regulation from those who learn with traditional online collaborative learning?

Literature review

Automated assessment and feedback

Assessment and feedback in collaborative learning have received increasing attention in recent years. Assessment denotes a judgment about quality in relation to a criterion (van Aalst, 2013). As a major concern in collaborative learning, assessment plays a crucial role in fostering effective online collaboration (Macdonald, 2003) and improving learning performance (Strijbos, 2010).

Assessment approaches can be divided into two types: traditional or non-automated assessment, and concurrent automated assessment. Traditional assessment is implemented manually after learning has occurred. One example is to ask teachers to assess online group work following specific assessment criteria (Zhu, 2012). In another example, Peng et al. (2022) invited teaching assistants to manually assess group writing performance by scoring essays. However, traditional assessment suffers from problems with cost, time constraints, and scalability (Palomo-Duarte et al., 2014). With the rapid development of digital technologies, researchers have explored the potential of automatic assessment to mitigate these problems. For example, Palomo-Duarte et al. (2014) developed a tool in the Python programming language to automatically assess wiki contributions in online collaborative learning processes. Ramachandran et al. (2017) adopted text mining and natural language processing techniques to automatically assess the quality of peer reviews. Recent work in learning analytics has created rich opportunities to apply data science methods to automatically derive insights from rich learning data for assessment purposes (Wise & Vytasek, 2017; Zheng et al., 2022).

Powered by these automatic assessment tools, feedback can be delivered to learners or instructors in real time through digital technologies (Deeva et al., 2021). Prior work found that automated feedback contributed to improving learning performance (Keuning et al., 2018) and reducing bias (Hahn et al., 2021). Nevertheless, most studies provided automated feedback based on predefined answers and behavioural data (Deeva et al., 2021). In cases where automated feedback is provided on emergent goals in collaborative dialogues, the analytic information often lacks specificity and interpretability (Chen et al., 2018). Assessing students' emergent discourse in online collaborative learning remains under-explored. Knowledge graphs, applied in various domains to model complex knowledge structures (Sakr et al., 2021), could be leveraged to automatically assess online collaborative discourse and thereby provide immediate, targeted feedback to student groups. In this study, we chose to focus on online discussion because it is a dominant way of supporting collaborative learning in higher education, ranging from online/hybrid courses to massive open online courses (Liu et al., 2023; Zou et al., 2021). Text-based online discussion can support learner participation (Liu et al., 2023), knowledge building (Lei & Chan, 2018), and critical thinking (Yang et al., 2022a, 2022b) in tertiary education. Text-based discussion environments pose fewer barriers to access while allowing learners to express their ideas in a deliberate and precise manner. The following sections describe the proposed knowledge graph approach, followed by the research methods and results.

Applications of knowledge graphs in education

Knowledge graphs have been widely used across fields in academia and industry. A knowledge graph is composed of entities and relationships that convey knowledge of the real world in graph form (Hogan et al., 2021). Knowledge graphs offer many advantages, including organizing information, demonstrating knowledge (Shaw, 2019), and representing complex relationships (Hao et al., 2021). Knowledge graphs constructed from large text corpora (such as Wikipedia) are used to support semantic search, automatic computer reasoning, link prediction, and graph-based machine learning (Sakr et al., 2021). Knowledge graphs are broadly applied in various domains, including natural language understanding, recommendation systems, question answering, search engines, image classification, and text generation (Ji et al., 2021).

In the field of education, knowledge graphs have been applied in course management, question answering, and cognitive assessment. For example, Aliyu et al. (2020) developed a knowledge graph system for university course allocation and management. Yang et al. (2021) developed an intelligent question answering system based on knowledge graphs for high school students. Zhong et al. (2015) constructed and utilized ontology-based knowledge graphs to assess junior school students' knowledge structures. Ho et al. (2018) developed an online assessment tool based on knowledge graphs to automatically assess first-year medical students' understanding of a topic. To our knowledge, however, no study has used knowledge graphs for automated assessment and feedback in collaborative learning contexts. To close this research gap, this study proposes a novel knowledge graph-based approach to automatically assess student learning in online collaboration.

A knowledge graph-based automated assessment and feedback approach

This study proposed a knowledge graph-based automated assessment and feedback approach to facilitate online collaborative learning. This approach, developed on a pre-existing online platform for collaborative learning, includes three steps. The first step is to collect transcripts of student online discussions taking place on the online learning platform (see Fig. 1).

Fig. 1 The online collaborative learning platform

Using the discussion data, the second step constructs knowledge graphs using artificial intelligence and natural language processing. The construction of knowledge graphs involves two sub-steps. The first is to identify entities through deep neural networks (DNNs) and keyword matching. As a deep learning technique, DNNs have demonstrated superior abilities in extracting high-level features (Sze et al., 2017). The DNN model selected in this study is BERT-BiLSTM-CRF, since it achieved the highest performance among the models compared (see Table 1). After entities are identified, the second sub-step is to extract their relationships from the predefined target knowledge graph, which comprises calibrated entities and relationships. The knowledge graph of each group can then be constructed automatically (see Fig. 2). Similar to earlier work (Resendes et al., 2015), knowledge that comes from the target knowledge graph but has not been activated by the group ("inactive knowledge") is displayed in grey in Fig. 2.
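To make the pipeline concrete, the sketch below illustrates the two sub-steps under simplifying assumptions: entities are recognized by keyword matching against the target graph (standing in for the trained BERT-BiLSTM-CRF model), and relations between activated entities are copied from the predefined target knowledge graph. All entity and relation names are illustrative, not the study's actual ontology.

```python
# Minimal sketch of knowledge graph construction from discussion turns.
# Keyword matching stands in for the trained BERT-BiLSTM-CRF entity model.
import networkx as nx

# Predefined target knowledge graph with calibrated entities and relations
# (illustrative content for an image-processing task).
target = nx.DiGraph()
target.add_edge("image", "layer", relation="composed-of")
target.add_edge("layer", "layer mask", relation="has-part")
target.add_edge("selection", "magic wand", relation="uses-tool")

def extract_entities(turn: str, entities) -> set:
    """Keyword-matching fallback for entity recognition."""
    text = turn.lower()
    return {e for e in entities if e in text}

def build_group_graph(turns, target: nx.DiGraph) -> nx.DiGraph:
    """Construct a group's knowledge graph from its discussion turns:
    activated entities plus the target-graph relations between them."""
    activated = set()
    for turn in turns:
        activated |= extract_entities(turn, target.nodes)
    return target.subgraph(activated).copy()

turns = ["We should add a layer mask to the layer first.",
         "Try the magic wand to refine the selection."]
group_graph = build_group_graph(turns, target)
print(group_graph.nodes, group_graph.edges(data=True))
```

Nodes of the target graph that never appear in `group_graph` correspond to the inactive knowledge rendered in grey in Fig. 2.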

Table 1 The results of entity recognition for different deep neural networks
Fig. 2 The knowledge graph with intragroup assessment and feedback

The third step of this approach is to automatically generate intragroup and intergroup assessment results and provide personalized feedback. The graph-based assessment includes a suite of analytics developed in prior studies: the level of collaborative knowledge building (Zheng, 2017; formula 1), knowledge depth (Zheng, 2017; formula 2), knowledge convergence level (Zheng, 2017; formula 3), the alignment of activated knowledge range (Zheng et al., 2020; Tversky, 1977; formula 4), the alignment of collaborative knowledge building level (Zheng et al., 2020; formula 5), and the closeness centrality of collaborative knowledge building (Oshima et al., 2012; formula 6). For details of these measures, please refer to the cited studies. The knowledge graph approach allows all of these indicators to be calculated automatically before they are presented to learners.

In addition, personalized feedback was provided based on predefined rules and thresholds. More specifically, group-level feedback on collaborative knowledge building, knowledge depth, and knowledge convergence was based on whether the number of activated knowledge nodes was lower than, equal to, or higher than one-third of the target knowledge graph. Group-level feedback on the alignment of activated knowledge range, the alignment of collaborative knowledge building level, and the closeness centrality of collaborative knowledge building was based on whether the assessment results were lower than 0.3, between 0.3 and 0.8, or higher than 0.8. Intergroup feedback was based on whether a group's values on the six assessment indicators were lower than, equal to, or higher than the averages across all groups. The predefined rules and thresholds for intragroup and intergroup feedback were set based on Cohen (1988), Zheng et al. (2023), and Lu et al. (2017); a full account is beyond the scope of this paper. Figure 2 shows a screenshot of a group-level knowledge graph with intragroup assessment results and feedback. Figure 3 shows the activated public knowledge graph with intragroup assessment and feedback; the activated public knowledge graph denotes the knowledge activated by all members of one group. Figure 4 demonstrates the intergroup assessment results and feedback.

$$A = \sum\limits_{i = 1}^{N} \sum \frac{F \cdot \log (d + 2) \cdot r}{\log \left( n \cdot (D - d + 2) \right)} \tag{1}$$

$$D = \sum\limits_{i = 1}^{N} W_{i} L_{i} \tag{2}$$

$$C = C(G_{1} \cap G_{2} \cap G_{3} \cap G_{4}) = \sum\limits_{i = 1}^{N} A_{i} \tag{3}$$

$$S = \frac{f(A \cap B)}{f(A \cap B) + 0.5 \cdot f(A - B) + 0.5 \cdot f(B - A)} \tag{4}$$

$$G = \frac{(R + W) - (D + Y + F)}{Z + W} \tag{5}$$

$$C_{c} = \frac{1}{\sum\nolimits_{i} d_{ci}} \tag{6}$$
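As a rough illustration, the sketch below implements three of the indicators under assumed interpretations of the symbols: formula (2) with W_i read as the weight and L_i as the level of activated knowledge node i, formula (4) as Tversky's (1977) ratio-model similarity with f as set cardinality, and formula (6) as the reciprocal of summed shortest-path distances from node c. The exact operationalizations are defined in the cited studies.

```python
# Hedged sketch of formulas (2), (4), and (6); symbol interpretations are
# assumptions based on the cited sources, not the authors' exact definitions.
import networkx as nx

def knowledge_depth(weights, levels):
    """Formula (2): D = sum_i W_i * L_i over activated knowledge nodes."""
    return sum(w * l for w, l in zip(weights, levels))

def alignment(a: set, b: set) -> float:
    """Formula (4): Tversky (1977) ratio-model similarity with alpha=beta=0.5,
    reading f as set cardinality."""
    inter = len(a & b)
    return inter / (inter + 0.5 * len(a - b) + 0.5 * len(b - a))

def closeness(graph: nx.Graph, node) -> float:
    """Formula (6): reciprocal of summed shortest-path distances from `node`."""
    dist = nx.shortest_path_length(graph, source=node)
    total = sum(d for n, d in dist.items() if n != node)
    return 1.0 / total if total else 0.0

print(knowledge_depth([1, 2, 1], [2, 3, 1]))        # 9
print(alignment({"layer", "mask"}, {"layer"}))      # ~0.667
g = nx.Graph([("image", "layer"), ("layer", "layer mask")])
print(round(closeness(g, "layer"), 2))              # distances 1 + 1 -> 0.5
```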
Fig. 3 The activated public knowledge graph with intragroup assessment and feedback

Fig. 4 Intergroup assessment results and feedback. Left: graphs showing the collaborative learning indicators of all groups. Right: summary statistics of these indicators across all groups, as well as personalized feedback for the current student group based on the intergroup assessment
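A minimal sketch of the threshold-based feedback rules described above follows. Only the cut-offs (one-third of the target knowledge; 0.3 and 0.8 for alignment-type indicators; the cross-group average for intergroup feedback) come from the text; the message wording is invented for illustration.

```python
# Sketch of the predefined feedback rules; messages are illustrative.

def coverage_feedback(activated_nodes: int, target_nodes: int) -> str:
    """Band feedback on collaborative knowledge building, knowledge depth,
    and knowledge convergence against one-third of the target graph."""
    ratio = activated_nodes / target_nodes
    if ratio < 1 / 3:
        return "Less than one-third of the target knowledge is activated: explore more concepts."
    if ratio > 1 / 3:
        return "More than one-third of the target knowledge is activated: deepen your elaboration."
    return "About one-third of the target knowledge is activated: broaden and deepen the discussion."

def alignment_feedback(value: float) -> str:
    """Band feedback on the alignment and closeness-centrality indicators
    at the 0.3 / 0.8 thresholds."""
    if value < 0.3:
        return "Low alignment: reconcile your group's understanding before moving on."
    if value <= 0.8:
        return "Moderate alignment: keep building on each other's ideas."
    return "High alignment: consider extending the discussion to new concepts."

def intergroup_feedback(group_value: float, class_average: float) -> str:
    """Compare a group's indicator value with the average across all groups."""
    if group_value < class_average:
        return "Below the class average on this indicator: review the feedback above."
    if group_value > class_average:
        return "Above the class average on this indicator: well done."
    return "At the class average on this indicator."

print(coverage_feedback(12, 45))
print(alignment_feedback(0.45))
```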

Methods

To answer our research questions, we designed a quasi-experimental study to evaluate the knowledge graph-based assessment and feedback approach.

Participants

Participants were recruited from a comprehensive public university in China. In total, 108 college students voluntarily participated in this study. The average age was 21 years (SD = 1.81). There were nine males and 99 females, consistent with the student demographics of this university. They majored in literature, education, law, economics, computer science, art, management, and communication. Informed consent was obtained, and participants could withdraw at any time.

Participants were divided into 18 experimental groups and 18 control groups, with each group comprising three students who had not collaborated before the study. There was no significant difference between the experimental and control groups in gender (χ² = 1.09, p = 0.29), age (χ² = 11.21, p = 0.19), major (χ² = 69.26, p = 0.14), or prior knowledge (t = 0.970, p = 0.339).

Procedure

Figure 5 presents the experimental procedure. First, all participants completed a 20-min pre-test on image processing, the topic of the collaborative learning task. Second, a research assistant introduced the experiment, demonstrated the feedback tool, and gave participants a chance to ask questions before using the tool. Third, all participants engaged in online collaborative learning activities for 3 h. The collaborative learning task, identical for all participants, was to process images in order to make a poster; the main difference was that the experimental groups had access to the knowledge graph-based assessment and feedback tool, while the control groups did not. After accomplishing the task, each group submitted their poster as the group artefact. Fourth, a 20-min post-test was administered to all participants. Finally, participants from each experimental group were interviewed online for 30 min about their perceptions of the feedback tool.

Fig. 5
figure 5

The quasi-experimental research procedure

Instruments

The instruments of this study included pre- and post-tests developed by a teacher with more than 10 years of experience teaching the topic. The pre-test comprised ten single-choice questions, five true–false questions, five multiple-choice questions, and two short-answer questions. The post-test consisted of ten single-choice questions, ten true–false questions, ten multiple-choice questions, and two short-answer questions. A perfect score on both the pre-test and the post-test was 100.

In addition, a semi-structured interview protocol was developed for student interviews after the experiment. Sample questions included: “Do you think the knowledge graph-based automated assessment and feedback approach contributes to refining the group product? Why?”, “Do you think the knowledge graph-based automated assessment and feedback approach contributes to integrating prior knowledge with new information? Why?”, and “Do you think the knowledge graph-based automated assessment and feedback approach was helpful for monitoring collaborative learning processes? Why?”.

Data collection and analysis methods

This study collected 108 pretests, 108 posttests, online discussion transcripts of 36 groups, 36 group artefacts, and interview recordings of 18 groups. To answer our research questions, we conducted a range of data analyses including content analysis, computer-assisted knowledge graph analysis, lag sequential analysis, and statistical analysis.

First, content analysis was adopted to measure group performance by evaluating the short-answer questions of the pretest and posttest as well as the group artefacts. Two raters evaluated the pretest and posttest independently; interrater reliability measured by Cohen's kappa was 0.85 for the pretest and 0.92 for the posttest. Group performance was measured by posttest scores and group artefact scores. The group artefacts were independently evaluated by two raters according to an evaluation rubric (see Table 2), with an interrater reliability (Kappa) of 0.87, implying high reliability.
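For reference, interrater reliability of the kind reported throughout this section can be computed with scikit-learn; the rating labels below are illustrative, not the study's data.

```python
# Cohen's kappa for two raters' judgments of the same artefacts.
from sklearn.metrics import cohen_kappa_score

rater1 = ["high", "high", "mid", "low", "mid", "high"]
rater2 = ["high", "mid",  "mid", "low", "mid", "high"]
print(round(cohen_kappa_score(rater1, rater2), 2))
```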

Table 2 The evaluation criteria for group artefacts

Second, computer-assisted knowledge graph analysis was employed to measure knowledge elaboration by analysing the online discussion transcripts of the 36 groups. This method has been validated in previous studies (Zheng et al., 2022) and involves three steps: first, construct the target knowledge graph based on the collaborative learning objectives and textbooks; second, segment the online discussion transcripts of each group; third, generate knowledge graphs automatically and measure the knowledge elaboration level using the weighted path length of activated knowledge (Zheng et al., 2015). Two research assistants independently coded the online discussion transcripts of the 36 groups, achieving a Kappa value of 0.96, indicating high reliability.
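The exact elaboration measure is defined in Zheng et al. (2015); as one plausible reading, the sketch below computes the "weighted path length of activated knowledge" as the total edge weight of the activated subgraph of the target knowledge graph. The graph content and weights are illustrative assumptions.

```python
# Assumed reading of the weighted-path-length elaboration measure:
# total edge weight over the activated subgraph of the target graph.
import networkx as nx

def weighted_path_length(target: nx.Graph, activated: set) -> float:
    sub = target.subgraph(activated)
    return sum(w for _, _, w in sub.edges(data="weight", default=1.0))

g = nx.Graph()
g.add_edge("image", "layer", weight=2.0)
g.add_edge("layer", "layer mask", weight=1.5)
print(weighted_path_length(g, {"image", "layer", "layer mask"}))  # 3.5
```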

Third, content analysis and lag sequential analysis were combined to analyse socially shared regulation (SSR) behaviours. The coding scheme for SSR included six dimensions: orientating goals (OG), making plans (MP), monitoring and controlling (MC), enacting strategies (ES), evaluating and reflecting (ER), and adapting metacognition (AM). Two coders independently coded the online discussion transcripts according to this scheme, validated in a previous study (Zheng et al., 2023), achieving a Kappa value of 0.95. Based on the coding results, lag sequential analysis was performed using the GSEQ 5.1 software (Quera et al., 2007) to examine the transitional patterns among different SSR behaviours.
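The core computation behind lag sequential analysis can be sketched as follows: build a lag-1 transition count matrix over the six SSR codes and compute adjusted residuals, with values above 1.96 marking significant transitions (Bakeman & Quera, 2011). The short behaviour sequence is invented for illustration; the study itself used GSEQ 5.1.

```python
# Lag-1 sequential analysis: transition counts and adjusted residuals.
import numpy as np

CODES = ["OG", "MP", "ES", "MC", "ER", "AM"]

def transition_counts(sequence):
    """Count lag-1 transitions between consecutive coded behaviours."""
    idx = {c: i for i, c in enumerate(CODES)}
    m = np.zeros((len(CODES), len(CODES)))
    for a, b in zip(sequence, sequence[1:]):
        m[idx[a], idx[b]] += 1
    return m

def adjusted_residuals(m):
    """Adjusted residuals: (observed - expected) / sqrt(variance)."""
    n = m.sum()
    row = m.sum(axis=1, keepdims=True)
    col = m.sum(axis=0, keepdims=True)
    expected = row @ col / n
    var = expected * (1 - row / n) * (1 - col / n)
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(var > 0, (m - expected) / np.sqrt(var), 0.0)

seq = ["OG", "OG", "MP", "ES", "ES", "MC", "ER", "AM", "OG", "MP"]
z = adjusted_residuals(transition_counts(seq))
print(np.round(z, 2))  # cells > 1.96 would be significant transitions
```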

Finally, the semi-structured interview recordings were transcribed and independently analysed by two coders following the thematic analysis method of Braun and Clarke (2006), which comprises six steps: (1) immersing oneself in the data, (2) generating initial codes, (3) collating codes into themes, (4) reviewing themes, (5) defining themes, and (6) writing the report. Deductive coding was adopted in the second step. In line with our research questions, we consolidated the codes from all 18 groups into three themes: improving group performance, promoting knowledge elaboration, and facilitating SSR. The interrater reliability of the interview analysis (Kappa) was 0.96, indicating high reliability.

Results

Analysis of group performance

Group performance was computed from the scores of the posttest and group artefacts. To examine the impact of the knowledge graph-based assessment and feedback approach on group performance, a one-way analysis of covariance (ANCOVA) was conducted. Before the ANCOVA, the Kolmogorov–Smirnov test was used to check whether the posttest and group artefact scores were normally distributed; the results confirmed normality (p > 0.05). Homogeneity of variance was not violated for the posttest (F = 0.062, p = 0.805) or group artefacts (F = 2.328, p = 0.136), and homogeneity of regression slopes was not violated for either the posttest scores (F = 1.919, p = 0.176) or the group artefact scores (F = 2.914, p = 0.098). Therefore, ANCOVA could be performed to examine differences in posttest and group artefact scores, with the proposed approach as the independent variable, the pretest as a covariate, and the posttest and group artefact scores as dependent variables.
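An analysis pipeline of this shape can be reproduced in Python, as sketched below with scipy and statsmodels; the data are simulated and the column names are illustrative, not the study's dataset.

```python
# Sketch of the reported pipeline: normality check, homogeneity of variance,
# homogeneity of regression slopes, then ANCOVA with pretest as covariate.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": ["exp"] * 18 + ["ctrl"] * 18,
    "pretest": rng.normal(60, 10, 36),
})
df["posttest"] = (df["pretest"] * 0.5
                  + np.where(df["group"] == "exp", 12, 0)
                  + rng.normal(0, 5, 36))

# Normality of the outcome (z-scored, against a standard normal).
print(stats.kstest(stats.zscore(df["posttest"]), "norm"))
# Homogeneity of variance across conditions (Levene's test).
print(stats.levene(*[g["posttest"] for _, g in df.groupby("group")]))
# Homogeneity of regression slopes: interaction should be non-significant.
print(anova_lm(smf.ols("posttest ~ group * pretest", df).fit(), typ=3))
# ANCOVA: condition effect on posttest, controlling for pretest.
print(anova_lm(smf.ols("posttest ~ group + pretest", df).fit(), typ=3))
```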

Table 3 shows the ANCOVA results for the posttest and group artefact scores. There were significant differences between the experimental and control groups in posttest scores (F = 7.45, p = 0.010) and group artefact scores (F = 12.87, p = 0.001), favouring the experimental group. Moreover, the proposed approach had a large effect size on both outcomes (η² > 0.138) based on Cohen's (1988) criteria. Therefore, the knowledge graph-based assessment and feedback approach had a significantly positive impact on group performance.

Table 3 ANCOVA results for the group performance of the two groups

Analysis of knowledge elaboration

To examine the impact of the proposed approach on knowledge elaboration, another ANCOVA was conducted. Beforehand, the Kolmogorov–Smirnov test indicated that all datasets were normally distributed (p > 0.05). Homogeneity of variance was not violated for knowledge elaboration (F = 2.836, p = 0.101), nor was the assumption of homogeneity of regression slopes (F = 0.727, p = 0.400). Hence, ANCOVA could be employed to examine the differences in knowledge elaboration.

Table 4 shows the ANCOVA results for knowledge elaboration in the two groups. A significant difference in knowledge elaboration (F = 35.06, p < 0.001) was found between the experimental and control groups, again favouring the experimental group, with a large effect size of η² = 0.51. Therefore, the knowledge graph-based assessment and feedback approach had a significant and positive impact on knowledge elaboration.

Table 4 ANCOVA results for knowledge elaboration in the two groups

Analysis of socially shared regulation behavioural patterns

To analyse SSR behavioural patterns, lag sequential analysis was employed in this study. Table 5 shows the descriptive statistical results for SSR behaviours. The adjusted residuals were calculated through the GSEQ 5.1 software to examine the SSR behavioural transition. If the adjusted residual was greater than 1.96, it indicated that the behavioural transition was significant (Bakeman & Quera, 2011). As shown in Table 6, there were 10 significant SSR transitional sequences in the experimental groups. In Fig. 6, OG → OG shows that participants orientated goals continually; OG → MP demonstrates that the participants made plans after setting goals; MP → MP denotes that participants made plans continually; MP → ES represents that participants enacted strategies after making plans; ES → ES demonstrates that participants enacted strategies repetitively; ES → MC represents that participants monitored and controlled after they enacted strategies; MC → MC shows that participants monitored and controlled continually; MC → ER shows that participants evaluated and reflected after monitoring and controlling; ER → ER demonstrates that participants evaluated and reflected continually; ER → AM reveals that participants adapted metacognition after they evaluated and reflected.

Table 5 The descriptive statistics results of SSR behaviours
Table 6 Adjusted residuals of the experimental group
Fig. 6 The SSR behavioural transition diagram of the experimental groups

In contrast, only five significant SSR behavioural sequences occurred in the control groups (see Table 7 and Fig. 7), all of them repetitions: OG → OG (orientating goals repetitively), MP → MP (making plans repetitively), ES → ES (enacting strategies repetitively), MC → MC (monitoring and controlling repetitively), and ER → ER (evaluating and reflecting repetitively). This result implies that the control groups showed less varied sequential transitions of SSR behaviours and engaged less in joint regulation during online collaborative learning. Furthermore, Table 8 shows the five significant SSR behavioural transition sequences that occurred only in the experimental groups. These sequences are crucial because they can promote productive collaborative learning.

Table 7 Adjusted residuals of the control group
Fig. 7 The SSR behavioural transition diagram of the control groups

Table 8 Significant behaviour transition sequences that only appeared in the experimental group

Student perceptions of the knowledge graph approach

Table 9 shows key themes from the student interviews. The interviewees believed that the knowledge graph-based assessment and feedback approach contributed to improving group performance, promoting knowledge elaboration, and facilitating socially shared regulation. With regard to improving group performance, most interviewees found the proposed approach useful for refining group artefacts (94%) and acquiring new knowledge and skills (88%). For example, one interviewee stated, "Our group often browsed the assessment results and we revised and refined our group artefacts based on the assessment and feedback results." Another interviewee said, "Our group really liked the knowledge graph-based assessment and feedback approach because it is useful for acquiring new knowledge and skills about image processing."

Table 9 Themes of learners’ perceptions of the knowledge graph-based assessment and feedback approach

With respect to promoting knowledge elaboration, most interviewees stated that the proposed approach stimulated the integration of prior knowledge with new information (88%), activation of more knowledge (100%), and generation of new ideas (88%). As one interviewee told us, “Our group could activate more knowledge and yield new ideas based on the assessment and feedback results.”

As for SSR behaviours, most interviewees believed that the proposed approach contributed to monitoring collaborative learning processes (88%), promoting social interaction (83%), and adapting goals and strategies (88%). For instance, one interviewee indicated, "Our group often jointly regulated our learning goals, plans, and strategies based on the assessment and feedback results. It was really useful and helpful for us."

Discussion and conclusions

Discussing main findings

This study proposes and examines a novel knowledge graph-based assessment and feedback approach through an empirical study. The results indicated that the proposed approach had substantial impacts on group performance, knowledge elaboration, and SSR in collaboration. The qualitative interview results further demonstrated ways in which students found the proposed approach useful for their learning. This study demonstrates the promise of utilizing automated assessment and feedback that is powered by knowledge graphs and artificial intelligence to promote learning performance in online collaborative learning contexts.

The present study reveals that the knowledge graph-based assessment and feedback approach can significantly improve group performance. There are three possible reasons for this result. First, the proposed approach adopted artificial intelligence and natural language processing technologies to automatically present assessment results that students found useful for their ongoing collaboration; this is consistent with prior findings that artificial intelligence-enhanced assessment has positive impacts on learning performance. Second, the approach presents intergroup assessment results, which may assist learners in reflecting on, evaluating, and refining their own artefacts to improve group performance. This result corroborates a study by Peng et al. (2022), which revealed that intergroup information can improve group performance. Third, the approach provides immediate and personalized feedback to each group; previous studies found that immediate feedback can significantly promote learning performance (Al Hakim et al., 2022; Zheng et al., 2022).

This study reveals that the knowledge graph-based assessment and feedback approach can significantly promote knowledge elaboration. Several reasons might explain this result. First, the approach automatically presents knowledge graphs that clearly distinguish activated from inactive knowledge, helping learners identify concepts already activated in their discourse as well as concepts and relations in the target knowledge graph that remain to be discussed. This targeted, content-based feedback could have helped students steer their collaborative discourse to cover more content areas and thereby improve their performance and knowledge elaboration. This is consistent with the definition of knowledge elaboration as the integration of prior knowledge with new information (Weinstein & Mayer, 1986). Second, the approach automatically calculates and presents intragroup and intergroup assessments of collaborative knowledge building, which helps learners reflect on the differences in knowledge elaboration between their own group and other groups; previous studies likewise revealed that reflective assessment can promote knowledge advancement during collaborative learning (Lei & Chan, 2018). Third, the approach provides real-time formative feedback based on the assessment results, which can promote knowledge elaboration. This result is in line with Gleaves and Walker (2013), who found that formative feedback could promote knowledge elaboration to a large extent.

The present study indicates that the knowledge graph-based assessment and feedback approach can facilitate socially shared regulation, for three possible reasons. First, the approach automatically displays activated and inactive knowledge, which serves as valuable information for raising group awareness; group awareness information contributes to socially shared regulation (Schnaubert & Bodemer, 2019). Second, the approach presents both intragroup and intergroup assessment results, which helps each group jointly regulate their goals, plans, and strategies. Third, the approach provides real-time feedback based on the assessment results, and feedback is documented to contribute to socially shared regulation (De Backer et al., 2016).

Implications

The present study has several pedagogical, technological, and practical implications for teachers, developers, and practitioners. First, the study has demonstrated the utility of the knowledge graph-based assessment and feedback approach in promoting group performance, knowledge elaboration, and socially shared regulation in online collaborative learning. This approach addresses important needs in contemporary education, including assessment for learning (Schellekens et al., 2021). Furthermore, our graph-based tool makes immediate feedback possible, which is also an important technological advancement that can promote the development of this field. Although immediate feedback might compete for working memory resources (Fu & Li, 2021) or hinder learning from errors (Mathan & Koedinger, 2005), immediate feedback could be practically effective in improving learning performance (Al Hakim et al., 2022; Van Ginkel et al., 2020). Therefore, teachers who aspire to support online collaborative learning and promote students’ self-assessment could adopt this approach in their teaching.

Second, this study found that knowledge graph-based assessment and feedback shows promise in improving learning performance. Automated assessment based on AI and knowledge graphs utilizes dramatically different methods and technologies from traditional assessments; when coupled with pedagogical interventions (Wise & Vytasek, 2017), these new assessment tools could promote transformative change in education (Swiecki et al., 2022). The knowledge graph-based assessment and feedback approach not only provides substantial insights into the evolution of collaborative knowledge building, but also provides indicators of problematic patterns in collaborative learning, enabling teachers and practitioners to intervene based on analytic results.

Third, researchers and practitioners may face challenges in using learning analytics to inform educational decision-making and to take action (Wise et al., 2016). Wise and Vytasek (2017) proposed three principles, coordination, comparison, and customization, to guide learning analytics implementation design. Furthermore, it is suggested that learning analytics design should align with assessment systems so that learners can meaningfully make sense of the analytics and potential disruptions are minimized.

Limitations and future studies

This study has several limitations. First, the sample size was small and the duration relatively short due to the COVID-19 pandemic; future research on the proposed approach should increase both. Second, the approach was examined in a particular task environment, which limits the generality of the findings: the experiment used a single collaborative learning task and may not generalize to other content areas or more complex collaborative learning contexts. Also, the proposed approach relied on online discussion data, since participants interacted mainly through text-based discussion; future research should extend the approach to collaborative learning settings involving multimodal interactions. Third, this study examined the approach only in a lab context, so caution is needed when generalizing the results to other contexts; future research should examine the approach in real-world classrooms. Finally, the study collected only post-test and interview data after students used the approach. Future work could examine detailed user interaction data collected while students actively use the tool during collaborative learning activities, and compare the impacts of the graph-based automated assessment and feedback approach with a general prompt approach on group performance, knowledge elaboration, and socially shared regulation. Despite these limitations, the study motivates future efforts to develop novel graph-based approaches to learning analytics and assessment.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

References


Funding

This study is funded by the International Joint Research Project of Huiyan International College, Faculty of Education, Beijing Normal University (ICER202101).

Author information


Contributions

ZLQ: Conceptualization; Methodology; Writing original draft; Review & editing; Project administration. LML: Data curation; Formal analysis; Writing original draft. CBD: Writing; Revising; Review & editing. FYC: Data curation; Formal analysis. All authors read and approved the manuscript.

Corresponding author

Correspondence to Lanqin Zheng.

Ethics declarations

Ethical approval and consent to participate

All procedures performed in studies involving human participants are in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Informed consent was obtained from all individual participants included in the study.

Competing interests

The authors declare that there is no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.



Cite this article

Zheng, L., Long, M., Chen, B. et al. Promoting knowledge elaboration, socially shared regulation, and group performance in collaborative learning: an automated assessment and feedback approach based on knowledge graphs. Int J Educ Technol High Educ 20, 46 (2023). https://doi.org/10.1186/s41239-023-00415-4

