- Research article
- Open access
Social comparison feedback in online teacher training and its impact on asynchronous collaboration
International Journal of Educational Technology in Higher Education volume 21, Article number: 55 (2024)
Abstract
In online teacher training, asynchronous collaboration faces several challenges, such as limited learner engagement and low interaction quality, which hinder its overall effectiveness. According to social comparison theory, providing social comparison feedback to teacher-learners in online asynchronous collaborative learning offers potential benefits but also carries risks. While social comparison has been explored in diverse fields, its role in education remains unclear. In this study, we recruited 95 primary and secondary school teachers participating in an online training course. Using a randomized controlled trial design, we provided the experimental group with social comparison feedback, while the control group received only self-referential feedback. We used epistemic network analysis, lag sequential analysis, and social network analysis to identify the impact of social comparison feedback on group-regulated focus, group interactive behaviors, and social network structures. The results showed that social comparison feedback significantly enhanced teachers’ online asynchronous collaborative learning.
Introduction
There is a global emphasis on enhancing the professional competencies of in-service teachers (Depaepe & König, 2018). The rise of online teacher training, driven by advancements in information technology, has been recognized for its effectiveness in helping teachers acquire new skills and improve their professional practices (Kalinowski et al., 2020). The transition from face-to-face training to online platforms has significantly elevated the quality of teacher training (Ma et al., 2022a, 2022b). Unlike traditional face-to-face training, online training offers flexibility, allowing educators to learn at their own pace and on their own schedule (Prestridge, 2016). This flexibility is crucial for overcoming time and geographical constraints and is further enhanced by the availability of online professional learning communities (Kalinowski et al., 2020), which increase accessibility and foster deeper engagement in professional development.
A key component of online training is asynchronous interaction (Frey & Alman, 2003), typically manifested as online asynchronous collaboration. This mode of interaction, which does not require real-time communication, provides flexibility that enhances engagement, collaboration, and inspiration within online learning environments (Burns et al., 2022). It also improves learners’ engagement, participation, and higher-order thinking skills (Bailey et al., 2020; Li et al., 2018). However, the delayed nature of asynchronous communication can lead to extended response times, potentially reducing training efficiency (Kim et al., 2015). Therefore, leveraging learning analytics to help educators understand learners’ behaviors and performance, as well as to provide timely and adaptive feedback and support, is crucial for optimizing online learning (Banihashem et al., 2024).
Social comparison, as defined by Festinger (1954), involves individuals assessing their abilities and behaviors against those of others. It is a common method for self-evaluation and self-assessment (Fam et al., 2020), with significant implications in fields such as psychology and medicine (Baldwin & Mussweiler, 2018; Corcoran et al., 2020). In educational contexts, students often engage in subconscious social comparisons, evaluating aspects such as academic performance, physical appearance, and athletic skills (Fleur et al., 2023). These comparisons can offer insights into peer perceptions, thereby motivating learners (Chen & Chen, 2023) and encouraging them to match their peers’ achievements, leading to improved cognitive engagement and learning outcomes (Wambsganss et al., 2022). While social comparison can be beneficial in psychology and health (Appel et al., 2015; Han et al., 2020; Verduyn et al., 2020), it can also induce anxiety, potentially hindering learning (Bai et al., 2021).
Despite the recognized benefits and potential pitfalls of social comparison in educational contexts, significant research gaps remain. First, previous studies have primarily focused on the impact of social comparison on individuals (Bai et al., 2021; Delava et al., 2017; Kollöffel & Jong, 2016), with no known research on the design and implementation of social comparison feedback in online collaborative environments. Second, prior research has focused mainly on learning performance, neglecting the effects of social comparison on group dynamics, such as group-regulated learning and social network structures. This oversight has limited our comprehensive understanding of social comparison feedback. Finally, to our knowledge, only two studies have explored the differences between social comparison and self-reference (Delava et al., 2017; Kollöffel & Jong, 2016), and both focused on individuals without investigating these differences in online collaborative environments.
Therefore, this study aims to address these research gaps by integrating social comparison feedback into online asynchronous collaboration. We used a randomized controlled trial design involving in-service teachers to investigate the effects of this integration, examining how social comparison feedback influences group-regulated focus, interactive group behavior, and social network structure among teacher-learners participating in online collaborative learning environments.
Literature review
Asynchronous collaborative learning in online teacher training
Online teacher training has become a crucial means of professional development (Ma et al., 2022a, 2022b), offering benefits such as flexibility in schedule and location, access to diverse learning resources, and the ability of learners to progress at their own pace. Additionally, a significant advantage of online platforms is their capability to integrate various instructional supports tailored to specific courses and learner needs, which facilitates effective interactions (Gao et al., 2024). A key feature of this mode of training is online asynchronous collaboration (Ma et al., 2023), where teacher-learners work together to understand course materials and co-construct new knowledge (Liu et al., 2021). This collaboration typically takes place through interactive boards, forums, and assignment review areas, accommodating participants who cannot meet in person for various reasons.
Asynchronous collaborative learning provides more time for reflection and deliberation compared to synchronous interactions (Lin & Sun, 2024). This extended time fosters deeper thinking and more thoughtful communication among learners. Previous research indicates that asynchronous collaborative learning enhances problem-solving abilities (Hendarwati et al., 2021), boosts critical thinking (Oh et al., 2018), and promotes group knowledge construction (Yang et al., 2020).
However, online asynchronous collaboration presents several challenges. Merely participating in online collaborative learning does not guarantee effective learning or successful completion of collaborative tasks (Chejara et al., 2024). First, the lack of continuity often extends the timeline (Zhou et al., 2015), leading to disjointed discussions and potential deviations from key training content (Guan et al., 2006). Second, the delays inherent in online asynchronous collaboration can make interactions among teacher-learners less timely, fostering a sense of isolation and reducing motivation to learn (Kaufmann & Vallade, 2020).
Therefore, to enhance the effectiveness of online teacher training, designing effective learning support strategies is essential. Feedback, as a vital component of asynchronous learning environments, can offer valuable insights in the absence of real-time interactions, helping learners identify issues that may be difficult to recognize on their own (Cui & Schunn, 2024; Shea & Bidjerano, 2010).
Social comparison feedback
Social comparison, a concept introduced by Festinger (1954), involves individuals evaluating their own opinions and abilities by comparing themselves with others. This comparison serves as a mechanism for accurate self-evaluation (Festinger, 1954), enabling individuals to gauge their abilities, behaviors, and performance levels by contrasting them with those of their peers in similar situations. Compared to absolutist approaches, social comparison is a more efficient method of information processing, enabling self-assessment with reduced cognitive effort (Mussweiler & Epstude, 2009) and addressing the needs for self-assessment, self-improvement, and self-enhancement (Dijkstra et al., 2008), even when objective standards are present.
Social comparison can be categorized into three types based on various theoretical models and perspectives: upward comparison, parallel comparison, and downward comparison. Upward social comparison occurs when individuals compare themselves with those at a higher level, which can help them identify their shortcomings (Park et al., 2021). Parallel social comparison involves comparing oneself with others of similar abilities or opinions (Festinger, 1954), while downward social comparison involves comparing oneself with those in less favorable situations, aiming to maintain a positive self-image and enhance satisfaction, self-esteem, and self-evaluation (Kong et al., 2021).
Social comparison feedback is believed to aid learners in learning from their peers and identifying learning gaps. Neugebauer et al. (2016) noted that learners prone to social comparison are better at extracting useful information from high-performing peers. Previous studies also suggest that this feedback can improve online learning performance (Joksimovic et al., 2015) and self-efficacy (Flener-Lovit et al., 2020). By providing social comparison feedback, we aimed to offer guidance and encourage active engagement in online asynchronous collaborative learning. However, concerns have also been raised about anxiety induced by social comparison and its impact on effective learning (Ray et al., 2017).
Therefore, in this study, we examined the influence of social comparison feedback on teacher-learners in online asynchronous collaboration.
Factors influencing online collaborative learning
Regulated focus, which is essential for the success of online learning, is positively associated with collaborative learning performance (Carter et al., 2020; Zheng et al., 2019). Research by Rogat and Adams-Wiggins (2015) on seventh-grade students working in groups on science tasks revealed that effective regulation significantly enhanced team performance. Given the dynamic nature of collaborative learning, traditional tools often fall short in understanding and facilitating this process. Epistemic Network Analysis (ENA), which conceptualizes learning as the development of a cognitive framework that integrates knowledge and competencies, proves effective in tracking the dynamics of these regulatory processes (Lu et al., 2023; Shaffer et al., 2016). The growing popularity of ENA in research underscores its value for investigating regulated focus in group learning.
Group behavioral sequences are also vital in collaborative learning. Studies by Yang (2023) and Tlili et al. (2023) have examined the sequential progression of group behaviors, and techniques such as lag sequential analysis (LSA) can be employed to identify significant relationships between behaviors and uncover patterns (Berk et al., 1997).
Social interaction is crucial for online collaborative learning. Social network analysis, as utilized by researchers like Xie et al. (2018), plays a key role in understanding changes within learning community structures. This method employs nodes to represent entities and edges to denote relationships, thereby revealing participants’ roles in collaborative activities. Calvani et al. (2010) identified three essential metrics for assessing effective network interactions: participation, cohesion, and synthesis. Additionally, Zheng et al. (2021) recommended indicators such as per capita postings and in-degree centrality for analyzing small social networks consisting of three to five individuals.
Aims and research questions
Grounded in the four dimensions of the learning engagement framework proposed by Fredricks et al. (2004), this study focused on the application of social comparison feedback within online asynchronous collaborative learning groups for teacher-learners. The primary objective was to evaluate the overall impact of this feedback on collaborative learning processes from multiple perspectives. Thus, the study aimed to answer the following research questions:
- How does social comparison feedback influence the regulated focus within learning groups?

- How does social comparison feedback influence collaborative interactive behavior within these learning groups?

- How does social comparison feedback influence the social network structure within these groups?
Method
Research context
The Project-Based Learning (PBL) Design in Action course examined in this study is a free public-service program aimed at enhancing the theoretical knowledge and practical skills of in-service teachers in PBL design. By engaging teachers in PBL design projects, the course helps them develop the ability to design effective PBL curricula. The course is structured around four key topics: PBL topic selection, setting learning objectives and plans, supporting the learning process, and assessing learning outcomes. Hosted on the EPBL platform, this online program is available to primary and secondary school teachers throughout China.
Before the course began, an online live session was organized for all participants. This session provided detailed demonstrations of platform navigation and features, along with an overview of the course structure. During this session, participants were randomly assigned to small groups and given access to a dedicated discussion area on the EPBL platform. This discussion area was also intended for collaborative activities once the course officially started, promoting group formation and mutual understanding.
Before the course officially began, an introductory phase was initiated where participants interacted briefly within their groups in the online discussion area. They introduced themselves and voluntarily shared information such as their names, regions, schools, subjects taught, and grade levels. This preliminary interaction was crucial for several reasons: it helped participants familiarize themselves with the discussion area, established initial connections among group members, and ensured that everyone was comfortable using the platform before course content was delivered. During this introductory phase, two teaching assistants were available online to assist with any technical issues.
Once the initial interactions were completed, the course proceeded through the four key topics. Each topic included multiple learning videos, materials, and collaborative activities. Participants engaged in these online collaborative activities within the discussion areas of their small groups and submitted assignments related to each topic. All activities were conducted online. To ensure effective progress in collaborative learning, insights from scholars such as Kawai (2006) and Biesenbach-Lucas (2004) on asynchronous collaborative learning were incorporated. The approach was tailored to the unique characteristics of teacher-learners and the course, with appropriate support mechanisms implemented to promote interdependence among group members. The course facilitated the gradual completion of PBL design projects through guided group collaboration on various topics. Upon completing the tasks for the four key topics, each group produced a comprehensive PBL design project.
Figure 1 illustrates the collaboration interface among different groups within specific topic discussion areas. Participants had the option to use filter buttons to view comments from other participants before posting their own comments. The posting box included a range of editing tools—such as text, graphics, and tables—that allowed participants to refine their comments. After engaging in online collaboration, groups were required to submit assignments related to each topic. These assignments were reviewed by teaching assistants, who then provided feedback.
Participants
At the beginning of the experiment, a randomized controlled trial (RCT) design was employed: participants were randomly assigned to either the experimental or the control condition. Random assignment minimized selection bias, providing a robust basis for causal inference (Gegenfurtner & Ebner, 2019). RCTs are well established in educational research, as demonstrated in studies such as those by Merk et al. (2020) and Schenke et al. (2020).
Initially, 109 teachers registered for the course: 53 in the experimental group and 56 in the control group. Because our study focused on asynchronous collaborative learning, we included only the 95 participants who completed at least one discussion thread: 49 in the experimental group and 46 in the control group (i.e., 4 and 10 registrants, respectively, never posted and were excluded).
Individual variables such as educational level and gender could influence online peer feedback (Noroozi et al., 2024). To ensure comparability between the experimental and control groups, we conducted descriptive statistics on participants’ gender, teaching experience, and grade level taught. Table 1 presents these characteristics, indicating that the distributions were roughly equivalent between the two groups. Specifically, the gender distribution was similar in both groups, with approximately 35% male and 65% female participants. In terms of teaching experience, the majority had 1–5 years of experience (about 46%), followed by those with over 16 years (about 19%), and pre-service teachers comprised about 16%. Regarding the grade level taught, approximately half of the participants were primary school teachers, while the remaining participants taught at middle school or high school levels, or were pre-service teachers. Most participants had prior experience with online learning and were enthusiastic about engaging in collaborative online activities and interacting with both experts and peers.
Construction of social comparison feedback
Mussweiler (2003) identified three stages in the process of social comparison: standard selection, comparison with the target, and evaluation. In this study, social comparison was defined as the process in which participants are presented with comparative information about their peers with respect to specific standards, information that is valuable for their self-evaluation.
During online collaboration, each group engaged in discussions and submitted assignments on various topics. These assignments were evaluated and ranked by teaching assistants in descending order. Groups were categorized into three levels: the top one-third were labeled “excellent,” the middle one-third “qualified,” and the bottom one-third “developing”. Participants could choose a comparison category for their group, enabling various types of social comparisons: upward, downward, or parallel. For example, comparing a “developing” group with an “excellent” group constituted an upward comparison, comparing an “excellent” group with another “excellent” group constituted a parallel comparison, and comparing a “qualified” group with a “developing” group constituted a downward comparison.
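As a minimal illustration of this ranking step, the sketch below assigns the three labels by splitting a descending score ranking into thirds; the group names, scores, and function are hypothetical, not the platform’s actual code.

```python
# Illustrative sketch (not the platform's actual implementation): assign
# "excellent" / "qualified" / "developing" labels by ranking assignment
# scores into terciles. Group names and scores are hypothetical.
def categorize_groups(scores: dict[str, float]) -> dict[str, str]:
    """Rank groups by assignment score and split them into thirds."""
    ranked = sorted(scores, key=scores.get, reverse=True)  # descending rank
    third = len(ranked) / 3
    labels = {}
    for rank, group in enumerate(ranked):
        if rank < third:
            labels[group] = "excellent"
        elif rank < 2 * third:
            labels[group] = "qualified"
        else:
            labels[group] = "developing"
    return labels

print(categorize_groups({"G1": 92, "G2": 85, "G3": 77,
                         "G4": 70, "G5": 64, "G6": 58}))
# {'G1': 'excellent', 'G2': 'excellent', 'G3': 'qualified', ...}
```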
Social comparison feedback was provided to participants through a combination of visual representations and textual descriptions. The tone of the feedback became increasingly positive as the ranking of the participant’s group rose. Participants received specific comparisons between their own group and other selected groups across four dimensions: behavior, cognition, interaction, and emotion. This process is illustrated in Fig. 2.
- Behavioral Dimension: Data in this dimension were derived from behavioral indicators observed on the learning platform, such as study duration, the number of quizzes taken, and the quality of quizzes submitted by each group. Participants received a comparison of these behavioral indicators between their own group and the selected group.

- Cognitive Dimension: Data for this dimension were sourced from interactional texts in the discussion areas of each group. High-frequency words and topics from the selected group were identified using Term Frequency-Inverse Document Frequency (TF-IDF) analysis and Latent Dirichlet Allocation (LDA) topic modeling, with the optimal number of topics determined by topic perplexity (Blei, 2000); a code sketch of this pipeline follows this list. Participants were then shown the high-frequency words and topics from the selected group.

- Interactive Dimension: Data in this dimension were gathered from interactional information in each group’s discussion area. Four types of indicators of small social network interactions were analyzed: interaction density, interaction centrality, interaction cohesion, and interaction balance (Zheng et al., 2021). Participants were presented with social network diagrams and comparisons of these indicators between their own group and the selected group.

- Emotional Dimension: Data for this dimension came from self-reports by group members, using Artino’s adapted academic emotion questionnaire (Artino & Jones, 2012) to assess emotional states during collaborative learning. Participants received comparisons of emotional states between their own group and the selected group.
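The following is a minimal sketch of the cognitive-dimension pipeline referenced above, using scikit-learn; the example posts, vocabulary size, and candidate topic range are hypothetical assumptions, and the study’s actual implementation may differ.

```python
# Hedged sketch of the cognitive-dimension text pipeline: TF-IDF for
# high-weight words, LDA with perplexity-based selection of topic count.
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [  # hypothetical discussion posts from one group
    "we should pick a driving question close to students' daily life",
    "the learning objectives need observable, assessable verbs",
    "rubrics help us assess both process and product of the project",
]

# High-frequency (high-weight) words via TF-IDF.
tfidf = TfidfVectorizer(max_features=1000)
weights = tfidf.fit_transform(posts).sum(axis=0).A1
top_words = sorted(zip(tfidf.get_feature_names_out(), weights),
                   key=lambda w: w[1], reverse=True)[:10]
print(top_words)

# LDA topic modeling on raw counts; pick n_components by perplexity.
counts = CountVectorizer().fit_transform(posts)
best_k, best_perplexity = None, float("inf")
for k in range(2, 6):  # candidate topic counts (illustrative range)
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(counts)
    p = lda.perplexity(counts)  # lower perplexity = better fit
    if p < best_perplexity:
        best_k, best_perplexity = k, p
print("chosen number of topics:", best_k)
```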
After the initial comparison with the selected group, each participant had the option to compare their group’s performance with that of other groups.
Study design
The course involved each group designing a feasible project-based learning (PBL) plan. The design process was segmented into a series of tasks related to each topic. Groups were required to collaboratively discuss each topic and submit assignments related to the PBL plan for that topic. At the end of each topic, both the experimental and control groups received feedback. The experimental group received social comparison feedback, while the control group received self-referential feedback. The fundamental differences between these types of feedback are outlined in Table 2.
As noted previously, the comparison between the experimental and control groups focused on comparison options, overview guidelines, the learning information of the subject group, and the learning information of the other groups. The specific differences are outlined in Table 3:
- Comparison Options: The experimental group received feedback that included comparisons with other groups, allowing participants to dynamically choose among excellent, qualified, or developing groups. In contrast, the control group received feedback solely on their own learning process relative to the completed topic, without any comparative information.

- Overview Guidelines: The experimental group was provided with social comparison guidelines that varied based on the group type chosen for comparison (excellent, qualified, or developing). For instance, if a participant’s group was ranked as developing and they chose an excellent group for comparison, they received upward social comparison feedback, such as “Room for improvement!” along with related guidelines.

- Learning Information: Regarding the behavioral, cognitive, interaction, and emotional dimensions, the experimental group could compare its performance with any type of target group (excellent, qualified, or developing) and received comparative guidelines and visuals. Conversely, the control group could only access information about its own performance in these dimensions.
This study employed a randomized controlled trial to explore the impact of social comparison feedback on online collaboration. The experimental group (n = 49, including 11 small groups) received social comparison feedback, while the control group (n = 46, including 11 small groups) received self-referential feedback. The experiment lasted for 16 days, and the study design is illustrated in Fig. 3.
Stage 1 involved preparation, including the development of course content and the recruitment of participants.
Stage 2 encompassed the randomization process, during which 95 learners were randomly assigned to 22 groups, each consisting of 4–5 members. Of these groups, 11 were designated as the experimental group and 11 as the control group.
Stage 3 saw participants engaging in online asynchronous collaboration. The course comprised four topics, each introduced sequentially according to the course schedule, with each topic lasting 4 days. During the open period for each topic, learners were required to study the corresponding course materials and participate in online asynchronous collaborative learning. They could choose to learn and discuss at any time within the open period for each topic. Before the deadline for each topic, groups were required to submit their assignments. Following submission, they received feedback related to the topic, with the experimental group receiving social comparison feedback and the control group receiving self-referential feedback.
Coding scheme
Learning regulation focus coding scheme
In this study, the comment data were analyzed using the online collaborative learning regulation focus coding scheme developed by Zhang et al. (2021), which has demonstrated strong validity and reliability. This coding scheme classified the groups’ regulation focus into three main dimensions: task, emotion, and organization, each comprising several sub-dimensions.
Comments in the task dimension were further categorized into task understanding (Task), content monitoring (ConMo), and process monitoring (ProMo). Task referred to the extent of understanding of the learning tasks, ConMo involved tracking the accuracy and relevance of the discussed content, and ProMo pertained to overseeing the learning methods and strategies used.
Comments in the emotion dimension were classified into positive emotion (Pos), negative emotion (Neg), and joking (Joke). Pos denoted expressions of approval or appreciation for the content posted by others, while Neg referred to expressions of disapproval or dissatisfaction. Joke indicated emotional content unrelated to the learning material, such as humor or unrelated banter.
Comments in the organization dimension included comments related to organizing (Org), which indicated activities focused on structuring or arranging the learning process within the group.
Interaction behavior coding scheme
Based on the interaction analysis model of Gunawardena et al. (1997) and subsequent research by scholars such as Hou and Wu (2011), Wang et al. (2020) developed a verb-driven interaction behavior coding scheme. This scheme prioritized learners’ communication and interaction rather than focusing solely on constructing advanced social knowledge. In this study, the scheme was adapted to reflect the nuances of online asynchronous collaboration, as shown in Table 4. The adapted coding scheme was used to analyze and code the groups’ discussion behaviors.
Knowledge construction level coding scheme
In this study, we utilized the interaction analysis model of Gunawardena et al. (1997) and its modified versions, as this model is widely used for content analysis of online discussions. The model classified a group’s knowledge construction into five stages, reflecting increasing depth of knowledge construction and interaction quality: sharing/comparing of information, discovery of dissonance and inconsistency, negotiation of meaning/co-construction of knowledge, testing and modification of the proposed synthesis, and agreement/application of newly constructed meaning. This model was used to analyze and quantify the interaction data to assess the degree of knowledge construction achieved by the groups.
Data analysis
To investigate the influence of social comparison feedback on asynchronous collaboration, we analyzed three dimensions: regulation focus, interaction behavior, and social network structure. These dimensions provided a comprehensive understanding of how social comparison feedback influenced the learning process, group interaction dynamics, and social relationships among learners engaged in online collaboration.
Regarding Research Question 1, which concerned the regulation of the learning process within groups, we invited two experts in project-based learning to code all discussion posts using the group regulation focus coding scheme described in Sect. ”Learning regulation focus coding scheme”. Before coding, the experts received relevant training and independently coded 15% of the selected posts. Their coding demonstrated a high level of reliability, with a kappa coefficient of 0.85 (Fleiss, 2003). The experts discussed any discrepancies to ensure consistency and then independently coded the participants’ discussion data. We subsequently conducted Epistemic Network Analysis (ENA) on the experimental and control groups based on the coded data. ENA used coded qualitative data from interactions, such as discussions, to construct networks. Specifically, ENA identified, quantified, and visualized the connection structures between coded nodes by analyzing their co-occurrence (Shaffer et al., 2016). In the visualized graph, each node represented a predefined cognitive element, and the edges indicated the co-occurrence between these elements. The thickness of the edges reflected the relative strength of the connection between two nodes. The network was mapped onto a two-dimensional space, with the X and Y axes helping to distinguish the connection patterns between nodes. Nodes that were close together indicated frequent co-occurrence in similar interaction contexts. Additionally, ENA created subtraction networks to identify the most significant differences between the two networks. By comparing the subtraction networks of the experimental and control groups within the epistemic network space, we determined the influence of social comparison feedback on group-regulated focus.
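To make the co-occurrence step concrete, here is a simplified sketch of how an epistemic network’s edge weights can be accumulated from coded stanzas; the stanza segmentation, codes, and normalization shown are illustrative assumptions, and full ENA additionally applies sphere normalization and a means rotation (Shaffer et al., 2016).

```python
# Simplified illustration of the co-occurrence counting behind ENA.
from itertools import combinations
from collections import Counter

# Each stanza = the set of codes observed in one conversational window
# (hypothetical data using the study's code names).
stanzas = [
    {"Task", "ConMo"}, {"ConMo", "ProMo"}, {"Task", "ConMo", "ProMo"},
    {"Pos", "ConMo"}, {"Org", "Task"},
]

# Count how often each pair of codes co-occurs within a stanza.
pairs = Counter()
for stanza in stanzas:
    for a, b in combinations(sorted(stanza), 2):
        pairs[(a, b)] += 1

# Normalize to a unit vector so groups that simply talk more
# do not get inflated edge weights.
total = sum(v * v for v in pairs.values()) ** 0.5
network = {pair: count / total for pair, count in pairs.items()}
print(network)  # edge weights of the (un-rotated) epistemic network
```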
To analyze group interaction behavior, we used the interaction behavior coding scheme adapted by Wang et al. (2020) (see Sect. ”Interaction behavior coding scheme”) and invited two experts to code all interaction behavior data of the learners. Before coding, they underwent relevant training and independently coded 15% of the selected posts. Their coding demonstrated a high level of reliability, with a kappa coefficient of 0.94 (Fleiss, 2003). After reaching a consensus through discussion, the experts independently coded the participants’ interaction behavior data. We then conducted lag sequential analysis (LSA) using GSEQ 5.0 software. LSA calculated the probabilities of transitions between different behaviors to identify patterns and dependencies, generating transition diagrams that displayed the likelihood of moving from one behavior to another. If the Z-score for a particular behavior sequence exceeded 1.96, the sequence was statistically significant (p < 0.05). These significant behavior sequences revealed the behavior patterns of the experimental and control groups during online asynchronous collaborative learning. Through these analyses, we gained a better understanding of the impact of social comparison feedback on group interaction behavior.
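The study used GSEQ 5.0 for this computation; purely for illustration, the sketch below reimplements the core lag-1 adjusted-residual calculation on a hypothetical behavior stream, flagging transitions with Z > 1.96.

```python
# Lag sequential analysis sketch: lag-1 transition counts and adjusted
# residuals (Z-scores). The behavior stream is hypothetical.
import numpy as np

behaviors = ["ask", "respond", "offer", "negotiate", "offer", "support",
             "ask", "respond", "offer", "negotiate"]
codes = sorted(set(behaviors))
idx = {c: i for i, c in enumerate(codes)}

# Count lag-1 transitions (behavior at t followed by behavior at t+1).
obs = np.zeros((len(codes), len(codes)))
for a, b in zip(behaviors, behaviors[1:]):
    obs[idx[a], idx[b]] += 1

n = obs.sum()
row = obs.sum(axis=1, keepdims=True)
col = obs.sum(axis=0, keepdims=True)
expected = row @ col / n
# Adjusted residuals; |Z| > 1.96 marks a sequence as significant (p < .05).
z = (obs - expected) / np.sqrt(expected * (1 - row / n) * (1 - col / n))
for a, b in zip(*np.where(z > 1.96)):
    print(f"{codes[a]} -> {codes[b]}: Z = {z[a, b]:.2f}")
```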
Regarding social network structure, group members interacted during online asynchronous collaboration by posting and replying to messages. By considering all members of the group as network nodes, with posts representing “out-degree” connections to other group members and replies representing “in-degree” connections for specific participants, directed social networks were formed within each group. We analyzed the social networks of both the experimental and control groups using the dimensions of interaction intensity, interaction balance, and interaction quality to measure the effects of social comparison feedback on group social network relationships.
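As a sketch of this network construction, the snippet below builds a directed graph from a hypothetical reply log with networkx and reads off density and degree measures; the study’s specific indicators follow Zheng et al. (2021) and may be computed differently from the networkx defaults.

```python
# Hedged sketch: posts/replies become directed edges, from which
# density and degree measures are derived. The reply log is hypothetical.
import networkx as nx

# (replier, author-replied-to) pairs within one small group
replies = [("A", "B"), ("B", "A"), ("C", "A"), ("A", "C"), ("D", "A")]

g = nx.DiGraph()
g.add_nodes_from("ABCD")   # all group members, even silent ones
g.add_edges_from(replies)  # reply = directed edge replier -> author

print("density:", nx.density(g))            # interaction intensity
print("in-degree:", dict(g.in_degree()))    # popularity of each member
print("out-degree:", dict(g.out_degree()))  # posting activity
```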
Results
Group-regulated focus
To further explore the differences in group-regulated focus between the experimental and control groups, we plotted the subtracted epistemic network shown in Fig. 4. Each participant in the experimental group was represented by a red dot, while each participant in the control group was represented by a blue dot. The red and blue squares represented the average centroids of the experimental and control groups, respectively, and the dashed boxes around the squares indicated the 95% confidence intervals. Nodes in the ENA network represented each code (e.g., Task, ConMo), and the connections between nodes represented associations. The thickness of the lines between two nodes indicated the strength of the connection.
The epistemic network generated from the coding data exhibited explanatory strengths of 18.2% along the x-axis and 24.6% along the y-axis. Given the non-normal distribution of the data, we conducted a Mann–Whitney U test on the projection points, which yielded significant results. Specifically, we observed a significant difference between the two groups along the x-axis (U = 720.00, p = 0.00, r = 0.44), while no significant difference was detected along the y-axis (U = 1311.00, p = 0.90).
For the task dimension (codes: Task, ConMo, and ProMo), where Task represented task understanding, ConMo represented content monitoring, and ProMo represented process monitoring, the distribution of task-related codes along the x-axis was concentrated in the experimental group. This indicated that task-related aspects were more emphasized in this group. In the experimental group, the connections between Task (task understanding), ConMo (content monitoring), and ProMo (process monitoring) illustrated the collaborative problem-solving process: learners engaged in discussions about content monitoring (ConMo) based on their task understanding (Task), iteratively refining and developing their cognitive goals. Additionally, the strong connection between ConMo and ProMo in the experimental group (ConMo—ProMo of the experimental group: 0.44) suggested that participants in this group engaged in more process monitoring (ProMo) during content monitoring (ConMo) compared to the control group (ConMo—ProMo of the control group: 0.38).
Regarding the emotion dimension (codes: Neg, Pos, and Joke), where Neg represented negative emotions, Pos represented positive emotions, and Joke represented joking, two main observations emerged. First, in terms of node positions, negative emotions (Neg) and joking (Joke) were more prominent in the experimental group on the x-axis, while positive emotions (Pos) were more prominent in the control group. Second, in terms of connection strength, the experimental group’s negative emotions (Neg) and joking (Joke) had weaker connections with other nodes, indicated by faint blue connection lines. In contrast, the control group’s positive emotions (Pos) had stronger connections with content monitoring (ConMo): Pos-Task (experimental group: 0.05; control group: 0.12), Pos-ConMo (experimental group: 0.35; control group: 0.50), Pos-ProMo (experimental group: 0.07; control group: 0.18). This indicated that the control group frequently exhibited positive emotions (Pos) during content discussions (ConMo).
Regarding the organization dimension (code: Org), the results from the Mann–Whitney U test indicated differences between the experimental and control groups along the x-axis, with the organization code being closer to the experimental group on this axis. Additionally, the organization code in the experimental group showed a closer connection to task understanding (Task): Org-Task (experimental group: 0.12; control group: 0.07). This suggested that the experimental group likely organized their content more comprehensively than the control group, with this more structured approach to content organization contributing to a better understanding of the task.
Group interaction behaviors
The behavior sequence transition diagrams for the experimental and control groups, based on the residual tables of behavior sequences, are shown in Fig. 5. In Fig. 5, nodes represented different types of interaction behaviors, while the connecting lines between the nodes indicated significant behavior sequences. The arrows illustrated the order of transition between two behaviors, and the numbers above the arrows (Z-scores) indicated the significance level of the behavior sequences. To visually emphasize these differences, the thickness of the arrows was proportional to the significance level, with higher values represented by thicker arrows.
Three notable differences between the experimental and control groups emerged in relation to the behavioral sequence transitions.
(a) In the yellow section (Line 1): The experimental group exhibited a distinct offer → negotiate sequence, indicating that participants in this group engaged in more negotiation behavior after receiving information. This behavior reflected deeper consideration of the collaborative content, including negotiations over differing viewpoints or alternatives. Conversely, the control group displayed a pronounced offer → support sequence, where support denoted agreement with others’ opinions. This suggested that participants in the control group were more inclined to endorse their peers’ opinions.
Additionally, the ask → respond sequence was observed in both groups but was more pronounced in the experimental group (Z1 = 12.52 > Z2 = 5.33). This indicated that participants in the experimental group were more likely to receive responses to their questions. However, it is important to acknowledge that lag sequential analysis does not directly test for significant differences in link strength across conditions. Therefore, this inference about stronger connections in the experimental group should be interpreted with caution.
(b) In the blue section (Line 2): The experimental group demonstrated greater engagement in monitoring behaviors within their summaries. Monitoring actions signified control over the group collaboration process and reflection on collaborative content. Two significant behavioral sequences, monitor → conclude and conclude → monitor, were observed in the experimental group. In contrast, monitoring in the control group appeared as isolated actions without connections to other behaviors.
(c) In the green section (Line 3): Support reflected approval of other group members’ opinions. The experimental group displayed sequences of support → monitor and support → add, as well as an inner loop of add → add, indicating that after a group member expressed support, others in the group would monitor or supplement their behavior, and thus the adding behavior might be repeated. In contrast, the control group lacked significant sequential behaviors following support. Furthermore, while the experimental group displayed an internal cycle of add → add, the control group participants demonstrated an add → provide sequence, suggesting that following adding behavior, the control group tended to introduce new ideas rather than continuing to supplement existing ones.
Social network structure
To investigate the effects of social comparison feedback on social network structure, we first conducted a comprehensive observation of postings in both the experimental and control groups. Following this, we analyzed interaction intensity, interaction balance, and interaction quality. Table 5 provides the descriptive statistics for postings in both groups. It is noteworthy that the experimental group posted more comments compared to the control group. Additionally, the number of posts in both groups declined over time.
Interaction intensity
Interaction intensity reflects the frequency of interaction among group members and indicates the level of activity and engagement within the group. To examine the impact of social comparison feedback on interaction intensity, we used two indicators: the average number of posts per person and network density.
To compare the average number of posts per person between the experimental and control groups, we conducted a one-way analysis of covariance (ANCOVA), with group as the independent variable, the number of posts in the initial topic discussion area as the covariate, and the number of posts in the final topic discussion area as the dependent variable. The parallelism test revealed no significant interaction between the independent variable and the covariate (F = 0.010, p = 0.921), and the residual normality test confirmed that the residuals followed a normal distribution with a mean of 0 (p = 0.613), supporting the use of ANCOVA. The results showed a significant difference between the experimental and control groups in the number of posts in the final topic (F = 6.04, p = 0.024). That is, although both groups experienced a reduction in the number of posts over time, social comparison feedback appeared to attenuate the rate of decline, indicating a significantly positive impact on the number of posts.
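A minimal sketch of this ANCOVA, using statsmodels on hypothetical post counts (the variable names and data are illustrative, not the study’s records):

```python
# ANCOVA sketch: final-topic posts as outcome, condition as factor,
# initial-topic posts as covariate. All data below are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "condition": ["exp"] * 5 + ["ctrl"] * 5,
    "initial_posts": [12, 9, 15, 10, 11, 13, 8, 14, 10, 9],
    "final_posts":   [10, 8, 12, 9, 10, 7, 5, 9, 6, 5],
})

# Parallelism (homogeneity of slopes) check: the interaction term
# should be non-significant before the ANCOVA is interpreted.
slopes = smf.ols("final_posts ~ condition * initial_posts", data=df).fit()
print(sm.stats.anova_lm(slopes, typ=2))

# One-way ANCOVA proper.
ancova = smf.ols("final_posts ~ condition + initial_posts", data=df).fit()
print(sm.stats.anova_lm(ancova, typ=2))
```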
Given the small sample size, with both the experimental and control groups consisting of only 11 subgroups, our primary focus was on the descriptive analysis of the density distribution. Table 6 presents the network density distribution for both groups. Overall, the experimental group exhibited a higher mean network density than the control group (M1 = 0.67 > M2 = 0.57), along with a higher maximum value and a lower minimum value. The numerical distribution was also more clustered, indicating greater network density in the experimental group. Despite these observed differences, the Mann–Whitney U test revealed no significant difference in the density distribution between the experimental and control groups (Experimental group: Mdn = 0.670, SD = 0.135; Control group: Mdn = 0.580, SD = 0.200; U = 40.500, Z = 1.320, p = 0.187).
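For reference, such a comparison can be run in a few lines with scipy; the density values below are hypothetical stand-ins for the 11 subgroups per condition:

```python
# Non-parametric comparison of network densities (hypothetical values).
from scipy.stats import mannwhitneyu

exp_density = [0.67, 0.72, 0.55, 0.80, 0.66, 0.70,
               0.61, 0.75, 0.58, 0.67, 0.69]
ctrl_density = [0.58, 0.49, 0.62, 0.55, 0.44, 0.60,
                0.57, 0.51, 0.65, 0.40, 0.59]

u, p = mannwhitneyu(exp_density, ctrl_density, alternative="two-sided")
print(f"U = {u:.3f}, p = {p:.3f}")  # two-sided test, no normality assumed
```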
Interaction balance
In this study, we analyzed the interaction balance within each group using two metrics: out-degree balance and in-degree balance. Out-degree balance was evaluated via participation homogeneity (Zheng et al., 2021), which provided insights into the engagement levels of group members; a higher value of this metric indicated a more uneven distribution of contributions among members. In-degree balance was assessed using in-degree centrality (Zheng et al., 2021), a metric that measures the popularity of individuals within the network; a higher value suggested that the network was centered on specific individuals.
Box plots were used to visualize the trends in participation homogeneity (reflecting out-degree balance) and in-degree centrality (reflecting in-degree balance) for both the experimental and control groups. To explore whether there were differences in participation homogeneity and in-degree centrality between the experimental and control groups, we conducted statistical tests. Given the small sample size of subgroups in both groups, we used the Mann–Whitney U test (Şimşek, 2023). The results showed no significant differences in participation homogeneity between the experimental and control groups (Experimental group: Mdn = 7.106, SD = 2.842; Control group: Mdn = 7.583, SD = 3.078; U = 50.000, Z = 0.690, p = 0.490). Similarly, there were no significant differences in in-degree centrality (Experimental group: Mdn = 3.167, SD = 4.084; Control group: Mdn = 3.333, SD = 3.596; U = 54.500, Z = 0.394, p = 0.693).
Generally, as the average number of posts per person increased, both participation homogeneity and in-degree centrality tended to rise because balancing the number of posts sent and received became more challenging. Figures 6 and 7 illustrate that experimental group 1, control group 1, and experimental group 7 had exceptionally high average posting rates, leading to unusually high participation homogeneity and in-degree centrality. This indirectly highlighted the difficulty in maintaining balance as the average number of posts per person increased. Despite the experimental group posting more, their participation homogeneity and in-degree centrality were slightly lower than those of the control group, although the differences were not statistically significant. This suggested that the experimental group performed slightly better in balancing out-degree and in-degree compared to the control group.
Interaction quality
To analyze the effects of social comparison feedback on interaction quality, we utilized Gunawardena’s interaction knowledge construction model (see Sect. ”Knowledge construction level coding scheme”) to assess the quality of interaction among different cohorts. The consistency between the two raters was very high, with a kappa score of 0.89 (Fleiss, 2003). As illustrated in Fig. 8, most participants were predominantly engaged in the preliminary phase of knowledge construction, focusing mainly on information sharing. This phase accounted for approximately half of the overall interactions. In contrast, participation in the deeper stages of knowledge construction, which involve the application of new knowledge, was significantly lower, barely reaching 5%. This indicated that, during online asynchronous collaborative learning, most knowledge construction activities were centered on identifying and sharing information, with less emphasis on advancing to higher levels of knowledge construction.
The Shapiro–Wilk test was conducted to examine the normality of the knowledge construction levels for both the experimental and control groups. The results showed that neither group followed a normal distribution (experimental group: W = 0.794, p < 0.001; control group: W = 0.763, p < 0.001). Consequently, the Mann–Whitney U test was used to assess whether the knowledge construction levels differed significantly between the experimental and control groups. The results revealed that the knowledge construction level of learners in the experimental group (Mdn = 2, SD = 1.135) was significantly higher than that of the control group (Mdn = 1, SD = 1.066), with U = 401,943.000, Z = 2.428, p = 0.015. This suggested that the experimental group, which received social comparison feedback, demonstrated a higher overall level of knowledge construction compared to the control group, which received self-referential feedback.
Considering the difference in the total number of posts between the experimental and control groups, we compared the percentage of posts at each knowledge construction level. As shown in Fig. 9, the experimental group exhibited lower proportions of posts in the initial and intermediate stages of knowledge construction compared to the control group but higher proportions in the third, fourth, and fifth stages. This indicated that social comparison feedback promoted a higher level of knowledge construction.
Discussion
Group-regulated process
In this study, epistemic networks were developed for both the experimental and control groups. The results indicated that the experimental group, which received social comparison feedback, placed more emphasis on task completion and adjustment compared to the control group. Cialdini and Goldstein (2004) highlighted that social comparison can effectively integrate learning feedback with goal setting. In online learning environments, social comparison feedback appeared to focus the group’s efforts more effectively, potentially enhancing group-regulation behaviors. Furthermore, while asynchronous collaboration faces challenges such as varying participation times, it offers significant advantages, including increased opportunities for asynchronous discussion and automatic archiving of activities, as noted by Schellens and Valcke (2005) and Duvall et al. (2020). This study leveraged these benefits by designing social comparison feedback to enhance asynchronous collaboration. This approach promoted positive regulatory behaviors, demonstrating the value of integrating social comparison within asynchronous collaborative settings.
First, the experimental group frequently exhibited negative emotions, such as questioning, which was associated with higher-quality outcomes. In contrast, the control group showed more positive emotions, typically manifesting as simple agreement, which did not significantly enhance collaborative knowledge construction. It is speculated that negative emotions triggered further knowledge construction, aligning with previous findings that learners experiencing negative emotions perform better than those experiencing positive emotions (Liaw et al., 2021). Simply displaying positive emotions was not sufficient to achieve the same level of positive effects without additional content construction or regulation.
Next, the experimental group engaged in more joking behavior. Qualitative analysis of discussion texts revealed that joking played a significant role in regulating the discussion atmosphere, motivating participants, and expressing friendliness. For example, statements such as “Understanding how important it is to know the direction of one’s efforts, as a teacher, instilling this feeling in students means that education is already halfway to success!” (regulating the discussion atmosphere, motivating others) and “It’s the final stretch! Let’s give it our all together!” (motivating others) exemplified this role. Previous studies have indicated that humor is a crucial form of conversational engagement (Ingram, 2023) and plays a vital role in social interaction (Chadwick & Platt, 2018). Thus, the increased joking in the experimental group likely facilitated better knowledge construction.
Group interaction behavior
In this study, we analyzed the behavior transition sequences in both the experimental and control groups, leading to the following key findings.
First, the experimental group demonstrated negotiation behavior when receiving information (“offer” behavior) from fellow group members. This behavior indicated a deeper engagement with collaborative content, characterized by critical assessment. In contrast, the control group tended to exhibit “support” behavior, accepting information without rigorous evaluation. Second, the experimental group showed a greater tendency to monitor during the conclusion process. They displayed a bidirectional relationship between concluding and monitoring (conclude ↔ monitor), indicating a cautious acceptance of support and summarization behaviors from their peers. Conversely, the control group exhibited a self-looping conclude → conclude sequence and a conclude → lead sequence, suggesting a focus on the act of concluding or a tendency to move on to new learning activities. Finally, the experimental group demonstrated monitor and add behaviors when providing support, sometimes involving repetition of support. Conversely, the control group typically did not link support with subsequent monitoring or adding behavior but was more inclined to introduce new viewpoints (lead).
In summary, the experimental group showed more negotiation and monitoring behaviors, indicating a deeper level of reflection. This reflective process is crucial if learners are to accumulate and share knowledge and skills over time and to increase their communication and collaboration capabilities (Yang, 2022; Zamora, 1985). Overall, the results of this study support the principle from Bandura’s social cognitive theory that feedback, especially when combined with social comparison, can foster positive group interaction behavior during the learning process (Bandura, 1991).
Social network relationships
In this study, we analyzed the impact of social comparison feedback on group social network relationships using three key indicators: interaction density, interaction balance, and interaction quality.
Interaction density: Interaction density measures the frequency of interactions among group members, revealing their level of activity and enthusiasm. This study used two indicators to assess interaction density: average number of posts per person and network density. The results for average posts per person showed that the experimental group consistently had higher interaction density than the control group. Although network density descriptively appeared higher in the experimental group, the difference was not statistically significant. This lack of significance might be explained by the limited number of subgroups in both the experimental and control groups, which affected the reliability of network density as an indicator. Nevertheless, considering the results of both indicators, social comparison feedback positively impacted interaction density: it mitigated the decline in posting activity among participants and was descriptively associated with higher network density. Previous research (Nordin et al., 2022) demonstrated that increased interaction typically led to stronger idea exchanges, thereby enhancing the overall online learning experience, which supports the positive influence of social comparison feedback.
Interaction balance: Interaction balance was assessed using levels of participation homogeneity and in-degree centrality to evaluate the distribution of engagement among group members. Despite the experimental group’s higher number of postings, their levels of participation homogeneity and in-degree centrality were comparable to those of the control group. This suggests that social comparison feedback effectively maintained, and in some cases even enhanced, a balanced level of participation among members.
Interaction quality: The experimental group demonstrated significantly higher levels of collaborative knowledge construction compared to the control group. Groups exhibited a lower level of knowledge construction in the second phase (discovery of dissonance and inconsistency) than in the third phase (negotiation of meaning/co-construction of knowledge). Despite initial introductions before the experiment, participants were still not very familiar with each other. Due to the entirely online nature of the interactions, this limited familiarity might have led participants to feel that asking questions could bring interpersonal pressure (Kumi-Yeboah, 2018). This is consistent with the greater incidence of positive emotions observed in both groups as part of the group-regulated analysis.
During the online asynchronous collaboration, the experimental group received visual social network diagrams and indicators, while the control group only received standard reference values based on their own data. The results indicated that the experimental group achieved higher interaction density and quality than the control group while maintaining comparable interaction balance. This pattern may partly reflect that social network metrics in small-group collaborations are more sensitive to contextual factors than those in larger networks. The comparative feedback, presented through images and text, likely motivated the experimental group to increase their interactions and improve their social network status. In contrast, the control group’s limited feedback may have restricted their understanding of interaction dynamics, as they lacked the additional motivational element provided to the experimental group.
Practical implications
Application of social comparison feedback: The results of this study affirm the role of social comparison feedback in enhancing adaptive learning outcomes. Future research should investigate the application of social comparison feedback in asynchronous collaborative learning environments to further improve collaboration.
Enhancing discussion forum features: This study introduced pinned posts to the discussion forum to improve information organization. Future enhancements could include filters based on time frame, number of replies, and number of views. Incorporating machine learning methods, as suggested by Ma et al. (2023), could assist learners in organizing discussion content more effectively. Additionally, using graphic organizers, as proposed by Jeon et al. (2022), could further enhance the efficacy of online asynchronous collaboration.
Fostering a collaborative atmosphere: Humor in posts was found to positively regulate the learning atmosphere and sharpen task focus. Instructional designers might consider incorporating activities that encourage learners to create and share humor in online learning environments (Song et al., 2021) and facilitate intermittent synchronous communication (Hu et al., 2023). These strategies could promote a more active collaborative atmosphere and enhance overall collaboration.
Exploring interaction balance indicators: The study indicated that groups with higher average posting and receiving rates tended to exhibit greater participation and centrality. Future research should focus on developing indicators for online asynchronous collaboration that are less sensitive to posting baselines and group size; the brief calculation below illustrates the group-size sensitivity in question. Additionally, increasing the volume of discussion could reduce the sensitivity of social network indicators, thereby providing more reliable measures of interaction balance.
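As a concrete illustration of why raw density is hard to compare across group sizes, the short calculation below holds per-member activity fixed and varies group size; the numbers are hypothetical.

    # With each member replying to about two peers, directed density
    # (ties / ordered pairs) shrinks as the group grows, so raw density
    # is not directly comparable across subgroups of different sizes.
    for n in (4, 5, 6, 8):
        ties = 2 * n                # ~2 outgoing reply ties per member
        possible = n * (n - 1)      # possible directed ties
        print(f"group size {n}: density = {ties / possible:.2f}")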
Conclusions, limitations, and future work
In this study, we provided social comparison feedback to learners across four dimensions (behavioral, cognitive, interactive, and emotional) and analyzed its impact on collaborative learning. Using a randomized controlled trial design, we delivered social comparison feedback to the experimental group, while the control group received only self-referential learning reports. The results suggested that social comparison feedback enhanced the regulation of learning processes, stimulated increased monitoring behaviors, and improved social network relationships.
Nevertheless, this study had certain limitations that suggest directions for future research. First, learners were randomly grouped to ensure similar distributions between the experimental and control groups. Future research could explore grouping methods that not only maintain comparability between conditions but also enhance the effectiveness of asynchronous collaboration within groups. Second, we used non-parametric tests to calculate and compare social network metrics; however, each condition comprised a limited number of subgroups, which may have affected the generalizability of our conclusions. Future studies could include a larger sample with more subgroups to strengthen the robustness of the results. Third, the course duration was relatively short; extending it in future studies would allow more time for asynchronous interaction. Lastly, as the participants were primarily in-service teachers, the conclusions are mainly applicable to this group, and caution should be exercised when generalizing these findings to other populations.
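For the second limitation, the following sketch shows the kind of non-parametric comparison described, here a two-sided Mann-Whitney U test on subgroup-level density values using SciPy; the values are hypothetical and the specific test is an assumption consistent with the paper's description.

    from scipy import stats

    # Hypothetical subgroup-level network densities per condition.
    experimental = [0.42, 0.51, 0.38, 0.47, 0.55]
    control = [0.35, 0.40, 0.33, 0.44, 0.37]

    # A non-parametric test avoids normality assumptions, but with so
    # few subgroups per condition its power is low, which is exactly
    # the limitation noted above.
    u_stat, p_value = stats.mannwhitneyu(experimental, control,
                                         alternative="two-sided")
    print(f"U = {u_stat}, p = {p_value:.3f}")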
Availability of data and materials
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
References
Appel, H., Crusius, J., & Gerlach, A. L. (2015). Social comparison, envy, and depression on Facebook: A study looking at the effects of high comparison standards on depressed individuals. Journal of Social & Clinical Psychology, 34(4), 277–289. https://doi.org/10.1521/jscp.2015.34.4.277
Artino, A. R., & Jones, K. D. (2012). Exploring the complex relations between achievement emotions and self-regulated learning behaviors in online learning. The Internet and Higher Education, 15(3), 170–175. https://doi.org/10.1016/j.iheduc.2012.01.006
Bai, S., Hew, K. F., Sailer, M., & Jia, C. (2021). From top to bottom: How positions on different types of leaderboard may affect fully online student learning performance, intrinsic motivation, and course engagement. Computers & Education, 173, 104297. https://doi.org/10.1016/j.compedu.2021.104297
Bailey, D., Almusharraf, N., & Hatcher, R. (2020). Finding satisfaction: Intrinsic motivation for synchronous and asynchronous communication in the online language learning context. Education and Information Technologies, 26(3), 2563–2583. https://doi.org/10.1007/s10639-020-10369-z
Baldwin, M., & Mussweiler, T. (2018). The culture of social comparison. Proceedings of the National Academy of Sciences, 115(39), E9067–E9074. https://doi.org/10.1073/pnas.1721555115
Bandura, A. (1991). Social cognitive theory of self-regulation. Organizational Behavior and Human Decision Processes, 50(2), 248–287. https://doi.org/10.1016/0749-5978(91)90022-L
Banihashem, S. K., Kerman, N. T., Noroozi, O., Moon, J., & Drachsler, H. (2024). Feedback sources in essay writing: peer-generated or AI-generated feedback? International Journal of Educational Technology in Higher Education. https://doi.org/10.1186/s41239-024-00455-4
Berk, R. H., Bakeman, R., & Gottman, J. M. (1997). Observing interaction: An introduction to sequential analysis. Technometrics, 34(1), 112–113. https://doi.org/10.1080/00401706.1992.10485258
Biesenbach-Lucas, S. (2004). Asynchronous web discussions in teacher training courses: Promoting collaborative learning—or not? AACE Journal, 12(2), 155–170. https://www.researchgate.net/publication/228963766
Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent Dirichlet allocation. Journal of Machine Learning Research, 3(4–5), 993–1022. https://doi.org/10.1162/jmlr.2003.3.4-5.993
Burns, A., Holford, P., & Andronicos, N. (2022). Enhancing understanding of foundation concepts in first year university STEM: Evaluation of an asynchronous online interactive lesson. Interactive Learning Environments, 30(7), 1170–1182. https://doi.org/10.1080/10494820.2020.1712426
Calvani, A., Fini, A., Molino, M., & Ranieri, M. (2010). Visualizing and monitoring effective interactions in online collaborative groups. British Journal of Educational Technology, 41(2), 213–226. https://doi.org/10.1111/j.1467-8535.2008.00911.x
Carter, R. A., Jr., Rice, M., Yang, S., & Jackson, H. A. (2020). Self-regulated learning in online learning environments: Strategies for remote learning. Information and Learning Science, 121(5–6), 321–329. https://doi.org/10.1108/ILS-04-2020-0114
Chadwick, D. D., & Platt, T. (2018). Investigating humor in social interaction in people with intellectual disabilities: A systematic review of the literature. Frontiers in Psychology. https://doi.org/10.3389/fpsyg.2018.01745
Chejara, P., Kasepalu, R., Prieto, L. P., Rodríguez-Triana, M. J., Ruiz Calleja, A., & Schneider, B. (2024). How well do collaboration quality estimation models generalize across authentic school contexts? British Journal of Educational Technology, 55(4), 1602–1624. https://doi.org/10.1111/bjet.13402
Chen, C. M., & Chen, P. C. (2023). A gamified instant perspective comparison system to facilitate online discussion effectiveness. British Journal of Educational Technology, 54(3), 790–811. https://doi.org/10.1111/bjet.13295
Cialdini, R. B., & Goldstein, N. J. (2004). Social influence: Compliance and conformity. Annual Review of Psychology, 55, 591–621. https://doi.org/10.1146/annurev.psych.55.090902.142015
Corcoran, K., Kedia, G., Illemann, R., & Innerhofer, H. (2020). Affective consequences of social comparisons by women with breast cancer: An experiment. Frontiers in Psychology, 11, 1234–1234. https://doi.org/10.3389/fpsyg.2020.01234
Cui, Y., & Schunn, C. D. (2024). Peer feedback that consistently supports learning to write and read: providing comments on meaning-level issues. Assessment and Evaluation in Higher Education. https://doi.org/10.1080/02602938.2024.2364025
Delaval, M., Michinov, N., Le Bohec, O., & Le Hénaff, B. (2017). How can students’ academic performance in statistics be improved? Testing the influence of social and temporal-self comparison feedback in a web-based training environment. Interactive Learning Environments, 25(1), 35–47. https://doi.org/10.1080/10494820.2015.1090456
Depaepe, F., & König, J. (2018). General pedagogical knowledge, self-efficacy and instructional practice: Disentangling their relationship in pre-service teacher education. Teaching and Teacher Education, 69, 177–190. https://doi.org/10.1016/j.tate.2017.10.003
Dijkstra, P., Kuyper, H., Werf, G. V. D., Buunk, A. P., & Zee, Y. G. V. D. (2008). Social comparison in the classroom: A review. Review of Educational Research, 78(4), 828–879. https://doi.org/10.3102/0034654308321210
Duvall, M., Matranga, A., & Silverman, J. (2020). Designing for and facilitating knowledge-building discourse in online courses. Information and Learning Sciences, 121(7/8), 487–501. https://doi.org/10.1108/ILS-04-2020-0081
Fam, J. Y., Bala Murugan, S., & Yap, C. Y. L. (2020). Envy in social comparison-behaviour relationship: Is social comparison always bad? Psychological Studies, 65(4), 420–428. https://doi.org/10.1007/s12646-020-00575-7
Festinger, L. (1954). A theory of social comparison processes. Human Relations, 7(2), 117–140. https://doi.org/10.1177/001872675400700202
Fleiss, J. L., Levin, B., & Paik, M. C. (2003). Statistical methods for rates and proportions (3rd ed.). Wiley.
Flener-Lovitt, C., Bailey, K., & Han, R. (2020). Using structured teams to develop social presence in asynchronous chemistry courses. Journal of Chemical Education, 97(9), 2519–2525. https://doi.org/10.1021/acs.jchemed.0c00765
Fleur, D. S., van den Bos, W., & Bredeweg, B. (2023). Social comparison in learning analytics dashboard supporting motivation and academic achievement. Computers and Education Open, 4, 100130. https://doi.org/10.1016/j.caeo.2023.100130
Fredricks, J. A., Blumenfeld, P. C., & Paris, A. H. (2004). School engagement: Potential of the concept, state of the evidence. Review of Educational Research, 74(1), 59–109. https://doi.org/10.3102/00346543074001059
Frey, B. A., & Alman, S. W. (2003). Applying adult learning theory to the online classroom. New Horizons in Adult Education & Human Resource Development, 17(1), 4–12. https://doi.org/10.1002/nha3.10155
Gao, X., Noroozi, O., Gulikers, J., Biemans, H. J., & Banihashem, S. K. (2024). A systematic review of the key components of online peer feedback practices in higher education. Educational Research Review, 42, 100588. https://doi.org/10.1016/j.edurev.2023.100588
Gegenfurtner, A., & Ebner, C. (2019). Webinars in higher education and professional training: A meta-analysis and systematic review of randomized controlled trials. Educational Research Review, 28, 100293. https://doi.org/10.1016/j.edurev.2019.100293
Guan, Y. H., Tsai, C. C., & Hwang, F. K. (2006). Content analysis of online discussion on a senior-high-school discussion forum of a virtual physics laboratory. Instructional Science, 34(4), 279–311. https://doi.org/10.1007/s11251-005-3345-1
Gunawardena, C. N., Lowe, C. A., & Anderson, T. (1997). Analysis of a global online debate and the development of an interaction analysis model for examining social construction of knowledge in computer conferencing. Journal of Educational Computing Research, 17(4), 397–431. https://doi.org/10.2190/7MQV-X9UJ-C7Q3-NRAG
Han, R., Xu, J., Ge, Y., & Qin, Y. (2020). The impact of social media use on job burnout: The role of social comparison. Frontiers in Public Health, 8, 588097. https://doi.org/10.3389/fpubh.2020.588097
Hendarwati, E., Nurlaela, L., Bachri, B. S., & Sa’ida, N. (2021). Collaborative problem based learning integrated with online learning. International Journal of Emerging Technologies in Learning, 16(13), 29–39. https://doi.org/10.3991/ijet.v16i13.24159
Hou, H. T., & Wu, S. Y. (2011). Analyzing the social knowledge construction behavioral patterns of an online synchronous collaborative discussion instructional activity using an instant messaging tool: A case study. Computers & Education, 57(2), 1459–1468. https://doi.org/10.1016/j.compedu.2011.02.012
Hu, Y. H., Yu, H. Y., Tzeng, J. W., & Zhong, K. C. (2023). Using an avatar-based digital collaboration platform to foster ethical education for university students. Computers & Education, 196, 104728. https://doi.org/10.1016/j.compedu.2023.104728
Ingram, M. (2023). A (dis)play on words: Emergent bilingual students’ use of verbal jocularity as a channel of the translanguaging corriente. Linguistics and Education, 74, 101165. https://doi.org/10.1016/j.linged.2023.101165
Jeon, M., Kwon, K., & Bae, H. (2022). Effects of different graphic organizers in asynchronous online discussions. Educational Technology Research and Development, 71(2), 689–715. https://doi.org/10.1007/s11423-022-10175-z
Joksimovic, S., Gasevic, D., Kovanovic, V., Riecke, B. E., & Hatala, M. (2015). Social presence in online discussions as a process predictor of academic performance. Journal of Computer Assisted Learning, 31(6), S18–S19. https://doi.org/10.1111/jcal.12107
Kalinowski, E., Egert, F., Gronostaj, A., & Vock, M. (2020). Professional development on fostering students’ academic language proficiency across the curriculum—a meta-analysis of its impact on teachers’ cognition and teaching practices. Teaching and Teacher Education, 88, 102971. https://doi.org/10.1016/j.tate.2019.102971
Kaufmann, R., & Vallade, J. I. (2020). Exploring connections in the online learning environment: Student perceptions of rapport, climate, and loneliness. Interactive Learning Environments, 30(10), 1794–1808. https://doi.org/10.1080/10494820.2020.1749670
Kawai, G. (2006). Collaborative peer-based language learning in unsupervised asynchronous online environments. Fourth International Conference on Creating, Connecting and Collaborating through Computing (C5’06), 35–41. https://doi.org/10.1109/C5.2006.12
Kim, Y., Jeong, S., Ji, Y., Lee, S., Kwon, K. H., & Jeon, J. W. (2015). Smartphone response system using twitter to enable effective interaction and improve engagement in large classrooms. IEEE Transactions on Education, 58(2), 98–103. https://doi.org/10.1109/TE.2014.2329651
Kollöffel, B., & de Jong, T. (2016). Can performance feedback during instruction boost knowledge acquisition? Contrasting criterion-based and social comparison feedback. Interactive Learning Environments, 24(7), 1428–1438. https://doi.org/10.1080/10494820.2015.1016535
Kong, F., Wang, M., Zhang, X., Li, X., & Sun, X. (2021). Vulnerable narcissism in social networking sites: The role of upward and downward social comparisons. Frontiers in Psychology. https://doi.org/10.3389/fpsyg.2021.711909
Kumi-Yeboah, A. (2018). Designing a cross-cultural collaborative online learning framework for online instructors. Online Learning Journal, 22(4), 181–201. https://doi.org/10.24059/olj.v22i4.1520
Li, J., Tang, Y., Cao, M., & Hu, X. (2018). The moderating effects of discipline on the relationship between asynchronous discussion and satisfaction with MOOCs. Journal of Computers in Education, 5(3), 279–296. https://doi.org/10.1007/s40692-018-0112-2
Liaw, H., Yu, Y.-R., Chou, C.-C., & Chiu, M.-H. (2021). Relationships between facial expressions, prior knowledge, and multiple representations: A case of conceptual change for kinematics instruction. Journal of Science Education and Technology, 30(2), 227–238. https://doi.org/10.1007/s10956-020-09863-3
Lin, X., & Sun, Q. (2024). Discussion activities in asynchronous online learning: Motivating adult learners’ interactions. The Journal of Continuing Higher Education, 72(1), 84–103. https://doi.org/10.1080/07377363.2022.2119803
Liu, S., Hu, T., Chai, H., Su, Z., & Peng, X. (2021). Learners’ interaction patterns in asynchronous online discussions: An integration of the social and cognitive interactions. British Journal of Educational Technology, 53(1), 23–40. https://doi.org/10.1111/bjet.13147
Lu, Y., Li, K., Sun, Z., Ma, N., & Sun, Y. (2023). Exploring the effects of role scripts and goal-orientation scripts in collaborative problem-solving learning. Education and Information Technologies, 28, 12191–12213. https://doi.org/10.1007/s10639-023-11674-z
Ma, N., Du, L., & Lu, Y. (2022a). A model of factors influencing in-service teachers’ social network prestige in online peer assessment. Australasian Journal of Educational Technology, 38(5), 90–108. https://doi.org/10.14742/ajet.7622
Ma, N., Du, L., Lu, Y., & Sun, Y.-F. (2022b). The influence of social network prestige on in-service teachers’ learning outcomes in online peer assessment. Computers and Education Open, 3, 100087. https://doi.org/10.1016/j.caeo.2022.100087
Ma, N., Zhang, Y.-L., Liu, C.-P., & Du, L. (2023). The comparison of two automated feedback approaches based on automated analysis of the online asynchronous interaction: A case of massive online teacher training. Interactive Learning Environments. https://doi.org/10.1080/10494820.2023.2191252
Merk, S., Poindl, S., Wurster, S., & Bohl, T. (2020). Fostering aspects of pre-service teachers’ data literacy: Results of a randomized controlled trial. Teaching and Teacher Education, 91, 103043. https://doi.org/10.1016/j.tate.2020.103043
Mussweiler, T. (2003). Comparison processes in social judgment: Mechanisms and consequences. Psychological Review, 110(3), 472–489. https://doi.org/10.1037/0033-295X.110.3.472
Mussweiler, T., & Epstude, K. (2009). Relatively fast! Efficiency advantages of comparative thinking. Journal of Experimental Psychology: General, 138(1), 1–21. https://doi.org/10.1037/a0014374
Neugebauer, J., Ray, D. G., & Sassenberg, K. (2016). When being worse helps: The influence of upward social comparisons and knowledge awareness on learner engagement and learning in peer-to-peer knowledge exchange. Learning and Instruction, 44, 41–52. https://doi.org/10.1016/j.learninstruc.2016.02.007
Nordin, N., Samsudin, M. A., Mansor, A. F., & Ismail, M. E. (2022). Social network analysis to examine the effectiveness of e-PBL with design thinking to foster collaboration: comparisons between high and low self-regulated learners. Journal of Technical Education and Training, 12(4), 48–59. https://doi.org/10.30880/jtet.2020.12.04.005
Noroozi, O., Alqassab, M., Taghizadeh Kerman, N., Banihashem, S. K., & Panadero, E. (2024). Does perception mean learning? Insights from an online peer feedback setting. Assessment and Evaluation in Higher Education. https://doi.org/10.1080/02602938.2024.2345669
Oh, E. G., Huang, W.-H.D., Hedayati Mehdiabadi, A., & Ju, B. (2018). Facilitating critical thinking in asynchronous online discussion: Comparison between peer- and instructor-redirection. Journal of Computing in Higher Education, 30(3), 489–509. https://doi.org/10.1007/s12528-018-9180-6
Park, J., Kim, B., & Park, S. (2021). Understanding the behavioral consequences of upward social comparison on social networking sites: The mediating role of emotions. Sustainability, 13(11), 5781. https://doi.org/10.3390/su13115781
Prestridge, S. (2016). Conceptualising self-generating online teacher professional development. Technology, Pedagogy and Education, 26(1), 85–104. https://doi.org/10.1080/1475939x.2016.1167113
Ray, D. G., Neugebauer, J., & Sassenberg, K. (2017). Learners’ habitual social comparisons can hinder effective learning partner choice. Learning and Individual Differences, 58, 83–89. https://doi.org/10.1016/j.lindif.2017.08.003
Rogat, T. K., & Adams-Wiggins, K. R. (2015). Interrelation between regulatory and socioemotional processes within collaborative groups characterized by facilitative and directive other-regulation. Computers in Human Behavior, 52, 589–600. https://doi.org/10.1016/j.chb.2015.01.026
Schellens, T., & Valcke, M. (2005). Collaborative learning in asynchronous discussion groups: What about the impact on cognitive processing? Computers in Human Behavior, 21(6), 957–975. https://doi.org/10.1016/j.chb.2004.02.025
Schenke, K., Redman, E. J. K. H., Chung, G. K. W. K., Chang, S. M., Feng, T., Parks, C. B., & Roberts, J. D. (2020). Does “Measure Up!” measure up? Evaluation of an iPad app to teach preschoolers measurement concepts. Computers & Education, 146, 103749. https://doi.org/10.1016/j.compedu.2019.103749
Shaffer, D. W., Collier, W., & Ruis, A. R. (2016). A tutorial on epistemic network analysis: Analyzing the structure of connections in cognitive, social, and interaction data. Journal of Learning Analytics, 3(3), 9–45. https://doi.org/10.18608/jla.2016.33.3
Shea, P., & Bidjerano, T. (2010). Learning presence: Towards a theory of self-efficacy, self-regulation, and the development of a communities of inquiry in online and blended learning environments. Computers & Education, 55(4), 1721–1731. https://doi.org/10.1016/j.compedu.2010.07.017
Şimşek, A. S. (2023). The power and type I error of Wilcoxon-Mann-Whitney, Welch’s t, and Student’s t tests for Likert-type data. International Journal of Assessment Tools in Education, 10(1), 114–128. https://doi.org/10.21449/ijate.1183622
Song, K., Williams, K. M., Schallert, D. L., & Pruitt, A. A. (2021). Humor in multimodal language use: Students’ response to a dialogic, social-networking online assignment. Linguistics and Education, 63, 100903. https://doi.org/10.1016/j.linged.2021.100903
Sun, Z., Lin, C.-H., Lv, K., & Song, J. (2021). Knowledge-construction behaviors in a mobile learning environment: A lag-sequential analysis of group differences. Educational Technology Research and Development, 69(2), 533–551. https://doi.org/10.1007/s11423-021-09938-x
Tlili, A., Wang, H., Gao, B., Shi, Y., Zhiying, N., Looi, C.-K., & Huang, R. (2023). Impact of cultural diversity on students’ learning behavioral patterns in open and online courses: A lag sequential analysis approach. Interactive Learning Environments, 31(6), 3951–3970. https://doi.org/10.1080/10494820.2021.1946565
Verduyn, P., Gugushvili, N., Massar, K., Täht, K., & Kross, E. (2020). Social comparison on social networking sites. Current Opinion in Psychology, 36, 32–37. https://doi.org/10.1016/j.copsyc.2020.04.002
Wambsganss, T., Janson, A., & Leimeister, J. M. (2022). Enhancing argumentative writing with automated feedback and social comparison nudging. Computers & Education, 191, 104644. https://doi.org/10.1016/j.compedu.2022.104644
Wang, C., Fang, T., & Gu, Y. (2020). Learning performance and behavioral patterns of online collaborative learning: Impact of cognitive load and affordances of different multimedia. Computers & Education, 143, 103683. https://doi.org/10.1016/j.compedu.2019.103683
Xie, K., Di Tosto, G., Lu, L., & Cho, Y. S. (2018). Detecting leadership in peer-moderated online collaborative learning through text mining and social network analysis. Internet & Higher Education, 38, 9–17. https://doi.org/10.1016/j.iheduc.2018.04.002
Yang, C. C. Y. (2023). Lag sequential analysis for identifying blended learners’ sequential patterns of e-book note-taking for self-regulated learning. Educational Technology & Society, 26(2), 63–75. https://doi.org/10.30191/ETS.202304_26(2).0005
Yang, Y. (2022). Collaborative analytics-supported reflective assessment for scaffolding pre-service teachers’ collaborative inquiry and knowledge building. International Journal of Computer-Supported Collaborative Learning, 17(2), 249–292. https://doi.org/10.1007/s11412-022-09372-y
Yang, Y., van Aalst, J., & Chan, C. K. K. (2020). Dynamics of reflective assessment and knowledge building for academically low-achieving students. American Educational Research Journal, 57(3), 1241–1289. https://doi.org/10.3102/0002831219872444
Zamora, M. D. (1985). Review of The Constitution of Society. Man, 20(3), 567–568. https://doi.org/10.2307/2802469
Zhang, S., Chen, J., Wen, Y., Chen, H., Gao, Q., & Wang, Q. (2021). Capturing regulatory patterns in online collaborative learning: A network analytic approach. International Journal of Computer-Supported Collaborative Learning, 16(1), 37–66. https://doi.org/10.1007/s11412-021-09339-5
Zheng, J., Xing, W., & Zhu, G. (2019). Examining sequential patterns of self- and socially shared regulation of STEM learning in a CSCL environment. Computers & Education, 136, 34–48. https://doi.org/10.1016/j.compedu.2019.03.005
Zheng, Y. F., Zhao, Y. N., & Wang, W. (2021). Research on social relationship analysis and visualization in online collaborative discussions. China Education Info, 2021(5), 10–17. https://doi.org/10.3969/j.issn.1673-8454.2021.03.004
Zhou, Q. G., Guo, S. C., & Zhou, R. (2015). Investigation about participatory teachers’ training based on MOOC. International Journal of Distance Education Technologies, 13(3), 44–52. https://doi.org/10.4018/ijdet.2015070103
Acknowledgements
We express our sincere gratitude to all participants who voluntarily participated in our study and offered invaluable support during the data collection phase.
Funding
This work was funded by the “Research on Time-Emotion-Cognition Analysis Model and Automatic Feedback Mechanism of Online Asynchronous Interaction” project [Grant number 62077007], supported by the National Natural Science Foundation of China, and by the “Research on Multimodal Process Data-Driven Automatic Analysis and Feedback for Deep Interdisciplinary Learning” project [Grant number YLXKPY-XSDW202401], supported by the First-Class Education Discipline Development of Beijing Normal University, China.
Author information
Contributions
Yao Lu: Conceptualization, Methodology, Formal analysis, Investigation, Data curation, Writing—original draft, Writing—editing and translation, Writing—review and editing, Visualization, Project administration; Ning Ma: Conceptualization, Methodology, Writing—review and editing, Supervision, Funding acquisition, Project administration; Wenyu Yan: Writing—editing and translation, Writing—review and editing, Supervision, Project administration.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Lu, Y., Ma, N. & Yan, WY. Social comparison feedback in online teacher training and its impact on asynchronous collaboration. Int J Educ Technol High Educ 21, 55 (2024). https://doi.org/10.1186/s41239-024-00486-x