
Who engaged in the team-based assessment? Leveraging EdTech for a self and intra-team peer-assessment solution to free-riding

Abstract

A STEM-based faculty in an Australian university leveraged online educational technology to help address student and academic concerns associated with team-based assessment. When the engagement and contribution of all team members cannot be assured, team-based assessment can become an unfair and inaccurate measure of student competency. This case study explores the design and capacity of an online self and intra-team peer-assessment of teamwork strategy to measure student engagement and enable peers to hold each other accountable during team-based assessments. Analysis of student interactions across 39 subjects that implemented the strategy in 2020 revealed that an average of 94.4% of students completed the self and intra-team peer-assessment task when designed as part of a summative team-based assessment. The analysis also revealed that an average of 10.3% of students were held accountable by their peers, receiving feedback indicating their teamwork skills and behaviours were below the required minimum standard. Furthermore, the strategy was successfully implemented in cohorts ranging from seven to over 700 students, demonstrating scalability. Thus, this online self and intra-team peer-assessment strategy provided teaching teams with evidence of student engagement in a team-based assessment while also enabling students to hold each other accountable for contributing to the team task. Lastly, as the online strategy pairs with any discipline-specific team-based assessment, it provided the faculty with a method that could be used consistently across its schools to support the management of, and engagement in, team-based assessments.

Introduction

This five-year-long case study takes place in an Australian university that has placed a priority on enhancing graduate employability (Oliver, 2015; Young et al., 2017) and has established eight graduate learning outcomes (GLOs) (Deakin University, 2021), which encompass a collection of 21st-century skills. This paper focuses on the GLO of Teamwork and its associated skills.

Teamwork continues to be a well-recognised graduate employability skill, ranked in the top three most essential skills to global employers (QS, 2019). Likewise, a meta-analysis conducted by the World Economic Forum (WEF) in 2015 identified 16 essential skills students would need to develop for the 21st-century workplace, highlighting communication and collaboration as critical competencies. The WEF also recognised the potential for educational technologies to support the development and assessment of 21st-century skills when integrated into well-designed instructional systems (WEF, 2015). This paper describes the design, implementation, and evaluation of a purpose-designed, online, self and intra-team peer-assessment task that combines with existing team-based assessments (TBA) to support student engagement and improve their experience of TBAs across a STEM-based faculty.

Voluntary student feedback comments (Deakin eVALUate, 2021) from this STEM-based faculty were captured and thematically analysed at the end of 2015. The content analysis focused on TBA and was conducted inductively using NVivo (ethical approval was granted by the Faculty Human Ethics Advisory Group (HEAG), Deakin University SEBE-2020-51). The resulting themes revealed that while many students liked learning with others, a concerning number were dissatisfied with their TBA experience. Team-based assessment in this context required students to collaborate within a project team to produce a submission for assessment by an academic. University policy stipulated that all members of a TBA receive the same team mark unless an alternative was approved and stated in the published subject assessment brief. The analysis of student comments related to TBA (N = 435) revealed that 30% (N = 129) of students described TBAs positively as opportunities to collaborate, learn from peers, build social networks, and develop teamwork skills in a real-world environment. The remaining 70% (N = 306) of comments were negative, with students citing lack of effort from team members (44%) and lack of organisation/management of the TBA (40%) as the two main reasons for their dissatisfaction.

Unequal workload distribution is often associated with the common student criticism that TBA is unfair, as not all students engage productively, yet all receive the same team mark (Goldfinch & Raeside, 1990; Lejk et al., 1996; Willcoxson, 2006). This student behaviour is often referred to by students and in the literature as ‘free-riding’ (Fellenz, 2006; Hall & Buzwell, 2012; Maiden & Perry, 2011; Strijbos, 2016). The authors assert that free-riding behaviour constitutes academic misconduct, as the free-riding student intentionally permits a teaching team to evaluate their performance against work completed by another student. However, in this faculty, there was no established mechanism to identify when students disengaged or attempted to free-ride during TBAs. Consequently, the faculty was aware that free-riding was a problem but could not establish the number and identity of students free-riding.

A student’s free-riding behaviour can result in misalignment between the intended learning outcomes of the TBA and their actual learning outcomes. In this faculty, several academics implemented ad hoc or bespoke methods (Tucker et al., 2009) to assess individual student contributions to TBA, but many neglected to assess teamwork skills explicitly. Instead, the submission of a team project became the proxy for assessing both teamwork and discipline-specific skills. Assuming all students contribute equally to the team process and submission diminishes the assessment’s construct validity (Meijer et al., 2020).

The lack of consistency in assessing skills and outcomes in higher education in Australia is a recognised problem (Martin & Mahat, 2017). The significant challenges identified were: the high degree of autonomy afforded to academics to design assessment, the lack of generalised testing methods, and the common use of a bottom-up model for specifying assessment (Martin & Mahat, 2017). The authors also acknowledged that it is a challenge for academics involved in teaching to stay abreast of current technologies and education literature (Henderson & Dancy, 2008).

This paper details the collaborative top-down project that endeavoured to bring an improved and consistent approach to the management of TBAs across a faculty. The concerns raised by students and academics, namely, measuring student engagement and reducing students’ dissatisfaction with the effort of their peers, were addressed through the design and implementation of the online self- and intra-team peer-assessment strategy presented here. The development of the strategy is explained, with enhancements justified, before exploring the results of student interactions with the strategy across 39 diverse subjects during the year 2020. This case study explores whether the strategy had the desired effect of providing teaching teams with evidence of student engagement in TBAs and whether students used the strategy to hold each other accountable to the team task. To investigate the strategy’s viability to be implemented faculty-wide, the generalisability and scalability of the strategy were examined. Finally, as an early indicator of student satisfaction with the strategy, voluntary student feedback from 2020 was collated and analysed.

Explanation of the self- and intra-team peer-assessment of teamwork strategy

This case study details the strategy developed by the academics named in this paper, who combined TBA with an online self- and intra-team peer-assessment of teamwork (SITPAT) task. Intra-team peer-assessment, also referred to as intra-group peer-assessment (Strijbos, 2016), is a specific form of peer-assessment and is defined in this context as the process of students evaluating and grading the work and performance of their peers, within their team, against pre-set criteria. Students completed intra-team peer-assessment and self-assessment against teamwork skills and behaviours criteria via the group member evaluation (GME) tool in the online platform, FeedbackFruits (FeedbackFruits GME, 2021). Henceforth, reference made to the ‘SITPAT task’ refers solely to completing self- and intra-team peer-assessment of teamwork via the GME tool. Reference to ‘the strategy’ refers to SITPAT, combined with a discipline-specific TBA.

Self- and peer-assessment overview

As student team interactions often occur external to the classroom, it is the authors’ experience that academics are rarely in the best position to comment on the application of the teamwork skills utilised by their students. The students working in the team are best positioned to observe, assess, and provide feedback to their peers regarding achievements, learning outcomes, and performances as team members. Therefore, intra-team peer-assessment became the chosen method to involve students in assessing the team process. We define peer assessment here as ‘students judging and making decisions about the work of their peers against particular criteria’ (Adachi et al., 2018). When used within a team, we refer to the process as intra-team peer assessment. The reciprocal feedback provided to their peers about the team’s performance was also shared with the teaching team, reducing the opportunity for students to free-ride.

The authors discovered that with considered design, intra-team peer-assessment of teamwork could do far more than simply reduce free-riding. It had the potential to be used as a powerful collaborative learning task that could support the development of a cluster of interconnected generic employability skills (Strijbos et al., 2015). The peer-assessment criteria drew student focus to the broad range of personal, interpersonal, and technical skills required to work with others within a team (Lepine et al., 2008; Marks et al., 2001; Varela & Mead, 2018). Students also had the opportunity to develop their feedback literacy skills (Molloy et al., 2020) as they provided, received, interacted with, evaluated, and applied feedback from their peers and the teaching team.

The affordances of peer-assessment could be enhanced by incorporating self-assessment to promote reflection and self-critique on received feedback. Thus, self- and peer-assessment can develop students’ evaluative judgment skills (Tai et al., 2018) and support self-regulated and life-long learning (Boud & Falchikov, 2006). Villarroel et al. identified self- and peer-assessment as essential tasks that support authentic assessment due to the relevance of the skillset to the world of work (Villarroel et al., 2018). When scaffolded from first to final years, self- and peer-assessment of teamwork have the potential to provide a multifaceted learning environment for students as they progress towards the attainment of their GLOs.

Self- and peer-assessment considerations

During the initial design process, conversations with colleagues highlighted several challenges associated with self- and peer-assessment implementation. Many concerns were consistent with those previously raised in the literature. For example, our colleagues questioned whether students would feel safe (Edmondson, 1999) to engage with peer assessment and, when doing so, whether their responses would be honest or moderated in favour of their friends (Panadero et al., 2013). There were also concerns regarding students’ capability as novice learners to provide valuable feedback to others (Kruger & Dunning, 1999).

Customised rubrics were developed with purpose-designed criteria and attainment level descriptors to support the efficacy and confidence of students in assessing each other’s teamwork skills and behaviours (Jonsson, 2014). The process kept students anonymous to their peers to support honest assessment and a safer space in which to hold others accountable. Academic oversight was maintained as all submissions were identifiable to the teaching team. The task was a requirement of the TBA; therefore, students could be penalised for non-completion. The output from the peer assessment and the students’ engagement with the feedback process provide evidence used to individualise a student’s mark.

A gap in available education technology

Finding an established online tool that supported our proposed design and integrated with our learning management system (LMS) was challenging. As identified by the WEF, few tools existed in the market that addressed the assessment of teamwork skills (WEF, 2015) at the commencement of this project. As an interim solution, the web-based self- and peer-assessment tool SparkPLUS (Willey & Gardner, 2009) was trialled in 2016. SparkPLUS was an online platform where students anonymously rated each other against set criteria; the platform automatically calculated an individualisation factor and enabled students to provide written feedback. However, SparkPLUS could not support LMS integration or customisable rubrics. The students’ self-ratings and team performance could also skew the individualisation factor (see the section The group skills factor (GSF), which details the algorithm used by SparkPLUS). The authors’ preference was for students to be rated on the skills and behaviours that underpin teamwork, independent of their self-rating and the rest of the team’s ratings. Regardless of these drawbacks, SparkPLUS provided an opportunity to pilot and test critical elements of the strategy.

Development and implementation of the strategy

The combination of TBA with SITPAT was piloted, trialled, and implemented as a collaborative project between faculty and school academics. Use of the strategy was voluntary and available to all subjects in the faculty using a TBA design. The authors of this paper were key early-adopting academics who trialled and informed the development of the strategy over time.

2015—Design—proof of concept in a single engineering subject.

2016—Pilot—first iteration and feedback from students and academics.

2017—Trial—investigating other tools.

2018—Trial—development of the new tool and improved pedagogical model.

2019—Transition—to the new tool and faculty-based rubric.

2020—Implementation of the new tool and strategy.

Proof of concept design

During the proof of concept stage, several students challenged academics by asking, ‘Why are you wasting our time with teamwork skills when we are here to become engineers?’. As a teaching team, we had failed to help students link the importance of developing teamwork skills to their course learning outcomes, the world of work, and their life-long learning. This challenge prompted an immediate redesign of the student induction process. In addition, the term transferable skills was promoted within this faculty to elevate the status of teamwork and its associated cluster of generic skills. The aim was to support students and academics to recognise non-discipline-specific skills as transferable to new contexts, across subjects and into the world of work.

To further enhance the strategy, a formative feedback opportunity was incorporated (Black & Wiliam, 2018). Formative completion of a SITPAT task supported early engagement with the teamwork rubric, allowed students to practise their skills without the pressure of marks, and provided interim feedback to encourage mid-task reflection on performance. The results from the formative task also provided academics with mid-task evidence of team progress and the opportunity to stage an intervention, if necessary.

Preparing early adopter academics

In 2016, 24 academics volunteered to trial the first iteration of the strategy in 25 subjects across the STEM faculty, involving 2235 student participants. In preparation, academics were provided with one-on-one professional development at their point of need to ensure:

  • Subject learning outcomes referred to the development of teamwork skills

  • The assessment task stipulated the use of the SITPAT task

  • Understanding of the underlying pedagogy of the strategy

  • Correct implementation and monitoring of the tool

  • Students were introduced to the strategy and how to use the tool

  • Correct analysis of results

  • Investigation of student feedback before individualisation

The faculty approved the scale-up of the strategy based on the positive academic feedback that it provided evidence of student engagement in TBA.

Overview of the purpose-built tool

To provide our academics with a purpose-built tool, the authors began their collaboration with the EdTech company FeedbackFruits in 2018. The collaboration involved a year-long project designing, building, testing, and improving the technology. The result was an integrated group contribution grading (GCG) feature to support SITPAT, incorporating the group skills factor (GSF) algorithm to support the individualisation of a team mark. The algorithm is explained in detail in the later section, The group skills factor (GSF). The GSF and GCG features are contained within the group member evaluation (GME) tool (FeedbackFruits GME, 2021). In addition, the FeedbackFruits platform is coupled to our LMS using learning tool interoperability (LTI) technology, enabling the GME to access student and group formation details, providing students with seamless access to the platform, and enabling finalised marks to be sent to the student grade book.

The GME tool guides students through a series of steps to complete the self and peer assessment process. Step one, ‘Read instructions’, clarifies the task. Step two, ‘Give feedback to yourself and group members’, provides students with assessment criteria in the form of a rubric and space to provide written feedback. Finally, step three, ‘Read received feedback’, provides students with their peer reviews and an opportunity to reflect and download their reviews as a PDF.

The GCG feature brings the student ratings together in a format that gives the academic an overview of the averaged ratings for each student against all set criteria. The GCG also lists the resulting GSF for each student and the student’s overall self-assessment. The final marks from the artefacts produced by the student teams are entered into the GCG as a percentage. In addition, the GCG provides ‘suggested adjustments’ to each student’s team mark based on the GSF statistic. The academic can investigate any suggested adjustment by reading the written comments provided by the students during the peer assessment process, together with any observations made by the teaching team during the TBA. Once individualised team marks are finalised in the GCG, the academic can ‘publish’ the results, which populates the marks into the student grade book within the LMS.

Pedagogical overview of the strategy

The pedagogical model of the strategy for summative use is illustrated in Fig. 1. A team collaborates to complete their discipline-specific TBA and submits the resulting artefact for marking by the academic. All team members are then required to complete the SITPAT task anonymously. The GME tool provides students with an inbuilt, purpose-designed rubric that focuses on specific teamwork behaviours and processes. The GCG feature calculates each student’s individual GSF from the results of the intra-team peer-assessment. The academic uses the GSF to determine whether all team members met the teamwork learning outcome and therefore deserve the team mark. The following terminology defined student achievement: ‘met well’, ‘met’, ‘partially met’, and ‘not met’ (Rust, 2002). Students who ‘met well/met’ the teamwork learning outcome receive the team mark. Those who ‘partially met’ the teamwork learning outcome are investigated by the academic and potentially receive a mark scaled by their GSF. Students who have ‘not met’ the teamwork learning outcome are investigated by the academic and potentially receive a mark of zero. The academic can further individualise a student’s mark for not completing the task. The GME tool provides an opportunity for student self-reflection by sharing their received self and peer feedback.

Fig. 1 The pedagogical model of the strategy, illustrating its two components: team-based assessment (TBA) and self- and intra-team peer-assessment of teamwork (SITPAT) integrating the group member evaluation (GME) tool, featuring the group skills factor (GSF) calculation that sits inside the group contribution grading (GCG) tool

The four-level, outcomes-based rubric

The authors acknowledged the diversity of incoming students’ skill sets and experiences. Backgrounds included, but were not limited to, secondary school, mature age, industry-based, and international students. A novice baseline of incoming knowledge was assumed to support all students’ efficacy and confidence levels. A strategic decision was made to move away from the traditional high distinction, distinction, credit, pass, and fail rubric model. Instead, a simplified, four-level, outcomes-based rubric that provided clear points of difference between levels was implemented.

This rubric style supports student attainment of subject learning outcomes rather than focusing on grades (Rust, 2002). It also closely aligns with the assessment students will experience in the world of work during performance reviews, thus strengthening the authenticity of the assessment task (Schultz et al., 2022). Students provide their self- and peer-assessment ratings against this style of rubric. At the end of the SITPAT task, the GCG calculates a student’s GSF based on these peer ratings. Table 1 demonstrates how the four levels (Rust, 2002) were contextualised for students and academics and aligned to a numerical value for use in the GSF algorithm. To support academics in contextualising the levels, we refer to the student as having ‘met well or met’ the academic’s expectation for each criterion, with ‘met’ equating to the minimum standard and ‘met well’ equating to above the minimum standard. For student use, we contextualised the levels as preparation for the world of work: the minimum standard for each criterion was ‘valued team member’, with the above-standard level described as an aspirational ‘industry-ready team member’. In this way, we aimed to encourage students to strive beyond the minimum standard and focus on developing their employability skills.

Table 1 Contextualisation of rubric levels for students and academics, with an example criterion

A faculty-standard teamwork rubric

Academics were provided with a faculty-standard rubric (Additional file 1: Table S1), developed to ensure consistent language across criteria and constructive alignment (Biggs, 1996) to the GLOs that underpin our course design. Academics removed the criteria not relevant to their TBA, resulting in a customised rubric of six to ten criteria. Based on the subject’s year level, academics could identify the criteria students had previously encountered and increase their complexity accordingly.

The group skills factor (GSF)

The results of the SITPAT task are used to calculate each student’s GSF. The intentional design of the rubric ensures that the behaviour a student demonstrates in their team environment is aligned to four well-defined levels that have been assigned the values 3, 2, 1 and 0. Therefore, the resulting GSF reflects whether the student has ‘met well/met’, ‘partially met’, or ‘not met’, the teamwork learning outcome.

The GSF (Fig. 2a) is calculated by taking the square root of the quotient of the average peer rating a student receives across all criteria and the maximum possible average rating. The result is a GSF between 0 and 1. Compared to a linear function, the square root, as a curved function, is less punitive for students who ‘partially met’ the teamwork learning outcome. This behaviour aligns with the authors’ intention that the factor support learning, even when used to individualise student marks.
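Since Fig. 2a is not reproduced here, the formula below is a reconstruction of the GSF from the prose description above; the symbols are ours rather than the paper’s notation:

```latex
\mathrm{GSF} \;=\; \sqrt{\frac{\bar{r}_{\mathrm{peer}}}{r_{\max}}}
```

where \(\bar{r}_{\mathrm{peer}}\) is the average peer rating the student received across all criteria and \(r_{\max}\) is the maximum possible average rating (3 on the rubric in Table 1). An average peer rating of 2, for example, yields \(\sqrt{2/3} \approx 0.816\), consistent with the worked example in Fig. 3.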

Fig. 2 A comparison of the group skills factor (GSF) to other individualisation factors in the literature: a the GSF, b the self and peer assessment (SPA) factor (Willey & Gardner, 2009), and c the peer assessment (PA) score (Goldfinch & Raeside, 1990)

The GSF algorithm addresses the limitations experienced during the trial of SparkPLUS, namely, the influence of all team members’ ratings and self-ratings on the SparkPLUS self and peer assessment (SPA) factor (Fig. 2b). The authors’ intention aligns more closely with the peer assessment (PA) score (Goldfinch & Raeside, 1990) (Fig. 2c), which is derived from the peer ratings only.

With the dependent variable ‘average team rating’ (Fig. 2b) as the denominator, the SPA factor algorithm indicates a student’s rating relative to the other team members’ ratings. Consequently, the SPA factor is not meaningful outside of a specific team. In contrast, the denominator of the GSF algorithm, ‘maximum possible average rating’ (Fig. 2a), is an independent variable and is therefore the same for all students in a subject. Importantly, when used as part of a scaffolded assessment strategy, it is the same across cohorts and courses.

The SPA factor numerator, ‘average self-rating and peer-rating’ (Fig. 2b), enabled students to skew their SPA factor, requiring manual intervention to correct. When self-assessing, high-achieving students tend to under-rate themselves while low-achieving students tend to over-rate (Boud & Falchikov, 1989). The GSF numerator, ‘average peer-rating’ (Fig. 2a), is not affected by this inaccurate self-perception, so while students benefit from completing self-assessment for reflection purposes, the GSF algorithm is not influenced by the student’s self-rating. To assess individual student teamwork skills, the authors considered the GSF a more appropriate algorithm than the SPA factor or PA score.
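For a side-by-side view, the GSF and the SPA factor can be sketched from the numerators and denominators named in the prose (Fig. 2 is not reproduced here, the exact published form of the SPA factor may differ, and the PA score’s formula is not described in the prose, so it is omitted):

```latex
\mathrm{GSF} = \sqrt{\frac{\text{average peer rating}}{\text{maximum possible average rating}}}
\qquad
\mathrm{SPA} = \sqrt{\frac{\text{average self- and peer-rating}}{\text{average team rating}}}
```

Only the GSF has both a numerator free of self-ratings and a denominator independent of the team, which is what makes it comparable across teams, cohorts, and courses.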

The individualisation process

The GME tool automates the individualisation process using the GCG and GSF functions. The maximum average peer mark a student can receive forms the denominator of the GSF calculation; in this case study, the maximum average mark in the rubric is ‘3’ (Table 1). Table 2 details the GSF boundaries used to delineate when a student’s GSF equates to that student having ‘met well/met’, ‘partially met’, or ‘not met’ the teamwork learning outcome, prompting the individualisation of the student mark, where applicable.

Table 2 The group skills factor (GSF) boundaries used to individualise a student’s mark

Figure 3 illustrates worked examples of the GSF calculation from a student’s average peer mark. The established GSF boundaries inform the academic actions in the individualisation process by providing ‘suggested adjustments’ in the GCG function of the GME tool. For example, Student B received an average peer-mark of 2 and a GSF equal to 0.816. This GSF is above the 0.81 boundary (Table 2), indicating that the student met the teamwork learning outcome and deserves the team mark.
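To make the mechanics concrete, the following is a minimal Python sketch of the individualisation logic, assuming the GSF formula above, the rubric maximum of 3, the Table 2 boundaries of 0.81 and 0.55 as stated in the text, and the 25-point non-completion penalty described in the next subsection; function and variable names are illustrative, not those of the GME tool:

```python
import math

MAX_RATING = 3.0  # 'met well' on the four-level rubric (Table 1)

def group_skills_factor(avg_peer_rating: float) -> float:
    """GSF = square root of (average peer rating / maximum possible average rating)."""
    return math.sqrt(avg_peer_rating / MAX_RATING)

def suggested_mark(team_mark: float, gsf: float, completed_sitpat: bool) -> float:
    """Suggested adjustment per the GSF boundaries (Table 2).

    Adjustments below the 0.81 boundary are suggestions only; the academic
    investigates the written peer feedback before finalising any mark.
    """
    if gsf >= 0.81:        # 'met well/met': receives the full team mark
        mark = team_mark
    elif gsf > 0.55:       # 'partially met': potentially scaled by the GSF
        mark = team_mark * gsf
    else:                  # 'not met': potentially a mark of zero
        mark = 0.0
    if not completed_sitpat:
        mark = max(mark - 25.0, 0.0)  # 25-point non-completion penalty
    return mark

# Worked example from Fig. 3: Student B, average peer mark of 2.
gsf_b = group_skills_factor(2.0)
print(round(gsf_b, 3))                     # 0.816, above the 0.81 boundary
print(suggested_mark(90.0, gsf_b, True))   # 90.0, the full team mark
print(suggested_mark(90.0, gsf_b, False))  # 65.0, the penalty example from the text
```

Running the sketch reproduces Student B’s GSF of 0.816 and, hypothetically combining that result with non-completion, the worked penalty example from the next subsection (a 90% team mark reduced to 65%).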

Fig. 3 The individualisation process, from the average peer rating per criterion to the final suggested adjustment for each group skills factor (GSF) boundary

Non-completion of the task

Completing the summative SITPAT task is a critical component of the teamwork process. Non-completion results in a misalignment with the intended teamwork learning outcome. Student engagement in the task supports the development of skills that underpin teamwork, evaluative judgment, feedback literacies, and self-reflection. Due to the reciprocal nature of the task, a one-hundred percent completion rate maximises feedback for peers while providing evidence for academics. Students who do not complete the summative SITPAT task are held accountable via a non-completion penalty of 25 percentage points, deducted from the student’s individual mark for the TBA. For example, if the team mark is 90% and a student receives a GSF greater than or equal to 0.81 but did not complete the summative SITPAT task, the 25-point penalty would be deducted, resulting in a final individual mark of 65%.

Academic professional development and student support

Academic preparation was an essential focus for this strategy to ensure consistent use of the tool and develop our academics’ digital competency. Many of our academics expressed a lack of confidence in using this new EdTech and welcomed support to develop their digital competency with this tool. The lack of academic digital competency in Higher Education is well recognised (Basilotta-Gómez-Pablos et al., 2022). As such, one-on-one support was provided to all academics using this strategy, whether they were new to it or required a refresher. In addition, we provided updated student and academic resources yearly to ensure consistent messaging and management of the strategy. The activities required of academics across a teaching period are summarised in Table 3.

Table 3 The sequence of activities required of the academic and students to complete the TBA and SITPAT combination

A purpose-designed student induction was developed at the faculty level to provide a consistent message to students regarding the justification and implementation of the strategy. Based on student feedback during the pilot and trialling of the strategy, key areas requiring clear explanation were identified. This feedback also highlighted the importance of being able to refer back to these explanations during the self- and peer-assessment process, which was best achieved using a single resource. Therefore, the primary resource to prepare students for the SITPAT task was a video that was produced at the faculty level and covered the critical areas of:

  • Teamwork as an essential employability skill

  • Student involvement in the assessment process

  • The FeedbackFruits GME tool

  • The process of giving and receiving feedback

  • The expected student behaviour based on the student code of conduct

  • Support available for dysfunctional teams

  • The individualisation processes

While this video targeted students, it was also helpful in introducing academics to the strategy. Additional support tasks in the student induction included a range of reflection tasks that helped students focus on previous team-task experiences, setting goals for the upcoming team task, considering their own and others’ social styles in a team environment, and completing a team agreement. Table 3 summarises the activities that students must complete during their TBA.

Methodology

The project’s objective was to implement a strategy that could be used consistently across the faculty to improve the management of TBA, provide academics with evidence of student engagement, and provide students with a means to hold their peers accountable. The following research questions were proposed to investigate whether the strategy met the above objectives.

  1. Is the strategy generalisable and scalable?

  2. Are academics provided evidence of student engagement in TBA, and is there a difference in student engagement with the strategy when used formatively or summatively?

  3. Do students use the strategy to hold each other accountable?

  4. Does a comparison of the thematic analysis of voluntary end-of-subject student feedback from 2015 (before the strategy) to 2020 (after partial implementation of the strategy) provide indicators of the student response to the strategy?

Data from the following sources were analysed.

  • The count and description of subjects that used the strategy from 2015 to 2020 were collated.

  • Output from the GME tool for all 39 subjects using the strategy, both formatively and summatively in 2020, was deidentified and aggregated as faculty data to ensure the anonymity of participating academics and students. Ethical approval was granted by the faculty Human Ethics Advisory Group, Deakin University SEBE-2020-56.

  • Student feedback comments from 2015 and 2020, captured through the University’s established end-of-subject survey (Deakin eVALUate, 2021), were deidentified and aggregated to ensure the anonymity of participating academics and students. Ethics approval was granted by the faculty Human Ethics Advisory Group, Deakin University SEBE-2020-51. A thematic analysis was conducted on students’ written responses to the questions: ‘What are the most helpful aspects of this subject?’ and ‘How do you think this subject might be improved?’. The analysis focused on TBA and was conducted inductively using NVivo. A text search identified all comments containing the words (and their stems): team, teamwork, group, group work, peer, and fruit (FeedbackFruits).

Across the faculty, TBA was identified in 60 subjects in 2015 and 104 subjects in 2020. Of those 104 subjects with a TBA component, 39 used the strategy described in this paper.

Results

The scalability and generalisability of the strategy

The strategy was piloted, trialled, and implemented over 5 years in 58 Engineering, Science, and Information Technology subjects. In addition, over 90 academics were provided with one-on-one professional development at their point of need to support the implementation and consistent use of the strategy. Table 4 shows the distribution of subjects using the strategy across the schools, including subjects that postponed the strategy’s use due to COVID-19 restrictions and those that were redesigned and discontinued its use. The investigation focused on student interaction across the 39 subjects that used the strategy in 2020.

Table 4 Overview of subjects implementing the strategy across schools within this faculty, from the 2016 pilot to 2020

Over the last 5 years, several of the 58 subjects have used the strategy multiple times, with individual offerings totalling 182. The strategy was used in class sizes ranging from seven to 720 students. Summing the number of student participants in each of the 182 offerings, the total student cohort exposed to the strategy from 2016 to 2020 exceeds 23,000. In programs where the strategy is implemented from first to final year, some students will have encountered the strategy more than once.

Student completion of the self and intra-team peer-assessment of teamwork tasks

Table 5 details the comparison of student completion rates between formative and summative SITPAT tasks in 2020 across 39 subject iterations during trimesters one, two and three. Analysis revealed that, on average, 94.4% of students completed the task when it was summative, 17.4 percentage points higher than when the task was formative.

Table 5 Average student completion rates during 2020 for formative and summative SITPAT tasks

Holding peers accountable during team-based assessment using the strategy

Table 6 compares the GSF ratings students received in 2020, across the 39 subject iterations, during trimesters one, two and three. Students receiving a GSF of 0.81 and above from their peers were indicated as having teamwork skills and behaviours that ‘met well or met’ the required standard. Conversely, receiving a GSF below 0.81 demonstrated that students were being held accountable by their peers for not meeting the minimum standards for teamwork skills. Across all students, 6.8% were identified as only ‘partially meeting’ the minimum standards, receiving a GSF less than 0.81 and more than 0.55. A further 3.5% of students were held accountable for ‘not meeting’ the required standard, as they received a GSF below 0.55. Therefore, 10.3% of students in total were held accountable by their peers for not meeting the minimum required standard for their teamwork skills and behaviours during the TBA.

Table 6 The distribution of student GSF results during 2020 for summative SITPAT tasks

Student experience feedback

The thematic analysis of voluntary student feedback from 2015 and 2020 focused on responses related to TBA. From the data captured in 2015 (N = 435), 70% of comments were coded negative (N = 306) and 30% coded positive (N = 129). From the data captured in 2020 (N = 522), 64% of the comments were coded negative (N = 334), and 36% were coded positive (N = 188). The count of negative comments increased from 306 in 2015 to 334 in 2020. The number of subjects using TBA increased from 60 in 2015 to 104 in 2020.

The thematic analysis results for negatively coded comments for 2015 and 2020 are shown in Table 7. The count of student comments related to ‘Dissatisfaction with the effort of team members’ dropped from 134 in 2015 to 72 in 2020. Conversely, there was an increase in the count of student comments related to ‘Dissatisfaction with the organisation of the TBA’, rising from 123 in 2015 to 217 in 2020. The 2020 data also presented three new themes not seen in 2015 related to the strategy, the tool associated with the strategy, and the COVID-19 pandemic. The count of student comments with ‘no reason specified’ decreased from 49 in 2015 to 22 in 2020.

Table 7 The comparison of negative themes coded from the voluntary student feedback in 2015 and 2020 and their associated percentage frequencies

The thematic analysis noted that 13 students who were dissatisfied with their TBA experience in 2020 recommended the inclusion of peer assessment in their subject. Representative, anonymous student comments include:

‘As there was no team peer review, underperforming team members got the same mark as the team members who did most of the work.’

‘Graded peer evaluation would have been a valuable addition to the group assignment, as other units have done previously.’

The count of positive student comments related to TBA increased from 129 in 2015 to 188 in 2020. The thematic analysis of the positive comments for 2015 and 2020 are shown in Table 8. The 2020 data presented the theme ‘Liked peer assessment’, not seen in 2015. The count of students who ‘Liked teamwork—no reason specified’ increased from 19 in 2015 to 53 in 2020.

Table 8 The comparison of positive themes coded from voluntary student feedback in 2015 and 2020 and their associated percentage frequencies

Discussion and conclusions

The authors analysed a full year of data from several sources to measure the strategy’s success in meeting its objectives: namely, providing an online strategy that could be shared across a diverse faculty to improve the management of TBA while providing academics with evidence of student engagement and providing students with a means to hold their peers accountable.

Initially, the project team was concerned that the strategy could appear overly complex to academics and students. To build user confidence, it was important that the technology underpinning the strategy was easy to access and that tasks could be completed with minimal support after the initial introduction. The FeedbackFruits GME tool achieved this by minimising the administrative burden on academics, as the underlying collation, calculation, and distribution processes were automated in the GCG feature. The remaining academic responsibilities included: choice of rubric criteria, group allocation, due dates, student queries, and the overview of peer assessment before finalisation of marks within the tool. Student use of the GME tool was supported by the tool’s clear, step-by-step internal guide to completing required tasks. Further, integrating the platform with the LMS provided academics and students with a seamless transition to the platform.

One indicator of the strategy’s success is that uptake has grown from one engineering subject in 2015 to 39 diverse STEM subjects in 2020. The strategy was not mandated for use. Instead, the project team supported academics across the faculty who desired to improve TBA in their subject and thus volunteered to participate. As the SITPAT task functions as a standalone task, it could be combined with any TBA; therefore, its use across the faculty was not limited. The repeated and increasing use of the strategy in numerous STEM subjects, regardless of discipline and class size, demonstrated that it was generalisable, scalable, and shareable as a consistent strategy across the faculty.

During the strategy development, one-to-one support services were provided to academics. To move the strategy from trial to implementation, videos, ‘how-to’ guides, and infographics were created and centrally located for academic and support team use. Technical support moved from the project team to a central support team. FeedbackFruits also provided online technical support to students and staff 22 hours a day, 7 days a week. A community of practice in MS Teams enabled current and prospective users of the strategy to seek advice, ask questions, and learn from each other. Together, these measures aim to support academic independence when using the strategy, thereby enabling its potential expansion to other faculties in this university.

The strategy provided academics with two forms of evidence to provide oversight of student engagement in TBA: feedback from the intra-team peer-assessment (GSF and written comments) and student completion of the SITPAT task. The very high completion rate of summative SITPAT tasks (94.4%, compared to 77% for formative tasks) highlighted the success of the substantial 25-point non-completion penalty in driving student behaviour. The GSF data from the summative SITPAT tasks revealed that students identified 10.3% of their peers as not meeting the required minimum standard of teamwork skills during the TBA, including 3.5% of students identified by a GSF rating below 0.55 as potential free-riders. These results contrast with previous research, which suggested that students were reluctant to hold each other accountable when their actions could penalise a peer’s grade (Sridharan et al., 2018). While the authors do not claim that all students held their peers accountable, they propose that the student induction positively supported and empowered students to use the strategy as intended.

A thematic analysis of anonymous, voluntary student feedback from 2015 and 2020 was conducted to better understand the overall student experience of TBA across the faculty at these two points in time. The authors acknowledge the inherent limitations of this data. That is, anonymous feedback lacks a demographic profile, and when coupled with a low voluntary response rate, the data cannot be assured to be representative of the whole student cohort. In addition, the underlying variables identified between the 2015 and 2020 data include different students, different teaching teams, subject redesigns, the number of subjects using TBA, the number of subjects using TBA in combination with the SITPAT task, and the impact of the COVID-19 pandemic on the design and delivery of subjects. In 2020, 104 subjects used TBA (an increase of 44 from 2015), of which 39 used the strategy presented here, constituting a partial implementation of the strategy across the faculty. The authors used the thematically analysed data to identify any emerging themes in the 2020 data related to the student response to the strategy and explored the differences in percentage frequencies between the two datasets.

The positive themes related to TBA identified from the comments of students who participated in the 2015 survey were also identified in the 2020 survey, with the addition of one theme, ‘Liked peer assessment’. Therefore, students from both datasets ‘Liked collaborating, learning and socialising with peers’, ‘Liked developing teamwork skills’, ‘Liked the real-world experience’ and were ‘Satisfied with the organisation’ of the TBA. In addition, 11 student comments were coded as ‘Liked peer assessment’. The emergence of this theme was attributed to the addition of the strategy across the faculty, as to the authors’ knowledge, no other forms of peer assessment were combined with TBA. This emerging positive theme was further supported by the 13 students whose comments responded negatively to TBA but recommended that their subjects implement peer assessment. The authors consider these two results as early indicators that some students recognise the strategy as valuable to TBAs.

Further analysis of the positive 2020 data was complicated by the 53 students who ‘Liked teamwork—no specified reason’. This theme formed 28.2% of the positive student responses, making it the second most coded theme. Thus, the various reasons those students liked teamwork were concealed, making a percentage frequency analysis of the positive themes unusable.

While the negative themes related to TBA identified in the 2015 student comments were again identified in 2020, three additional dissatisfaction themes emerged, two of which related directly to the strategy presented in this paper. Comments from seven students were coded against the theme ‘Dissatisfaction with the peer assessment method’, making up 2% of the negative themes in 2020. As the strategy was used to hold students accountable, the authors suggest that students who were held accountable could have been dissatisfied with the method and contributed to this negative feedback. In contrast, some students may have felt uncomfortable using the strategy to hold their peers accountable. This potential student concern was identified early in the project and prompted the creation of the student induction to explain and justify the strategy’s use. This faculty is committed to continually improving student resources that build student confidence in feedback literacies to address these concerns.

The second additional theme, ‘Dissatisfaction with the peer feedback tool’, was coded from three students, making up 1% of the negative themes in 2020. The authors are confident that student dissatisfaction with the tool should reduce in the future as improving the student user experience of the tool is a FeedbackFruits priority. The third new theme, ‘Dissatisfied with TBA online due to COVID-19’, was coded from 13 students, making up 3.8% of the negative themes in 2020. On-campus teaching at this university paused in March 2020, requiring all domestic and international students to transition their study to fully online (Johnston, 2020). The appearance of this new theme related to TBA was consistent with the experience of Australian Higher Education students in general, that is, the COVID-19 pandemic had significantly impacted their studies (Dodd et al., 2021).

In addition, there was a concerning increase in the frequency of negative student comments related to the theme ‘Dissatisfied with the TBA organisation/team size/allocation/topic/weight’, coded from 217 students and making up 65% of the negative student comments in 2020. The authors acknowledge that the management of TBAs during the early stages of the COVID-19 pandemic was challenging for academics. The 104 subjects using TBAs had to be rapidly redesigned for online delivery and support, which could have contributed to this rise in student dissatisfaction with TBAs. One way to improve the organisation and management of these TBAs would be to expand the use of SITPAT tasks into the 65 subjects not currently using it. The strategy could then provide academics across the faculty with a consistent approach, and centralised support, to manage TBAs, and would provide additional opportunities for students to develop their skills in TBAs.

In contrast, the count of student comments related to ‘Dissatisfied with the effort of team members’ dropped from 134 in 2015 to 72 in 2020. The project team was particularly interested in this theme, as dissatisfaction with the effort of team members was one of the factors that inspired this project in 2015. While the authors are reluctant to draw conclusions from this data, due to the numerous variables stated above, this apparent reduction in student dissatisfaction with their team members occurred in the context of a 94.4% student completion rate of the strategy, with 10.3% of students being held accountable by their peers, in 39 of the 104 subjects using TBA in 2020. In addition, while not all subjects combined SITPAT with their TBAs in 2020, over 90 academics have undertaken professional development to use the strategy since its inception. The efforts of these academics have, in turn, supported over 23,000 students to engage with the strategy. Thus, the project has worked to build the capacity of thousands of students to work in teams, who then had the opportunity to transfer their learning experiences to subsequent TBAs in other subjects.

Consequently, the authors are optimistic that this reduction in ‘Dissatisfaction with the effort of their team members’ is a potential indicator that the strategy is having a positive impact on the student experience of TBAs by improving the efforts of team members. Measuring changes in the ‘Dissatisfaction with the effort of their team members’ constitutes a focus for future research. A longitudinal study has been initiated to measure whether the strategy builds the capacity of students to work in teams when scaffolded throughout a course.

Academic induction was crucial to ensure the correct and consistent application of the strategy. As the academic was rarely present during team interactions, the reliability and validity of the GSF ratings depended on inter-rater reliability (Fellenz, 2006) and student honesty. It was the responsibility of the academic to remain vigilant and analyse the feedback for those students who received a GSF below 0.81 and therefore ‘partially met’ or ‘not met’ the minimum standard of the teamwork learning outcome. Investment in personalised professional development for academics implementing the strategy resulted in a collegial environment that upskilled academics in current assessment strategies and technologies. In addition, it provided valuable feedback to inform the evolution of the strategy.

The strategy focused on identifying and reducing student academic misconduct in the form of free-riding behaviour. Dissatisfaction with the teaching environment, opportunities to cheat, and lack of support for students with a language other than English (LOTE) have been identified (Bretag et al., 2019) as the top three contextual factors contributing to student academic misconduct. These factors aligned with the 2015 student feedback shared in the introduction. They were mitigated by providing students with a consistent strategy, valuing student feedback, and enabling students to contribute to the assessment process. A teamwork rubric that used straightforward language supported the diverse student cohort to peer-assess. Introducing the SITPAT task reduced opportunities to cheat: as the mechanism for students to hold their peers accountable, it provided evidence of free-riding to the academic. However, the factors contributing to academic misconduct are complex (Bretag et al., 2019). For example, the strategy cannot ensure that students do not participate in contract cheating. Authentic assessments have been shown to reduce academic misconduct and improve employability skills (Villarroel et al., 2018). An evaluation of the SITPAT task against an authentic assessment framework (Schultz et al., 2022) suggested that all criteria were met except those associated with industry engagement. Therefore, the authors suggest that the discipline-specific TBA address the industry-based criteria to improve the strategy’s authenticity.

This online, EdTech-based, self and intra-team peer-assessment of teamwork (SITPAT) task, combined with team-based assessment (TBA), was confirmed to be a scalable and generalisable strategy providing a consistent method to manage TBA across diverse STEM-based subjects. In addition, the strategy measured and encouraged student engagement in TBAs and can identify students who may be attempting to free-ride.

Availability of data and materials

All aggregated datasets used and/or analysed during this study are included in this published article and its additional information file. Due to low-risk ethics obligations that maintain all participants’ anonymity, non-aggregated and therefore identifiable data cannot be made publicly available.

Abbreviations

EdTech: Educational Technology
STEM: Science, Technology, Engineering, Mathematics
TBA: Team-based assessment
GLO: Graduate learning outcome
SITPAT: Self and intra-team peer-assessment task
GME: Group member evaluation
LMS: Learning management system
LTI: Learning tool interoperability
GCG: Group contribution grading
GSF: Group skills factor
SPA: Self and peer assessment
PA: Peer assessment
COVID-19: Coronavirus disease of 2019
LOTE: Language other than English


Acknowledgements

We acknowledge the support and guidance of Professor Malcolm Campbell, Dr Kimberley James and the scores of academics who have helped shape this strategy at Deakin University. We thank the developers and support team at FeedbackFruits and acknowledge the Deakin Learning Futures team who provided the opportunity and support to work with FeedbackFruits.

Funding

Not applicable.

Author information

Authors and Affiliations

Authors

Contributions

TKG led the project, making substantial contributions to the study design, data collection, analysis, drafting and revision of the manuscript. CLF made substantial contributions to the data analysis. XAC, PKC, AB, KA and APAC were instrumental to the design and implementation of the project and contributed to the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Tiffany K. Gunning.

Ethics declarations

Ethics approval and consent to participate

Ethics approval was granted by the Faculty Human Ethics Advisory Group, Deakin University SEBE-2020-51-GUNNING and SEBE-2020-56-GUNNING.

Competing interests

FeedbackFruits was not involved in this study. The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

The faculty-standard teamwork rubric.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Gunning, T.K., Conlan, X.A., Collins, P.K. et al. Who engaged in the team-based assessment? Leveraging EdTech for a self and intra-team peer-assessment solution to free-riding. Int J Educ Technol High Educ 19, 38 (2022). https://doi.org/10.1186/s41239-022-00340-y
