A comparative analysis of the skilled use of automated feedback tools through the lens of teacher feedback literacy

Abstract

Effective learning depends on effective feedback, which in turn requires a set of skills, dispositions and practices on the part of both students and teachers that have been termed feedback literacy. A previously published teacher feedback literacy competency framework has identified what is needed by teachers to implement feedback well. While this framework refers in broad terms to the potential uses of educational technologies, it does not examine in detail the new possibilities of automated feedback (AF) tools, especially those that are ‘open’, offering varying degrees of transparency and control to teachers. Using analytics and artificial intelligence, open AF tools permit automated processing and feedback with a speed, precision and scale that exceeds that of humans. This raises important questions about how human and machine feedback can be combined optimally and what is now required of teachers to use such tools skillfully. The paper addresses two research questions: Which teacher feedback competencies are necessary for the skilled use of open AF tools? and What does the skilled use of open AF tools add to our conceptions of teacher feedback competencies? We conduct an analysis of published evidence concerning teachers’ use of open AF tools through the lens of teacher feedback literacy, which produces summary matrices revealing relative strengths and weaknesses in the literature, and the relevance of the feedback literacy framework. We conclude, firstly, that when used effectively, open AF tools exercise a range of teacher feedback competencies. The paper thus offers a detailed account of the nature of teachers’ feedback literacy practices within this context. Secondly, this analysis reveals gaps in the literature, signalling opportunities for future work. Thirdly, we propose several examples of automated feedback literacy, that is, distinctive teacher competencies linked to the skilled use of open AF tools.

Introduction

In higher education, feedback is considered key to students’ learning, but it does not exist in a vacuum. Rather, it is mediated through teachers and technologies. In recent years there have been significant changes to both how feedback is understood by teachers and how it can be supported by technologies. These two ‘tectonic forces’—redefining the contours of the conceptual and digital landscapes for feedback—set the context for this paper’s investigation. As new technologies develop, university teachers must extend how they understand and practice feedback, what is often called teacher feedback literacy (Carless & Winstone, 2020), in the emerging digital environment. Likewise, feedback technologies, particularly those that rely on some kind of automated element, need to provide affordances that match new conceptions of feedback, which emphasise what the student does rather than what the teacher says.

To date, there has been no systematic account of how emerging notions of teacher feedback literacy relate to the use (skillful or otherwise) of learning technologies designed to mediate feedback, which are increasingly powered by analytics and now artificial intelligence. As elaborated in the following sections, this dual shift in how we think about “feedback” and its associated literacies and competencies, and the digital infrastructures now emerging for creating “feedback rich environments” (Henderson et al., 2019b), raises interesting questions at their intersection, regarding how each might inform the other.

This paper explores how automated feedback (AF) tools currently allow for the new feedback competencies that teachers require. It is divided into two parts, beginning with an outline of the new conceptual and digital landscape of feedback, most relevant to teacher practices (Sect. “Emerging approaches to feedback”). This includes new conceptions of feedback, the teacher feedback literacy competency framework proposed by Boud and Dawson (2021), and AF tools, focusing on a particular type of open approach which requires teacher mediation in order to (i) decide the criteria for differentiating the feedback they wish to send to different student groups, and (ii) design those feedback messages. This section closes with two research questions examining the interaction between teacher feedback competency and open AF tools.

The second part of the paper explains the review and analysis methodology followed (Sect. “Methodology”) in order to address these questions, by analysing the evidence emerging around the skillful use of open AF tools. The paper’s contributions are, firstly, to provide a systematic mapping of the intersections between different teacher feedback literacy competencies, and published evidence of the skilled use of open automated feedback tools. This demonstrates to what extent the literature shows that open AF tools engage and cultivate teachers’ capacities to design and deliver feedback, across the competency framework’s three levels (Sects “Macro level: programme design and development”–“Limitations in the open AF literature”). Secondly, the analysis draws attention to teacher feedback competencies that the AF literature connects to relatively poorly, highlighting opportunities for future work. Thirdly, we extend the current framework (Sect. “Automated feedback competencies for teachers”) by characterising three forms of teacher feedback literacy specifically tied to the affordances of open AF tools. We conclude by considering the implications of this analysis for researchers, designers and evaluators (Sect. “Conclusion”).

Emerging approaches to feedback

Changing conceptions of feedback

Feedback has been recognised as a vital element for learning at all levels. Hattie and others have argued (Wisniewski et al., 2020) that it is the greatest contributor identified in meta-analyses of the effects of different kinds of intervention on learning. In addition to the potency it can provide, feedback is also one of the few mechanisms in courses that tailors what is a common program to the needs of each individual student. It provides a mechanism for adapting courses to students. Conventionally, feedback has been regarded as an input to students, such as comments on a student’s assignment, but there has been a shift in recent years to recognise that feedback is necessarily a process in which students have an essential role (Boud & Molloy, 2013). Information provided to students cannot influence their learning without the active agency of students themselves. This agency may be invoked at each stage of a feedback process: from eliciting information, to making sense of it, to showing understanding in a subsequent task (Malecka et al., 2020). Feedback research has thus moved from consideration of inputs, to a focus on process, and to an emphasis on impacts (Henderson et al., 2019a). This view aligns with the long tradition in cybernetics feedback research dating back to the 1950s (Wiener, 1989), that feedback without effect cannot be regarded as feedback at all.

Implementing effective feedback designs in higher education courses is a challenging process. Large cohorts of students, high student-staff ratios, short teaching terms and the need to include extensive subject-matter constrain good design options. Worthwhile designs typically propose multiple feedback loops, opportunities for students to receive useful information in good time before completing their next task, detailed specific information on their own performance and responsiveness to students’ declared needs (Henderson et al., 2019a, 2019b). Such features have been difficult to adequately implement within existing resource constraints.

Another limitation on the implementation of good feedback processes has been a lack of recognition of what constitutes quality feedback, and how it can operate well for both students and educators. Too often, feedback has followed conventional practices in any given discipline and has not taken account of feedback research or even good models within the institution or discipline. Realisation of this limitation has given rise to interest in the idea of feedback literacy which Carless and Boud (2018) defined as “the understandings, capacities and dispositions needed to make sense of information and use it to enhance work or learning strategies” (p. 1316). Without sufficient feedback literacy, they suggest, students are unable to utilise feedback to improve their learning, and educators are unable to introduce feedback processes which have a positive impact on students. This focus on feedback literacy led Molloy et al. (2020) to develop a learner-centred framework for feedback literacy based on the analysis of data from students’ experience of feedback, and Boud and Dawson (2021) to propose an empirically-derived competency framework to identify the attributes of feedback literate teachers, both in the design of processes and in their enactment.

If teachers are to implement feedback processes well, they need to have a good understanding of feedback, how it operates and what contributions they can make to ensuring it is effective. Asking “what do feedback literate teachers do?”, Boud and Dawson’s (2021) framework “represents the competencies required of university teachers able to design and enact effective feedback processes” (p. 1). This differentiates competencies associated with different levels of responsibility in a typical university program, operating at the macro, meso and micro levels, as summarised in Table 1.

Table 1 Macro, meso and micro levels of the teacher feedback competency framework (Boud & Dawson, 2021)

In parallel with these changing conceptions of what “feedback” could and should mean, the last decade has also witnessed an explosion in data, analytics and now artificial intelligence (AI), coupled with growth in the use of online learning as the internet has been embedded into daily practices from primary to tertiary education, further accelerated by the COVID-19 pandemic. Most recently, we see the instrumentation of physical learning spaces with increasingly affordable sensors, making face-to-face teaching and learning a new source of data for multimodal analytics and AI (Martinez-Maldonado et al., 2020). We have seen the consolidation of research communities focused on inventing, evaluating and critiquing these infrastructures, including Learning Analytics (Ferguson, 2012; Siemens, 2013), AI in Education (Feng & Law, 2021), Learning Engineering (Dede et al., 2018), and Educational Data Mining (Baker & Yacef, 2009). Significantly, educational analytics and AI have emerged from the lab into mainstream products in the last decade, with most products offering analytics, and a growing number AI-enabled feedback, catalysed most recently by developments in large language models (Kasneci et al., 2023).

However, the online learning literature reflects predominant conceptualisations of feedback as something performed on students rather than a process in which their agency is exercised (Jensen et al., 2021). The challenges that rapidly developing automated feedback technologies may pose to teacher feedback literacy have yet to be explored. While Boud and Dawson (2021) briefly discuss the possibilities that educational technologies present for active feedback processes, this was not their focus. They identified that feedback literate teachers will make the best use of technology support as a resource; that some more advanced teachers understand how to use analytics about student activity in the learning management system; and that teachers may use “technology to enable more efficient/scalable feedback processes”, giving the example of tools that assist in the logistics of configuring peer feedback groups.

As we extend the notion of teacher feedback literacy and recognise that there are new tools at their disposal, critical questions emerge: How should our conception of feedback literacy shape the design and deployment of automated feedback tools? Do such tools merely automate manual tasks, or in fact require or even cultivate feedback literacy competencies—and if so, how? Does the skilled use of digital tools as part of providing feedback-rich environments introduce new kinds of automated feedback competencies and practices? In order to answer these questions, we now turn to consideration of automated feedback tools, with a particular focus on the relationship between the AF tool and the teacher.

Distinguishing open and closed automated feedback tools

There is a long history of educational technologies providing myriad forms of automated feedback. It is beyond the scope of this paper to survey these, but we characterise a few well-known approaches. Firstly, the automation of teaching, assessment and feedback is possible where the student’s mastery of the curriculum can be modelled in detail. Intelligent tutoring systems (ITS) have made particular advances in introductory science, technology, engineering and mathematics (STEM). There is evidence that compared to conventional learning experiences, students in school and university can learn more quickly and in some cases to a higher standard, using an ITS (e.g., ASSISTments, 2023; Koedinger & Aleven, 2016; Lovett et al., 2008; Murphy et al., 2020; Student Achievement Partners, 2021). However, there is a trade-off between system complexity and end-user modifiability. While these AI tutors adapt to each learner in the pacing and material presented, providing fine-grained feedback information to both learners and teachers on the mastery level of constituent knowledge and skills, the requisite complexity of the underlying learner model, adaptive algorithms and feedback engine requires significant technical expertise to design and tune, which does not permit easy modification by teachers.

Simpler approaches than ITS provide automated feedback without any adaptive capability, such as programming environments which perform automated code-checking and send alerts regarding coding and execution bugs (e.g., Heckman & King, 2018). A computer science teacher can exercise a degree of control over the configuration of such developer tools, such as which bugs are reported, but does not control how bugs are identified, or the feedback received by the students.

Finally, learner-facing dashboards (e.g., Bodily & Verbert, 2017; Schwendimann et al., 2017) now appear in many educational technology tools, displaying to the learner summaries of their activity. However, these do not provide reports to students in the usual sense of written feedback, instead leaving the student to interpret the implications of the graphs, with some evidence that this can be problematic (de Barba & Corrin, 2014; Teasley, 2017). In this regard, recent advances in automated “data storytelling” around visualisations of student activity, designed to draw the student’s attention to specific issues prioritised and explained by the teacher, signal a promising development (Echeverria et al., 2018; Fernandez-Nieto et al., 2021).

All of these approaches have demonstrable benefits when implemented well, and teachers can and should exercise their agency with these types of tools, in understanding how to orchestrate them as part of the student experience (e.g., du Boulay, 2019; Prieto et al., 2019). Generally speaking, however, these types of automated feedback tools are what we label “closed”, in the sense that the teachers cannot fundamentally change the key parameters around the feedback information or feedback processes. Thus, while they contribute to teacher feedback literacy (Boud & Dawson, 2021), they can be viewed as other forms of information provision.

In contrast to closed AF tools, we define “open” AF tools as enabling the educator to specify some or all of the following key parameters in the tool’s behaviour:

  1. the student activity data that the system analyses;

  2. the algorithms that analyse that data;

  3. the feedback information the teacher wishes the software to compile for students;

  4. the modalities via which feedback information is communicated by teachers;

  5. the student-driven feedback processes that are afforded.

Pedagogically speaking, open tools are an interesting class of AF tool, giving significant agency to the teacher to specify the student attributes they deem relevant for differentiating feedback, the contexts and timing for giving such feedback, the tone and content of the information, and the affective warmth and richness (e.g., from text messages, to email, to audio-visual). As with conventionally delivered feedback information, open AF tools place no constraints on what the teacher may recommend students do next (e.g., repeat assessment activities, read further, interact with peers, reflect on study habits). Thus, the affordances in and of themselves do not necessarily result in any increased student agency or activity—rather, this is a consequence of the design choices the teacher makes. From a technical perspective, in Sect. “Overview of the four open AF tools” we elaborate on what makes this combination of attributes distinctive from current learning platforms, some of which partially replicate individual attributes.
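
To make these five parameters concrete, the sketch below shows how a teacher-authored configuration for a hypothetical open AF tool might look. The file name, column names, rule thresholds and message wording are all illustrative assumptions, not the interface of any of the tools reviewed in this paper.

```python
# Hypothetical open AF sketch (not SRES, OnTask, ECoach or AcaWriter).
import csv

def load_activity_data(path):
    """(1) Teacher chooses which student activity data the tool analyses."""
    with open(path, newline="") as f:
        # e.g. columns: first_name, email, videos_watched, quiz_score
        return list(csv.DictReader(f))

def low_engagement(row):
    """(2) A teacher-defined rule that differentiates a sub-group of students."""
    return int(row["videos_watched"]) < 2 or float(row["quiz_score"]) < 50

def compose_message(row):
    """(3) Teacher-crafted feedback information, personalised per student,
    ending with (5) a prompt for the student to act."""
    return (
        f"Hi {row['first_name']}, you have watched {row['videos_watched']} of this "
        "week's videos. Revisiting them before Friday's tutorial will help with "
        "the practice quiz: https://example.edu/week3"
    )

def send(recipient, message, modality="email"):
    """(4) Teacher selects the communication modality; printed here instead of sent."""
    print(f"[{modality}] to {recipient}:\n{message}\n")

if __name__ == "__main__":
    for student in load_activity_data("students.csv"):
        if low_engagement(student):
            send(student["email"], compose_message(student))
```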

Our focus in this paper is on open AF tools as defined by the above five attributes, since teachers must make decisions in order to exercise this agency which will reflect their feedback literacy competencies. Furthermore, open AF tools may, in principle, also provide important affordances for teacher feedback literacy to be developed, when they prompt consideration of important pedagogical issues. Specific examples of such tools are provided in the next section.

To summarise, open AF tools require the teacher/teaching team to make a range of critical decisions that determine the tool’s behaviour, placing them at the centre of decision-making about how a tool will be deployed, and for what purposes, thus connecting the affordances of open AF tools to teacher feedback competency. Having introduced how “feedback” is being redefined—conceptually and technically—we turn now to the interplay between these shifts, which motivates the two research questions for this paper:

  1. Which teacher feedback competencies are necessary for the skilled use of open AF tools? (Sections “Macro level: programme design and development”–“Limitations in the open AF literature”)

  2. What does the skilled use of open AF tools add to our conceptions of teacher feedback competencies? (Section “Automated feedback competencies for teachers”)

Methodology

To address these research questions, we reviewed empirical evidence around the use of the open AF tools introduced above, through the lens of Boud and Dawson’s (2021) teacher feedback literacy competency framework. Our goal was to clarify if, and how, the concept of teacher feedback literacy translates into the skilful use of such tools, not only in principle, but illustrated by empirical evidence. Specifically, we sought to evaluate whether teachers’ practices in the empirical studies could be explained in terms of the framework’s categories (RQ1), and whether new practices could be identified that could not be described within the framework (RQ2).

We undertook this task as a Critical Review (Grant & Booth, 2009), which aims to “identify conceptual contribution to embody existing or derive new theory” (p. 94). This type of review process seeks to target key literatures for critical synthesis and was therefore more suited to our research questions. As detailed by Grant and Booth (2009), this contrasts with a Systematic Literature Review (SLR) that seeks to gather breadth of evidence to derive a meta-analysis to answer focused hypotheses (on the limitations of SLR methodology, see Boell & Cecez-Kecmanovic, 2015). We selected four exemplars on the basis that each was sufficiently well described to represent the type of open AF tool in which we were interested, and each had substantial associated empirical evidence. Specifically, we identified papers using a combination of our own knowledge of the literature, supplemented by communication with the research teams behind the tools, to select publications against the following criteria: (i) peer reviewed conference and journal articles; (ii) documenting how an open AF tool was integrated into curriculum design in authentic higher education contexts; (iii) providing information about the educator’s perspective in employing the tool in their courses; and (iv) empirically evaluated using qualitative and/or quantitative methods. These stringent criteria yielded a total of 20 articles, which span diverse educational contexts (Table 2).

Table 2 Summary of literature reviewed

Overview of the four open AF tools

We briefly introduce the capabilities of the four open AF tools evaluated in the selected literature (Appendix A provides illustrative screenshots to convey the user interfaces). Two examples that require the teacher to make important feedback design decisions, in order to create customised feedback experiences, are the Student Relationship Engagement System (SRES) (Liu et al., 2017) and OnTask (Pardo et al., 2018). The input data typically comes from learning management and student administration systems, but these open tools make it possible for the teacher to import data from any source, define the criteria they wish to use to differentiate significant sub-groups of students, and design personalised feedback messages, including links to differentiated student activities. SRES permits greater sophistication in feedback processes, including dialogic and peer feedback, and student self-reflection on feedback (Arthars & Liu, 2020). Instructors can grade students during in-class assessments and provide immediate feedback, and students can interact with their feedback via a web portal.

A platform offering additional AF capabilities is ECoach (Huberth et al., 2015). As detailed by Matz et al. (2021), instructors of large gateway courses choose from a suite of tools: personalized course guidance sent as messages (cf. SRES/OnTask), exam playbooks, exam reflections, a grade calculator and a to-do list. Students visit a web portal with feedback information and activities delivered via these tools, which adapt to the students’ survey responses and academic progress. Instructors have important input into the feedback messaging, advised by behavioural scientists in the university’s academic innovation unit, to create feedback messages personalised to ongoing performance and progress, drawing also on each student’s profile as measured using psychological surveys (Matz et al., 2021). The to-do list provides reflection and action prompts to foster students’ self-organization and motivation for study. Going beyond automated feedback solely focused on students’ cognitive learning, Matz et al. (2021) frame ECoach as a platform to help teachers “communicate care for students” (p. 217). In the context of teacher feedback competencies, this extended conception of technology’s affordances provides richer scope for analysis.

Finally, AcaWriter provides automated formative feedback information on students’ writing (Knight et al., 2020). The feedback focuses on ‘rhetorical moves’ that are hallmarks of academic writing (Swales, 2004; Hyland, 2005), that is, “phrases and sentences that indicate […] the writer’s attitude or position in relation to […] the text” (Knight et al., 2020, p.143). AcaWriter’s feedback can be contextualised in several ways to instructors’ own learning designs (Shibani et al., 2019, 2020). Specifically, instructors can change feedback messages, pre-conditions, and the guidance resources they provide to help students understand how the feedback maps to their task, strengthening the alignment between AF and assessment criteria (Shibani et al., 2019).

Apart from the four open AF tools described above, we identified one other tool, the Student Advice Recommender Agent, SARA (Greer et al., 2015). SARA also provides automated formative feedback messages to students, tailored to students’ demographics, their responses to a large freshman survey, and their predicted grades in a learning context. However, to the best of our knowledge only two papers have been published regarding SARA (Greer et al., 2015; Mousavi et al., 2021), neither of which shows that instructors had any control over the parameters of the tool’s behaviour. Hence, we did not include this in our analysis.

We recognise that some Learning Management Systems now provide varying degrees of targeted messaging, whereby the teacher can configure messages to be sent to different subsets of students, defined by online engagement criteria. For instance, D2L’s Brightspace offers “Intelligent Agents” that the teacher configures using IF…THEN… rules to message students who satisfy criteria such as whether they have accessed a course page, or authored a discussion post. While this is a subset of open AF capability as defined above (i.e., data and rules are restricted to just this platform, not open to data from any platform), nonetheless, we hypothesise that this requires teacher feedback literacy to use effectively. However, we could not find any research studying teachers’ use that met the above literature inclusion criteria, so they are not reflected in this analysis.

To summarise, the open AF approach negotiates the complexity/intelligibility tradeoff differently to the ITS approach. An ITS maintains detailed models of (i) the knowledge and skills taught by each element of the curriculum, (ii) the learner’s degree of mastery, and (iii) pedagogical strategies in order to fully automate task presentation, assessment and feedback—but this complexity closes off the possibility to non-technical people (such as teachers) of making substantial modifications to the student experience. Open AF tools maintain much simpler representations, but their intelligibility permits greater teacher agency over data, rules, feedback information, communication modalities, and modes of student feedback engagement.

Classification and synthesis of evidence

In order to synthesise the evidence about teachers’ practices with open AF tools, the paper’s second author reviewed the articles to identify examples of educators displaying competencies foregrounded in the Boud and Dawson (2021) framework. For example, Blumenstein et al. (2019) described how instructors used SRES for personalised ‘nudge’ feedback to prompt students to complete course tasks, which was interpreted as competency 15: Designs to intentionally prompt student action. Table 3 illustrates this synthesis using the language of the framework (in bold) at the relevant level (macro/meso/micro). The complete analysis is presented in Appendix B. The first author then reviewed the analysis in order to verify the classifications, and the very few differences identified (< 5) were resolved through discussion. Finally, a draft of this article was shared with each of the research teams responsible for the tools, most of whom confirmed that they were satisfied with the way their tool was presented, and a few descriptive and classification changes were made once they clarified aspects of their tools or research studies.

Table 3 Illustrative examples from the analysis of empirical evidence using the teacher feedback literacy competency framework (Boud & Dawson, 2021)

We recognise that the framework’s boundaries between the macro, meso and micro levels are soft, but for coherence, we have retained the categories of feedback literacy as originally presented in the article and use bold text to signal the concepts from the framework. We focus on the competencies that were evident across the systems, with some comparison against Boud and Dawson’s (2021) results. Furthermore, we note that there is a learning curve for teachers to gain fluency with such tools, and for institutions to develop the capacity to deploy them. Such tools require teachers to learn (i) how to differentiate students based on digital traces of behaviour, and (ii) how to differentiate their feedback experience accordingly.

Following the typical process through which research-driven educational technology innovations embed into work practices, it is initially researchers who support teachers in universities pioneering the use of open AF. The researchers then begin to transfer that skill to their teaching innovation and learning technology centres. Thus, the evidence we are about to review concerns ‘early-adopter’ teachers who have an unusual level of support from researchers, and who may be more skilled than most with respect to the tools, affordances and feedback competencies. However, as the literature also documents, (i) those researchers’ institutions are now scaling open AF tools in a sustainable manner, and (ii) other institutions are emulating this through their respective teaching innovation units.

Results

Our analysis of open AF research identified 14 of the 19 teacher feedback competencies being demonstrated by instructors as they used the tools, although some only partially. As summarised in Table 4, the feedback competencies were evident at all three levels of Boud and Dawson’s (2021) framework, to different extents.

Table 4 Summary of the empirical evidence surveyed, in terms of coverage of the Boud and Dawson (2021) teacher feedback literacy competency framework

We now elaborate on the nature of this evidence, at the macro, meso and micro levels, using bold to signal terms from the teacher feedback literacy competency framework.

Macro level: programme design and development

Plans feedback strategically

This competency was observed across all four AF systems. Instructors using OnTask and SRES sought to scale up feedback to meet the needs of larger cohorts. Blumenstein et al. (2019) describe how instructors harnessed the flexible data import function in SRES to collate data across multiple sources, building a more holistic profile based on their students’ behavioural and cognitive engagement, which informed personalised feedback nudges. Similarly, one of the key motivators for instructors to use AcaWriter was to increase their capacity to support student cohorts, because “we can’t afford to do that [giving formative feedback] when we have 400 students because it already takes us maybe 20 h to mark one class… we had to do it in a way that is time-efficient” (Shibani et al., 2020, p.4).
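
As a minimal illustration of the ‘collate data across multiple sources’ pattern described above, the following sketch merges several activity exports into a single per-student profile. The file names, columns and join key are assumptions for illustration only, not SRES’s actual import interface.

```python
# Hypothetical sketch: merging several activity exports into one student profile.
import csv
from collections import defaultdict

def read_rows(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def build_profiles(sources, key="student_id"):
    """Merge rows from several exports (e.g. LMS logins, attendance, quiz scores)
    into one profile per student, keyed by an assumed common student id column."""
    profiles = defaultdict(dict)
    for path in sources:
        for row in read_rows(path):
            profiles[row[key]].update(row)
    return profiles

if __name__ == "__main__":
    profiles = build_profiles(["lms_logins.csv", "attendance.csv", "quiz_scores.csv"])
    for student_id, profile in profiles.items():
        print(student_id, profile)
```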

In planning strategically for feedback, instructors using these AF tools also demonstrated how they were viewing complex feedback connections across a whole unit/program. Lim et al. (2020a) document how the instructor of a flipped learning course designed feedback to be embedded in iterative cycles of activity and feedback over the span of the whole unit, with the intention of helping students to improve their subsequent learning cycles within the unit (Pardo et al., 2019). Similarly, when instructors use ECoach in their unit, they need to consider that other instructors may also be using ECoach to provide feedback to students in their units. Matz et al. (2021) describe how instructors can customise student surveys that will be the basis for AF. Students then receive feedback regarding their progress over several units, in various forms over a semester of study.

Uses available resources well

Through the effective use of AF tools, instructors demonstrate how they harness technology for feedback. With SRES, attendance data has been reported to be particularly significant for instructors, who use the mobile web app to quickly record students’ attendance, which informs subsequent feedback information (Arthars et al., 2019; Blumenstein et al., 2019). These tools are designed to ensure that students can readily access feedback data. AcaWriter instructors reported that a key motivation was that students receive instant, formative feedback on multiple drafts (Shibani et al., 2020). While AF systems help instructors to scale feedback, some have acknowledged that they have not necessarily seen time savings in using these systems for feedback. Arthars et al. (2019) noted instructors’ comments that time was still needed to craft the personalised emails or to respond to replies from students. Notwithstanding this, some instructors expressed that this extra time needed to use the automated feedback system was not necessarily a concern, as “care [for students] overrides the [additional] time” (quoted in Arthars et al., 2019, p. 235).

Creates authentic, feedback-rich environments

Authentic feedback refers to “feedback processes that resemble the feedback practices of the discipline, profession or workplace” with the goal of “promot[ing] the development of capabilities that transfer effectively from university to the world of work” (Dawson et al., 2021, p. 287). Following this definition, authentic feedback is less evident, as the focus in many studies has been on supporting students’ learning in and for academic environments. AcaWriter’s use in legal studies provides one example, since a key graduate competency is the ability to write clearly and persuasively using a comprehensive line of argument. When instructors worked closely with researchers to contextualise AcaWriter feedback to meet the criteria for writing, they sought to promote the writing capabilities required of Law graduates by enabling students to improve their writing through feedback-revision cycles. The results from case studies described in Knight et al. (2020) point to improvements in students’ disciplinary writing submissions, which instructors believed would translate into future career performance, as demonstrated in this quote by a Law instructor: “suddenly I noticed their essays were better. And they will be better in court and they will be better lawyers for it” (quoted in Knight et al., 2020, p. 165). Thus, when designed carefully with instructors, AF for writing may serve as authentic feedback in disciplines where writing is a key professional competency.

From the studies reviewed, it is apparent that instructors were using AF to create feedback-rich environments (Esterhazy, 2018). Feedback practice in higher education has often been criticised as being limited to written feedback on writing tasks, or end-of-module assessment feedback with no opportunity for students to use it for improving their performance (Boud & Molloy, 2013; Dawson et al., 2021; Winstone et al., 2017). In contrast, instructors leverage AF for more regular communication of feedback, especially with respect to students’ out-of-class or self-regulated learning processes over a semester of study. With OnTask and SRES, instructors made feedback processes familiar and commonplace, in the form of regular emails to inform students about their progress with required learning tasks and importantly, to communicate to students how they could stay on course with their study. Similarly, for ECoach, students became familiar with regular pushes of information and feedback through a combination of the five tools within the system, so that they were informed about their progress and provided with actionable feedback to know how to optimise their learning (Matz et al., 2021). Finally, with AcaWriter, instructors played an important role in assisting students to utilize the enriched feedback environment, by briefing students on the tool’s relevance to the specific assignment (Shibani et al., 2020).

Develops student feedback literacy

Feedback literate teachers work to develop their students’ feedback literacy. Carless and Boud (2018) define student feedback literacy in terms of four processes: appreciating feedback (specifically, understanding their role in the process); making judgments (evaluating their own performance against standards), managing affect (in particular, coping emotionally with critical feedback), and taking action (deciding on how to enact information in feedback). They note the potential for AF tools to assist in cultivating such literacy, but caution that, “there remain risks that the process may still be dominated by feedback as telling, learner agency may be lacking and productive action may not ensue unless there are designs for student uptake” (Carless & Boud, 2018, p. 3). In the AF literature, we were able to identify some examples of teacher feedback competency in these regards. In an analysis of students’ affective responses to OnTask-enabled feedback, Lim et al. (2020b) found that students reported taking an active role in response to OnTask feedback, choosing to improve their skills through practice quizzes, and to improve their learning strategies by focussing on topics that required greater mastery.

An important finding from this study was that a high proportion of students experienced negative affective responses to their feedback. While much of this negative affect was in the form of anxiety, students’ comments indicated that they were able to manage this by completing the required task—again providing evidence of feedback enactment and therefore suggesting effective design of feedback on the part of the teacher. On the other hand, the study documented other less productive types of negative affect, namely, frustration, in response to OnTask feedback. Importantly, the study found that students’ affective responses to AF were tied to their perceptions of the feedback information: especially, when students perceived that this provided advice that contradicted their own preferred learning strategies, they experienced frustration, as demonstrated in this quote: “I am a bit frustrated because I know that… my study methods work” (quoted in Lim et al., 2020b, p. 349). This particular finding therefore highlights that when using AF to develop students’ feedback literacy, teachers should provide avenues for students to manage their affective response.

We also find evidence of teachers adapting the student task and assessment criteria to take advantage of the affordances of AcaWriter, in order to promote deeper engagement (Shibani et al., 2022). A civil law academic required her students to demonstrate their critical engagement with AcaWriter’s annotations on their text by adding their own annotations to indicate whether they agreed with them, an activity which contributed towards their final grade. This adaptation to a more dialogical feedback model, in which students are incentivised to exercise agency and give critical feedback on the automated feedback, was in response to the finding that some students were not engaging meaningfully with AcaWriter. This exemplifies a feedback literate use of an AF tool by the teacher in order to promote student feedback literacy.

Finally, there is emerging evidence that learners’ attributes have some influence on their feedback literacy, such as prior academic performance (Lim et al., 2021a), self-efficacy and baseline self-regulation (Tsai et al., 2021), or performance goals (Brown et al., 2019). Moreover, Lim et al. (2021c) found evidence of students’ defensive self-reactions in response to OnTask feedback, suggesting a need for more support in managing affect around their feedback. Tsai et al. (2021) therefore recommend that when using OnTask, there needs to be a recognition of students’ varying feedback literacy, and to provide greater support for fostering such literacy. The ECoach team uses validated survey instruments such as the Motivated Strategies for Learning Questionnaire (Pintrich & de Groot, 1990) and the Test Anxiety Inventory (Taylor & Deane, 2002) to help differentiate how they communicate with students, demonstrating a relatively advanced form of this literacy (Matz et al., 2021). We anticipate that feedback literate teachers in the future will be aware of the evidence regarding the differing responses that these tools can elicit from students, and adopt strategies that respect these. In Sect. “Limitations in the open AF literature”, we discuss those aspects of this competence that have yet to be described in the literature on open AF tools.

Manages feedback pressures

A key objective of AF tools is to relieve the pressures that prevent educators from providing timely, personalised feedback to students, especially in large cohorts of potentially hundreds (Pardo et al., 2019). Studies of open AF tools describe how instructors leverage these technologies to personalise feedback to cohorts as large as 800 students (Arthars et al., 2019). With respect to ECoach, Matz et al. (2021) indicate that instructors can actively address feedback priorities for their courses by selecting which of the five feedback tools they want to implement in their course, so that students get timely and appropriate feedback to support their learning in the course. The following quote by a Biology instructor illustrates the active selection process of ECoach’s feedback features: “I do a pretty hard sell on exam playbook but it’s supported by the data so that’s part of why we do it” (quoted from Center for Academic Innovation, 2021, 2:18–2:23). Unlike these three other systems, AcaWriter is designed for student self-correction, releasing teacher time to assess and provide feedback on the final submitted drafts, as illustrated in the following comment: “the broader motivation [for implementing AcaWriter] was… to provide feedback to students on their written communication that did not require the tutors to have to mark-up reports and provide that back” (quoted in Shibani et al., 2020, p. 4). Shibani et al. (2020), in documenting educators’ experiences of AcaWriter, report that compelling reasons given for its adoption included: (i) the impossibility of giving timely feedback comments on drafts to hundreds of students, and (ii) building students’ literacy regarding what was expected of their writing.

We turn now to the meso level in the Boud & Dawson framework.

Meso level: course module/unit design and implementation

Maximises effects of limited opportunities for feedback

One notable way that instructors using AF demonstrate this competency is efficiency. Specifically, instructors using open AF tools noted how they capitalised on the ability to perform multiple operations on a single platform, such as gathering attendance data, marking students’ in-class assessment, and collecting students’ response to surveys all within SRES, and then using the platform’s delivery system to send personalised feedback to all students based on this information (Arthars et al., 2019). An important effect documented by Arthars et al. (2019) of such tailored feedback was a strengthening of teacher–student relationships through a greater understanding of students, as well as an observable reduction in students who dropped out or withdrew from the course, as illustrated by the following: “we used to have maybe 30, 40 people drop out minimum. Now we have a handful” (Arthars et al., 2019, p. 234).

Secondly, by using such systems, instructors reduce the lag time between students’ assessment and feedback. For example, Arthars et al. (2019) reported how some instructors were using SRES in more advanced ways, building forms to capture performance on in-class assessments, resulting in more efficient marking processes and more detailed feedback. An important effect of this increased efficiency was that course coordinators were able to then work with tutors to address students’ learning gaps through targeted instruction. Turning to another example, once instructors using AcaWriter had contextualised written assignment rubrics with the tool’s feedback, students could request and obtain instant comments on their writing at any time. Evidence from AcaWriter’s implementation in an accounting context indicates that students who used AcaWriter for their written assignment scored significantly higher marks than those who did not, although further investigations are needed to ascertain the consistency of this result (Knight et al., 2020).

Thirdly, instructors who deployed ECoach in their courses demonstrated this competency by choosing the right tool for the job, since they needed to decide which of the five features would be most useful for students’ learning in their own teaching contexts. In studying the relationship between students’ use of the ECoach tools and final course grade in two contexts, Matz et al. (2021) found that the extent of the relationship varied depending on the courses, which highlights the key role that instructors play in deciding which of the features to implement.

Given that AF expands the opportunities for feedback to students, we argue that the ability to maximise the effects of limited opportunities for feedback is a key affordance for instructors leveraging AF to enhance the feedback ecosystem. Deployed skilfully, the emerging evidence indicates such tools can increase the otherwise limited opportunities for feedback.

Organises timing, location, sequencing of feedback events

Instructors using the four AF tools were reported to be organising feedback activities early in the unit/module. For example, instructors using both OnTask and SRES planned feedback at the outset of the actual teaching period and were intentional about its purpose; instructors planned personalised feedback to students prior to census date, to inform them about their progress so that they could make more informed decisions about whether to continue their studies without financial penalty (Arthars et al., 2019; Lim et al., 2021a). Instructors also ensure that feedback information is available in time for subsequent tasks, selecting relevant activity data to set criteria for sending students feedback about task progress and prompting them to complete tasks before deadlines (Blumenstein et al., 2019; Lim et al., 2020). In so doing, instructors may also sequence feedback events to maximise their influence on student learning. A case in point is described in Lim et al. (2020), where the instructor of the flipped learning unit used OnTask to personalise feedback messages on students’ progress and performance on weekly preparatory tasks, in order to regulate future learning cycles.

The deployment of AF also involves instructors taking steps to organise the timing and location of feedback. For example, with ECoach, the behavioural science team worked together with instructors to generate the to-do list to help keep students organised and motivated, with nudges about when to study or use ECoach resources (Matz et al., 2021). As the following quote from the Statistics instructor illustrates, this feature is “one of my favourite parts of ECoach … an actual interactive list of items that they can do or should do and it would still be personalised to them… this is a way to kind of help organise and figure out how to navigate a learning experience that they could take beyond the Statistics class itself” (quoted from Center for Academic Innovation, 2022, 2:42–3:28). With AcaWriter, instructors directed students to AcaWriter as part of assignment preparation (Shibani et al., 2020).

Designs for feedback dialogues and cycles

Skilful instructors design feedback cycles when they use AF tools. Importantly, designing for feedback cycles implicates a need for learning design that involves such elements as nested assessments or staged tasks. Such designs allow for feedback loops (Askew & Lodge, 2000; Carless, 2019), where students have the opportunity to put feedback into practice. For example, the flipped learning course described in Lim et al. (2020) required students to complete weekly preparatory activities (watch a topic video, complete a video quiz, and practice their knowledge with a set of exercises). The instructor used OnTask to create conditions for rule-based messages for students’ engagement with each week’s preparatory cycle (see Pardo et al., 2019). These were then combined into a single memo sent out as weekly feedback to students. In addition, Arthars et al. (2019) describe how instructors intentionally used SRES to provide more immediate feedback so that students could use it to improve on their next submission task: “We’ve moved from…not the best feedback mechanisms to…very prompt feedback on any submitted work. So that the students, before they have to complete their next submission task, have an opportunity to improve” (quoted in Arthars et al., 2019, p. 234). Instructors can also use AF tools to scale up the opportunity for pre-submission feedback. Instructors who have integrated AcaWriter effectively in their teaching issue explicit guidance on using it to improve drafts, prior to assignment submission (e.g., Lucas et al., 2021; Shibani et al., 2020).
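
The weekly rule-and-memo design described above can be sketched as follows. This is not OnTask’s actual rule syntax; it simply illustrates, under assumed data fields and wording, how several teacher-authored conditions over preparatory-task activity might be combined into one personalised weekly message.

```python
# Hypothetical sketch of combining teacher-defined conditions on weekly
# preparatory activity into a single feedback memo.

def weekly_memo(student):
    paragraphs = [f"Hi {student['name']}, here is your Week {student['week']} summary."]

    # Condition 1: engagement with the topic videos
    if student["video_pct"] >= 90:
        paragraphs.append("Great work keeping up with this week's videos.")
    else:
        paragraphs.append("You have watched less than 90% of this week's videos; "
                          "catching up before class will make the exercises easier.")

    # Condition 2: has the video quiz been attempted?
    if not student["quiz_attempted"]:
        paragraphs.append("The video quiz is still open until Thursday; it is the best "
                          "way to check your understanding before the workshop.")

    # Condition 3: progress on the practice exercises
    if student["exercises_done"] < 5:
        paragraphs.append("Try to complete at least five practice exercises; start with "
                          "the worked examples if you are unsure where to begin.")

    return "\n\n".join(paragraphs)

# Illustrative usage with a single fabricated student record
print(weekly_memo({"name": "Dana", "week": 3, "video_pct": 60,
                   "quiz_attempted": False, "exercises_done": 2}))
```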

Manages tensions between feedback and grading

As noted by Winstone and Boud (2020), grading is often conflated with feedback, resulting in the latter being considered more of a grade justification than offering actionable information for students to improve their learning. This is especially likely if feedback is only offered as a product in return for the submission of an assessment. Instructors using open AF tools appear to avoid this tension, since feedback from these systems targets students’ learning processes (Pardo et al., 2017), thereby avoiding the discourse of grades in discussing quality work. For example, with OnTask and SRES, instructors draw on students’ activity and progress data to create personalised feedback advising on study strategies as well as positive messages of support to improve future performance (e.g., Blumenstein et al., 2019; Lim et al., 2020). Certainly, instructors use SRES to provide feedback on in-class assessments, but this is recognised to be a separate process (Arthars et al., 2019). Similarly, instructors implementing ECoach tools focus more on supporting students through their learning in large gateway courses with actionable feedback such as nudges, rather than on feedback as part of grading (Matz et al., 2021). Similarly, the key motivation for instructors to implement AcaWriter in their courses was for their students to receive formative feedback on the structure of their drafts before the summative assessment, better understanding what was being asked of their writing, so that they could improve it prior to submission (Shibani et al., 2020). While AcaWriter does not award grades, a promising observation from the Law academic was that fewer students requested a time-consuming re-grade of their assignment, “attributing it to their improved understanding of the marking criteria and learning to self-assess as a result of the intervention” (Shibani et al., 2020, p. 9).

Utilises technological aids to feedback as appropriate

While the teacher data analysed by Boud and Dawson (2021) referred to some use of technology, including learning analytics, technology was not strongly represented in their dataset. However, our analysis in this paper focuses only on instructors using open AF tools, and as such represents an in-depth elaboration of the nature of this competency.

Designs to intentionally prompt student action

Open AF tools are designed with the explicit goal of assisting teachers to prompt students into changing their behaviour productively. While all of the tools reviewed enable teachers to differentiate the prompts to action (framed as ‘call-to-action’ buttons by Iraj et al., 2020), they also provide the teacher with summaries of whether students have indeed followed links, providing a solution to the challenge of knowing if/how students engage with feedback inputs. The design of AcaWriter is explicitly intended to help students understand the features of academic writing, and prompt more effective revisions to drafts. Shibani et al. (2020, 2021) document the fine-grained revision analysis that is possible when every edit is logged, and how judgments can be made about the quality of those revisions. However, there is relatively little evidence of instructors requiring students to show how they have made use of feedback (although see the AcaWriter example in Sect. “Develops student feedback literacy”). We discuss this area for development later.

Micro level: feedback practices relating to individual student assignments

Feedback relating to individual student progress is, not surprisingly, a key area for AF because such systems log, and can operate on, large amounts of fine-grained student activity data. The question is how this capability can be harnessed by teachers to improve their own or their students’ feedback literacy.

Identifies and responds to student needs

The way that instructors use AF systems to identify student needs typically differs from the ways described in Boud and Dawson (2021), in the context of assessment feedback. Specifically, instructors using AF leverage these systems to identify students’ needs through behavioural metrics of engagement and academic performance, to provide progress or process feedback (Pardo et al., 2018). A key affordance of AF tools such as SRES is the “ability to target feedback to individual students or groups based on data” (Blumenstein et al., 2019, p. 8). In that broad sense, much of this critical literature review illustrates the different forms this can take, and the studies cited reporting improvements either in student academic achievement (Lim et al., 2021a; Pardo et al., 2019), or other process-centric aspects of learning (Lim et al., 2021b; Shibani et al., 2022), provide evidence for the claim that student needs are being met.

Crafts appropriate inputs to students

Open AF tools by definition (Sect. “Distinguishing open and closed automated feedback tools”) permit or require the teacher to craft conditional feedback messages and define the conditions that will differentiate which students receive them and when. As explained in Sect. “Distinguishing open and closed automated feedback tools”, open AF tools place no constraints on teachers regarding how they craft the automated messaging, so while messaging can address the quality of a student’s task performance or online engagement, a key motivator for the development of the research-based tools reviewed here has been to “communicate care” (Matz et al., 2021), and cultivate self-regulated learning (e.g., Broadbent et al., 2020; Pardo et al., 2018). Thus, Lim et al. (2020) detail how instructors using OnTask communicate feedback not only as nudges for students to take action, but also with recommendations of effective study strategies, with notes of encouragement to affirm and motivate as well as to offer support, while Blumenstein et al. (2019) provide examples of similar kinds of feedback messages that instructors have crafted in SRES. ECoach uses “motivational interviewing principles” to refine educators’ messaging, “apply[ing] the principles of plain language […], tailored communication […], and “sticky” communication” (Matz et al., 2021, p. 217).

This competency is also demonstrated when teachers choose to explicitly align the AF messaging with the assessment rubric. For instance, teachers have used AcaWriter to clarify to students how different rhetorical moves in writing map to the rubric (Knight et al., 2018, 2020), and AF-rubric mapping is explicitly supported in SRES, which enables the teacher to design messages based on each cell in a rubric table, to help them keep track that they have differentiated messages for each student level (Arthars & Liu, 2020).
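
A minimal sketch of the rubric-to-message mapping idea is shown below; the criteria, levels and wording are hypothetical, and this is not the SRES or AcaWriter interface, only an illustration of authoring one differentiated message per rubric cell.

```python
# Hypothetical sketch: one teacher-authored feedback message per rubric cell
# (criterion x performance level). Criteria, levels and wording are illustrative.

RUBRIC_MESSAGES = {
    ("argument", "developing"): "Your position is stated but needs supporting authority; "
                                "link each claim to a case or statute.",
    ("argument", "proficient"): "Clear line of argument; sharpen the counter-argument "
                                "to reach the top band.",
    ("structure", "developing"): "Signal your rhetorical moves: state the problem, your "
                                 "position and the contribution in the introduction.",
    ("structure", "proficient"): "Well signposted; consider adding a brief roadmap paragraph.",
}

def feedback_for(grades):
    """grades maps each criterion to the level awarded for one student."""
    missing = [cell for cell in grades.items() if cell not in RUBRIC_MESSAGES]
    if missing:
        # Helps the teacher check that every rubric cell has a message authored.
        raise KeyError(f"No message authored for rubric cells: {missing}")
    return [RUBRIC_MESSAGES[cell] for cell in grades.items()]

print("\n".join(feedback_for({"argument": "developing", "structure": "proficient"})))
```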

We note that in the context of this teacher feedback competency, the openness and transparency of these rule-based open AF tools, while desirable attributes for teachers, also bring technical limitations. Firstly, since the software is dependent on predictions made by teachers about what needs to be monitored in students’ work, it is not able to respond autonomously to situations revealed in students’ work that extend beyond these expectations, and adapt its messaging. Consequently, responsibility for responding to unexpected student behaviour rests with the teacher, who may choose to adjust the AF tool (for instance, if many students unexpectedly failed a task, a teacher using OnTask or SRES could adjust subsequent rules and messaging). Making sense of surprises and responding appropriately is of course where humans often excel: skilled teachers recognise a wide range of unexpected signals in students’ work and behaviour that are invisible to machines, and can craft appropriate inputs.

Secondly, teachers’ expectations about how their students will behave are of course limited to what can be identified in digital activity traces, and they can differentiate students only to the degree that they can codify rules specifying their feedback inputs. “What is codifiable” is changing continuously given advances in learning analytics that show the potential to recognise both multimodal and higher-order student capabilities (e.g., Joksimovic et al., 2020; Schneider et al., 2021). Regardless of technical developments, however, the responsible use of open AF tools should not be conceived as a way to automate teachers ‘out of the loop’, but as a way of augmenting their capacity in specific ways. It is conceivable that in the future, open AF tools will be able to handle unexpected patterns in student data using some form of machine learning, and generate natural language feedback inputs other than those explicitly crafted by the teacher, but the complexity of such systems will in turn reduce algorithmic transparency, and potentially, the trustworthiness of a tool that could give feedback that is not easily predictable to teachers.

Differentiates between varying student needs

Boud and Dawson (2021) highlighted three challenges that teachers faced with respect to this competency: meeting the needs of disengaged or marginalized students; responding to students who were not receptive to feedback; and supporting students emotionally. It is here that teachers using AF tools—especially open AF tools—have demonstrated some features of these competencies and may be able to address the first of these challenges. Because AF systems draw on live data about student engagement, instructors can be made more aware of their students’ ongoing progress during a course; this allows them to identify and support disengaged or marginalized students much earlier than such students could be detected by other means. As noted by the Statistics instructor using ECoach in her subject, “we were able to get a lot of data from the use of this ECoach tool to learn about my students and what they were doing… so that we could tweak what we provide for them in the appropriate way to enhance their learning. It was very powerful” (Center for Academic Innovation, 2022, 3:46–4:08). Research on OnTask and SRES describes how instructors analyse course analytics to group students into bands of engagement (Blumenstein et al., 2019; Lim et al., 2019, 2021b). In fact, one of the expressed intentions of instructors using SRES was to be able to communicate feedback not only to students who were underperforming, but also to those who were making good progress, because “…we forget that group, often. We don’t often give them enough praise and recognition” (quoted in Arthars et al., 2019, p. 235). However, with regard to the other two challenges highlighted by Boud and Dawson (2021) for this competency, it is not yet clear from the research how these could be addressed through the use of AF—these are avenues for further investigation.
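A minimal sketch of the kind of engagement banding described above is shown below: students are grouped by a simple activity metric and each band receives a differently pitched message. The metric, thresholds and wording are invented for illustration and do not reproduce how OnTask or SRES compute bands.

```python
# Illustrative banding of students by an engagement metric (hypothetical thresholds),
# with a differently pitched message per band; not OnTask's or SRES's actual logic.
logins_last_fortnight = {"Ana": 0, "Ben": 4, "Cai": 11}

def band(logins: int) -> str:
    if logins == 0:
        return "disengaged"
    return "low" if logins < 5 else "on_track"

messages = {
    "disengaged": "I haven't seen you online recently. Can I help you get started this week?",
    "low": "You've made a start; the Week 4 worked examples are a good next step.",
    "on_track": "Great, consistent engagement. Keep it up, and do try the extension task.",
}

for name, logins in logins_last_fortnight.items():
    print(f"{name} ({band(logins)}): {messages[band(logins)]}")
```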

This concludes the analysis of the open AF literature through the lens of Boud and Dawson’s framework, in response to RQ1: Which teacher feedback competencies are necessary for the skilled use of open AF tools? Next, we review the relative strengths and weaknesses identified.

Limitations in the open AF literature

We now discuss the competencies that were less visible in the open AF literature reviewed, as summarised in Table 4; this motivates implications for future work to strengthen the evidence base.

Macro level

Develops/coordinates colleagues. An example of how this competency could be further addressed is provided by SRES research. Arthars et al. (2019) describe how communities of practice have formed within and between departments, where learning designers and faculty work together to advance their implementation of feedback with SRES. This has not been formally studied with the other AF tools, so it serves as an exemplar to share with other educators using AF systems.

Improves feedback processes. Given the novelty of AF systems, studies are only now emerging that evaluate the outcomes of a tool’s implementation, gathering evidence of its effects and of possible system improvements. For open AF tools, more could be done by instructors to establish processes for knowing whether students have actioned feedback. While the technology is capable of collecting ‘read receipts’ from recipients, as well as logging students’ interactions with the feedback messages, this is not yet part of teacher practice (see Knight et al., 2020). To date, only the study on SRES by Arthars et al. (2019) describes how instructors have actively solicited students’ comments on their personalised feedback, by having students respond directly via the SRES platform. By establishing this process, instructors were able to see whether students were attending to the automated feedback, which informed their reflection on how to improve it. This is one example of how instructors could close the feedback loop for themselves, to learn how their feedback can be refined for students.
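As a sketch of what such a process might look like computationally (independent of any particular tool), the snippet below derives, from a hypothetical interaction log, which students opened a feedback message and which went on to act on it. The event names and data are invented for the example.

```python
# Hypothetical interaction log: (student, event) pairs. Event names are invented,
# purely to illustrate closing the loop on whether feedback was read and actioned.
log = [
    ("Ana", "feedback_opened"), ("Ana", "resource_clicked"),
    ("Ben", "feedback_opened"),
    ("Cai", "feedback_sent"),           # sent but never opened
]

opened   = {s for s, e in log if e == "feedback_opened"}
actioned = {s for s, e in log if e == "resource_clicked"}
everyone = {s for s, _ in log}

print("Opened but not actioned:", opened - actioned)
print("Never opened:", everyone - opened)
```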

Develops student feedback literacy. While we found evidence of one aspect of this competency across all four open AF tools (Taking action), an aspect that was weak in the literature is Managing affect. Students’ affective responses to feedback from these open AF tools have rarely been documented. Given that negative affective responses to feedback, if not addressed, can be detrimental to students’ continued learning efforts as well as to their recipience of further feedback from AF tools, future research should explore this aspect of students’ feedback recipience and build in avenues for helping students to counter unproductive negative emotions.

The tone of the message itself is a significant factor in the arousal of positive or negative emotions (Winstone et al., 2017): when feedback communicates care, students are more likely to engage with it. Furthermore, students’ feedback literacy is enhanced when they trust their teachers (Boud & Carless, 2018; Winstone et al., 2017). This connects with the importance of the culture set by the teacher: caring AF messaging that is contradicted by other teacher practices will be seen as hollow, undermining the building of trustful relationships. In this sense, the skilful use of open AF tools will be congruent with, and reinforce, other teacher practices that promote students’ sense of belonging.

Meso level

Constructs and implements tasks and accompanying feedback processes. A critical consideration in demonstrating this competency is designing for self-assessment. To do this, instructors could encourage students to evaluate their own progress, for example by checking off tasks as they are completed, before presenting them with feedback.

Frames feedback information in relation to standards and criteria. A constraint on this competency is the widespread problem that while many courses define learning objectives, these are often not clearly formulated in terms of standards and criteria, that is, as learning outcomes. Until this is addressed, open AF tools are limited in their ability to augment this feedback competency. Indicators of how this is currently happening include: (i) teachers can build forms in SRES for rubric-based marking and feedback on in-class assessments (Arthars & Liu, 2020); (ii) the ECoach Exam Reflections Tool prompts students to reflect on their performance on exams, with feedback personalised based on instructor-defined grade thresholds (Matz et al., 2021); and (iii) teachers have co-designed AcaWriter’s behaviour and messaging to ensure that feedback aligns with the rubric and other expectations of writing.

Designs feedback processes that involve peers and others. Future development of AF tools should do more to recognise the social dimensions of feedback. Given the limitations of AF, peer feedback could in principle provide a scalable, complementary source of human input for reflective deliberation. An indication of the efficacy of peer-to-peer conversations about AF is provided by Shibani (2019, Chap. 4), who reports an exploratory pilot study of peer discussion about AcaWriter feedback, concluding that while this showed promise, more scaffolding is required to address students’ different abilities. Another approach might be to share the study strategies used by strong students to help new students; ECoach includes such tips through expert curation, but variations on this model might scaffold peer feedback more directly.

Micro level

Differentiates between varying student needs. While one aspect of this competency is addressed strongly in the literature (Student online disengagement), two others are not yet. There is, as yet, no evidence that teachers’ use of any of the open AF tools has made an impact (positive or negative) on Students’ feedback recipience or Students’ emotional needs. The studies by Lim et al. (2020, 2021c) documented some of the less productive emotional responses from students to their personalised feedback, namely frustration at being advised to use a learning strategy that went against their own preference. However, there was no information about whether the instructors actually knew about their students’ feedback recipience, in particular the negative emotional responses to the personalised feedback they had crafted, or about any further support for students’ emotions that may have been given. For example, students could have replied to their personalised feedback, thereby continuing a feedback dialogue and presenting further opportunities for instructors to provide any emotional support that might be needed. Further research is needed to document this aspect of supporting students’ recipience of automated feedback.

Automated feedback competencies for teachers

As noted earlier, the competency framework was derived from interviews with university teachers about their expert practices, only a few of which were technology intensive. The preceding analysis has provided a detailed account of the ways in which the effective uses of open AF tools map to the identified teacher competencies. In the process, we have recognised some competencies that extend the framework.

Thus, building on the detailed analysis above, we are now able to formulate a response to RQ2: What does the skilled use of open AF tools add to our conceptions of teacher feedback competencies?

We propose three additional “AF competencies” for teachers, each of which maps most obviously to the meso level (course module design) of the framework:

  • Explicitly structures AF feedback design

  • Sequences AF feedback processes across a course using digital templates

  • Adapts learning design to exploit automated feedback

Explicitly structures AF feedback design

Computers and humans have very different capabilities, and the computational thinking underlying AF design places particular demands on teachers. In SRES, ECoach and OnTask, teachers must translate their learning intentions—which kinds of students should receive what feedback, and when—into a suitable dataset, plus IF…THEN… rules that specify how the tool will recognise the work of those students, and variables that personalise the message templates. While teachers may not be used to this type of structuring with respect to feedback, most have an internal logic governing when, and what, comments they make on particular student products; in working with AF, it becomes necessary to articulate that logic explicitly.
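The sketch below illustrates, under stated assumptions, what this translation can amount to: a small dataset, a list of IF…THEN rules, and a message template whose variables are filled per student. The column names, rule syntax and wording are hypothetical and are not the actual rule language or data model of OnTask, SRES or ECoach.

```python
# Sketch of the teacher's "explicit structuring" task: a dataset, IF...THEN rules,
# and a message template with variables. Column names, rules and wording are
# hypothetical; this is not the rule syntax or data model of OnTask, SRES or ECoach.
import csv, io

dataset = io.StringIO(
    "name,week3_quiz,videos_watched\n"
    "Ana,45,2\n"
    "Ben,82,7\n"
)

rules = [
    # (condition on a row, message template with {variables})
    (lambda r: int(r["week3_quiz"]) < 50,
     "Hi {name}, your Week 3 quiz score was {week3_quiz}%. The revision videos "
     "cover the topics most people found hard, so they are worth a look before the tutorial."),
    (lambda r: int(r["week3_quiz"]) >= 80,
     "Great work on the Week 3 quiz, {name} ({week3_quiz}%). Try the extension "
     "problems if you'd like a challenge."),
]

for row in csv.DictReader(dataset):
    for condition, template in rules:
        if condition(row):
            print(template.format(**row), end="\n\n")
            break  # first matching rule wins in this simple sketch
```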

A commonly reported method of inducting teachers into this more explicit form of feedback design is partnership, whereby a learning technologist or researcher helps translate between the teacher’s mental model and the computational model. The technology specialist ‘drives’ the AF tool in consultation with the teacher, who explains what they want to accomplish. While this relieves the cognitive load on the teacher, it correspondingly leaves them dependent on the expert; in time, however, as they build confidence and fluency, teachers begin to experiment for themselves (cf. Arthars et al., 2019). There may be disciplinary differences in how comfortable teachers are with these processes.

In the case of AcaWriter, this explicit structuring of feedback design takes different forms. The tool models sentences at a high level of abstraction, identifying to the teacher and student the particular ‘rhetorical move’ each sentence makes. AcaWriter provides three levels of explicit structuring, each requiring a different degree of teacher agency and design effort. Relatively little effort is required if the teacher simply wishes to reword the feedback messages for each condition; deeper computational thinking is required if they want to invent new conditions to trigger the messages; and if they want to change the parser’s behaviour, they must work with a researcher who can edit the rule-based patterns.
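To illustrate the "condition triggers message" layer of this structuring, the sketch below maps a set of detected rhetorical moves to teacher-worded feedback. The move labels, the stand-in for parser output, and the messages are all invented; AcaWriter's actual parser and feedback rules are considerably more sophisticated.

```python
# Illustrative sketch of condition-triggered feedback on detected rhetorical moves.
# Move labels, the detection stand-in and the messages are invented; this is not
# AcaWriter's implementation.
detected_moves = {"summarises_literature", "states_contribution"}   # stand-in for parser output

feedback_rules = [
    # (condition over the set of detected moves, teacher-reworded message)
    (lambda moves: "states_contribution" not in moves,
     "I couldn't find a sentence that signals your own contribution. Try making it explicit."),
    (lambda moves: "identifies_gap" not in moves,
     "Consider adding a sentence that identifies the gap your work addresses."),
    (lambda moves: "states_contribution" in moves,
     "Good: you clearly signal your contribution. Check that the evidence backs it up."),
]

for condition, message in feedback_rules:
    if condition(detected_moves):
        print("-", message)
```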

Explicit structuring is not a trivial act. It requires the teacher not only to work closely with a tool, but also to devise structures that support students appropriately. The danger with rule-based feedback designs is that it is tempting to frame them solely as opportunities to provide information. The more sophisticated question is how teachers can employ structures that support students’ agentic interaction with feedback processes. This requires teachers to develop a particular AF feedback literacy, in association with the deep understanding associated with teaching a particular cohort in a specific context.

Sequences AF feedback processes across a course using digital templates

The malleability of digital artifacts has created completely new ways to create, share, annotate and improve design artifacts: typically, documents and visualisations that aid personal and joint cognitive work. Well-designed digital templates promote good feedback practices, and thus raise feedback literacy if teachers understand the rationale for those practices. Teachers often find such representations helpful for thinking strategically about when, and why, they will want to differentiate comments to students, and how such a series of comments might inform students’ future work. An open AF example is the Mailout Scheduling Template created to scaffold a teacher or team in planning how they will use OnTask to send feedback over the course of a semester (as reported in Lim et al., 2021a).

An example is shown in Fig. 1, which serves as a template encapsulating a learning design pattern (Bearman et al., 2021) that explicitly embodies feedback. Such a document both supports the development of teacher feedback literacy and assists with designing feedback processes. Without such templates and plans, the use of open AF tools such as SRES, OnTask and ECoach will be ad hoc, with the risk of creating a rather chaotic learning experience with poorly timed, poorly written feedback.

Fig. 1 AF tools introduce new digital documents. Example of a Mailout Scheduling Template (Top: a timeline schematic; Bottom: extract from an Excel spreadsheet)
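To give a flavour of the information such a scheduling template captures, the sketch below represents a semester mailout plan as a simple table of week, audience rule and message focus. The entries are invented for illustration and do not reproduce the template shown in Fig. 1.

```python
# Illustrative representation of a feedback mailout schedule as plain data.
# Weeks, audience rules and message foci are invented, not the Fig. 1 template itself.
mailout_schedule = [
    {"week": 2,  "audience": "no LMS login in week 1",         "focus": "welcome + how to get started"},
    {"week": 4,  "audience": "quiz 1 score below 50%",         "focus": "revision resources + office hours"},
    {"week": 7,  "audience": "all students",                   "focus": "assignment 1 expectations and rubric"},
    {"week": 11, "audience": "assignment 1 not yet submitted", "focus": "late-submission support"},
]

for m in sorted(mailout_schedule, key=lambda m: m["week"]):
    print(f"Week {m['week']:>2}: to students with '{m['audience']}' -> {m['focus']}")
```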

Digital templates are in some ways another means of structuring, but ones that also join AF processes to general course processes. Implicitly, these templates, like many learning design patterns, allow the sharing of course materials among teachers and possibly students. The skilled use of a template is thus not just about ‘filling it in’, but about coordinating the people and artifacts involved in feedback processes. The joint development of the document may serve as a pivotal point for feedback design in a large team working with the AF tool.

Adapts learning design to exploit automated feedback

Feedback literate teachers understand the strengths and weaknesses of an AF tool, and can adapt their teaching to exploit the former, and compensate for the latter. An example is adapting one’s learning design to exploit the strengths of AF, which then leads to changes in AF functionality, leading to further changes to learning design, and so on in an iterative cycle. This is an instantiation of the Task-Artifact Cycle (Carroll & Rosson, 1992) originally developed in the field of human–computer interaction, to describe how a task shapes the artifacts that are selected or designed to assist it, whose affordances in turn shape the task, and so on.

To take a simple educational example, once one understands the interactional affordances of a collaborative document editor, one can design student co-writing and co-annotation tasks that would otherwise be impractical or impossible. Having run this with a cohort, the teacher might recognise weaknesses in the quality of student annotations, and specify particular kinds of annotations that they wish the tool to classify. This in turn enables new kinds of analytics-driven feedback to students on the kinds of annotations they make, on which students can then be asked to reflect in their final reports. The task and digital artifact thus mutually shape each other as they evolve.
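A toy sketch of the analytics step in this hypothetical example is shown below: a naive keyword-based classifier stands in for whatever annotation classification the tool would actually provide, and the categories are invented purely for illustration.

```python
# Toy sketch of analytics on classified annotations: a naive keyword-based classifier
# stands in for whatever classification the editor/tool would actually provide, and
# the categories are invented for illustration.
from collections import Counter

annotations = {
    "Ana": ["I disagree because the sample is small", "see Smith 2020 for a counter-example"],
    "Ben": ["nice point", "agreed"],
}

def classify(text: str) -> str:
    lowered = text.lower()
    if "because" in lowered or "counter" in lowered or "disagree" in lowered:
        return "challenges_with_reasons"
    return "social_agreement"

for student, notes in annotations.items():
    profile = Counter(classify(n) for n in notes)
    print(f"{student}: {dict(profile)}")   # e.g. feed back the balance of annotation types
```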

This competency necessarily takes time to emerge; hence, in the context of open AF tools, we see only preliminary evidence that as educators come to understand how instant feedback changes the student experience, they evolve student activities to take advantage of these new capabilities. Arthars et al. (2019) document that once teachers understood how SRES worked, and had built their confidence, they started to build mini-surveys within SRES to capture attendance and students’ understanding of concepts, in order to return rapid feedback. In another example of the task-artifact cycle, Shibani et al. (2019) document how teachers (i) learned to align the language they used in class to describe academic writing with the language used by AcaWriter; (ii) aligned AcaWriter’s sentence classifications with their rubrics to ensure students could see the relevance to their grades; and (iii) evolved the student task to take advantage of AcaWriter (e.g., to include explicit reflection on AcaWriter’s feedback, or to incorporate its feedback into peer discussions).

Looking ahead, we anticipate teachers with this competency will demonstrate an understanding of the differences between human and machine ‘intelligence’, which helps articulate some of the limitations associated with AF tools. Computers can complete tasks at scale, rapidly and reliably (Bearman & Luckin, 2020). Computers, at present, do not develop rationales or adapt to deal with situations that are not within their original parameters (Luckin, 2018). So, it is important to understand that AF tools cannot, without the explicit structuring provided by people, adapt to different kinds of learners, or recognise concerns beyond the immediate scope of the tool.

Skilled use entails knowing the limitations of the particular AF tools in use. Through knowing limitations, teachers can make better use of existing affordances or seek out other means of supplementing gaps. Overly positive views of technologies in higher education, as critiqued by Selwyn (2014), bring the danger that educators cannot properly integrate such tools into their learning design. For example, by identifying whether a tool affords open AF, with its need for human involvement, a teacher can start to understand how they can work with it. Limitations are inevitable, but knowing them is valuable: it enables a skilled educator to mobilise the range of resources, including AF, that will best serve their learning design.

Conclusion

This paper opened by recognising that two ‘tectonic forces’ are changing the conceptual and digital landscapes for feedback. Firstly, there is growing recognition that we need to conceive “feedback” in much richer ways, and secondly, there are exciting new possibilities for technology-enhanced feedback. The risk is that “the tail wags the dog”, with technology making it possible to implement poor feedback design and practices at unprecedented speed and scale.

We have argued, therefore, that the effective design and deployment of automated feedback tools should be (i) grounded in the scholarship of feedback literacy, and moreover, (ii) should promote feedback literate competencies. However, to date, there has been no systematic account of the nature of teachers’ feedback competencies when using AF tools. In the absence of clear guidance on how AF tools should be deployed responsibly, they risk being ‘bolted on’ to courses without due consideration of how they will affect student sensemaking and action—indeed, risking undermining students’ own feedback competencies.

This paper’s analysis of the literature on the skillful use of AF tools provides evidence that the effective use of open automated feedback tools requires teachers to demonstrate a wide range of competencies identified in the Boud and Dawson (2021) framework. We have documented and discussed the current strengths and weaknesses of four open AF tools, clarifying which competencies are evidenced most strongly in the literature, and which have weak evidence. In the process of reflecting on teachers’ AF competencies, we have identified three “automated feedback competencies” distinctively tied to effective teacher practices with open AF tools. Finally, our analysis has shown some evidence that newer, open AF tools—at least, the tools described here—are not necessarily fostering passive forms of feedback processes, as feared by Carless and Boud (2018), but can be used to cultivate learner agency and productive action. However, we acknowledge that research needs to continue, to clarify how open AF tools may be used in richer ways to foster both teacher and student feedback literacies.

While this analysis has identified a diverse range of teacher feedback literacy competencies with open AF tools, and specific areas of weakness in the current literature, we suggest that this work has further implications for researchers, designers and evaluators. For example, those designing new tools could use the competencies discussed in this paper as inspiration for how the user experience can be designed to facilitate the adoption of more feedback literate practices. Another implication is that the competencies detailed here offer helpful analytical constructs to interpret both qualitative data (e.g., student/teacher interviews), and quantitative data (e.g., can we envisage ‘feedback literacy analytics’ which extract patterns from student activity data?). Finally, when evaluating open AF tools, this work foregrounds a fundamental question for the developers of automated feedback tools, namely, Does use of this tool promote or obstruct teacher feedback literacy?

We recognise that this analysis has limitations, which also open avenues for future work. Firstly, as noted in Sect. “Limitations in the open AF literature”, the open AF literature is sparse with respect to evidence of certain feedback practices; future work should investigate whether open AF tools can engage and develop these teacher feedback practices. Secondly, while the focus of this paper has been on teachers’ feedback literacy with open AF tools, an important question arising from this analysis is how best to cultivate students’ feedback literacy for engaging with automated feedback. Section “Develops student feedback literacy” reviewed the small amount of available evidence; new analyses and empirical evidence are now emerging (Shibani et al., 2022; Tsai et al., 2021). Thirdly, technology is in constant flux, and the specific examples of open AF tools selected for this analysis will undoubtedly date. Future analyses that reflect critically on the affordances of new tools, and on the degree to which this paper’s analysis applies to the evidence as these tools are deployed, will be important. Finally, this paper has argued that open AF tools represent a pedagogically important category because of the agency they give teachers to exercise their feedback literacy competencies. By implication, closed AF tools deny or limit such opportunities, but it is possible that teachers work around these limitations in creative ways; it would be interesting to document if, and how, this is occurring.

To conclude, our vision is to create feedback-rich environments that can scale through the judicious use of feedback technologies, by equipping teachers with the feedback literacy practices they need to wield such tools effectively. It is hoped that this paper provides a helpful map of the current state of the art, a guide for teachers about the capabilities they need to make the most of automated feedback, and orientation for the researchers and designers of automated feedback tools.

Availability of data and materials

See Table 2 for the full list of papers analysed.

Notes

  1. The availability of behavioural scientists to advise teaching teams on the design of AF is a distinctive feature of the ECoach model, developed at the University of Michigan. This is justified by the focus on student success in introductory gateway courses studied by many hundreds of students at a time. Clearly, most universities do not yet have such expertise readily available, but a consortium is now demonstrating how the model can be adopted and adapted in other institutions.

  2. Brightspace Intelligent Agents feature: https://documentation.brightspace.com/EN/le/intelligent_agents/instructor/create_agent.htm

Abbreviations

AF: Automated feedback

AI: Artificial intelligence

ITS: Intelligent tutoring systems

SRES: Student relationship engagement system


Acknowledgements

We thank the teams who created and evaluated the open automated feedback tools reviewed in this paper, for their work and for their feedback on earlier drafts: Abelardo Pardo (University of South Australia: OnTask), Danny Liu (University of Sydney: SRES), Tim McKay, Holly Derry, Cait Hayward, Rebecca Matz (University of Michigan: ECoach).

Author information


Contributions

The authors have contributed equally to this paper and approve submission to IJETHE.

Corresponding author

Correspondence to Simon Buckingham Shum.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

No financial or non-financial competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: Examples of open Automated Feedback tools included in the literature analysis

OnTask: The teacher’s user interface (Pardo et al., 2018). (Left) The message editor for metric selection, rule generation and personalisation. (Right) Previewing an email to check that it has personalised the feedback information correctly for a given student. Blue outline indicates portion of text personalised to students who have not yet participated in discussion forum activity. For details see https://cic.uts.edu.au/tools/ontask


ECoach: (Left) A To-Do list tuned to the curriculum, prompting student reflection and action. (Right) A tailored post-exam reflection message integrating student self-report data about how they regarded their grade, and eliciting reflection on study habits. For details see https://ai.umich.edu/software-applications/ecoach


AcaWriter: Automated feedback on academic writing. (Left) Highlighting of sentences in which it detects academic ‘rhetorical moves’ (see legend). (Right) Feedback information for the author. For details see https://cic.uts.edu.au/tools/awa


Appendix B: Summary of the literature analysis

The paper’s literature analysis under the headings and competencies of the teacher feedback literacy competency framework (Boud & Dawson, 2021).

For each numbered competency below, the evidence is summarised per tool: OnTask & SRES (combined; see note a), ECoach, and AcaWriter. A dash (–) marks a cell left blank in the analysis.

Macro

1. Plans feedback strategically
OnTask & SRES: Identifies feedback as a strategic intervention by (1) scaling up feedback to suit larger cohorts (Pardo et al., 2019); (2) viewing complex feedback connections across a whole unit (Lim et al., 2020)
ECoach: Identifies feedback as a strategic intervention by (1) scaling up feedback to suit larger cohorts (Brown et al., 2021); (2) viewing complex feedback connections across a whole unit and even a whole course (Matz et al., 2021)
AcaWriter: Identifies feedback as a strategic intervention by scaling up feedback to suit larger cohorts (Shibani et al., 2019)

2. Uses available resources well
OnTask & SRES: (Both) Utilises technology to collate multiple data sources for personalised feedback, and to push feedback to students. (Both) Ensures students can readily access data by pushing feedback out via email and/or LMS. (SRES) Saves instructors time (Blumenstein et al., 2019)
ECoach: Utilises technology to collate multiple data sources for personalised feedback, and to push feedback to students (Matz et al., 2021). Ensures students can readily access data by pushing feedback out via email and/or personalised web portal (Chen et al., 2017)
AcaWriter: Utilises technology to collect and analyse written data for personalised feedback, and to push feedback to students (Shibani et al., 2020). Ensures students can readily access data at any time when requested (Shibani et al., 2020)

3. Creates authentic feedback-rich environments
OnTask & SRES: (Both) Makes feedback processes familiar and commonplace, in the form of regular email communication. (SRES) Modelling authentic feedback—through rubrics (Arthars et al., 2019)
ECoach: Makes feedback processes familiar, through regular pushes of information and feedback through the platform (Matz et al., 2021)
AcaWriter: Modelling authentic feedback—by co-designing feedback with disciplinary writing practices (Knight et al., 2020). Assists students to utilize information from the environment in which they operate by briefing students on AcaWriter and how the feedback is relevant for the assignment (Shibani et al., 2020)

4. Develops student feedback literacy
OnTask & SRES: (Both) Students appreciate especially the quality and personalised nature of their feedback and act on recommendations to improve their work (Arthars et al., 2019; Lim et al., 2020). (OnTask) Students are willing to seek this feedback (Tsai et al., 2021). (OnTask) However, students may need more support in managing affect around their feedback (Lim et al., 2020; Tsai et al., 2021)
ECoach: Students have been evidenced to be able to make judgments/take action based on feedback, though this may depend on their own performance goals (Brown et al., 2019)
AcaWriter: Instructors induct students into AcaWriter and explain how to engage with it (Shibani et al., 2020). Students (though not all) appreciate the specificity of the feedback and are able to act on recommendations. The writing task and assessment criteria can be designed to require students to evidence critical engagement with AF, with deeper engagement associated with stronger grades (Shibani et al., 2022)

5. Develops/coordinates colleagues
OnTask & SRES: (SRES) Mutually shares successful feedback practices with colleagues—communities of practice have been formed within and between departments, where learning designers and faculty work together (Arthars et al., 2019). (Both) Co-design with researchers—researchers work with instructors to co-design the implementation of personalised AF using OnTask (e.g., Iraj et al., 2020)
ECoach: Co-design with researchers—behavioural scientists work with instructors to determine which features of ECoach to use in their courses, and to customise surveys for students, which forms part of the basis of personalised AF (Chen et al., 2017; Matz et al., 2021)
AcaWriter: Co-design with researchers—instructors co-designing with researchers is a key strategy helping instructors to adopt AcaWriter in their classrooms (Shibani et al., 2020). Researchers co-design learning tasks to integrate AWA into meaningful teaching and learning activities (Knight et al., 2020)

6. Manages feedback pressures (for self and others)
OnTask & SRES: (Both) Organises feedback information generating sessions to minimise teachers’ repetitive work, by having the course coordinator as the sole feedback coordinator. (SRES) Manages workload to ensure that greatest feedback priorities are met—SRES facilitates the recording of attendance data, assessment data, and other important student data, and acting on it over the semester (Arthars et al., 2019). (SRES) Students sometimes reply to the feedback, allowing staff to know that they have read it (Arthars et al., 2019)
ECoach: Manages workload to ensure that greatest feedback priorities are met—instructors can select how many of the 5 ECoach feedback tools they want to implement for their course so that students get timely and appropriate feedback to support their learning in the course (Matz et al., 2021)
AcaWriter: Designs for student self-correction—students reflect on their own writing in view of AcaWriter feedback (Lucas et al., 2021; Shibani et al., 2022)

7. Improves feedback processes
OnTask & SRES: (SRES) Collects evidence about the effectiveness of feedback on learning—students can comment directly to the platform in response to personalised messages, allowing teachers to close the loop and to reflect on their feedback support (Arthars et al., 2019)
ECoach: Researcher/system developer collects evidence about the effectiveness of feedback on learning—uses a design-based research approach to evaluate ECoach feedback and enhance its design (Matz et al., 2021)
AcaWriter: Researcher/system developer collects evidence about the effectiveness of feedback on learning—log data from AcaWriter can be mined to examine how students have engaged with their feedback, in order to optimise the system (Shibani, 2020)

Meso

8. Maximises effects of limited opportunities for feedback
OnTask & SRES: (Both) Coordinates feedback with other pedagogical practices—e.g., triggers based on timely access to resources for assessment (Lim et al., 2019). (Both) Promotes efficiency—instructors have all the data in the repository for personalising feedback, and use a single email to personalise feedback to different groups of students (Arthars et al., 2019). (Both) Allows instructors to give feedback on students’ out-of-class online activities (Lim et al., 2021a). (SRES) Reduces lag time between students’ assessment and feedback—builds forms in SRES to capture performance in in-class assessments, resulting in more efficient marking processes and more detailed feedback (Arthars et al., 2019)
ECoach: Using the right tool for the job—academics can select which of 5 tools in ECoach will be best for giving students feedback to optimise learning in their course (Matz et al., 2021)
AcaWriter: Reduces lag time between students’ submission and feedback—students get instant feedback on their writing any time they request it (Shibani et al., 2019)

9. Organises timing, location, sequencing of feedback events
OnTask & SRES: (Both) Organises feedback activities early in the semester—plans feedback ‘mailouts’ at the outset and is intentional about their purpose (Arthars et al., 2019; Lim et al., 2019). (Both) Ensures that feedback information is available in time for subsequent tasks—sends students prompts about tasks they have not yet completed (Lim et al., 2020). (Both) Sequences feedback events to maximise their influence on student learning—e.g., sets feedback messages on performance on weekly preparatory tasks, or sets feedback messages to prompt students to access assessment-related resources to avoid late submissions (Lim et al., 2020)
ECoach: Organises timing of feedback—instructors and the behavioural science team generate a to-do list to help keep students organised and motivated, with nudges about when to study or use ECoach resources (Matz et al., 2021)
AcaWriter: Organises timing and location of feedback events—instructors direct students to AcaWriter for obtaining feedback, timed to assignments that need to be submitted (Shibani et al., 2020)

10. Designs for feedback dialogues and cycles
OnTask & SRES: (Both) Designs for feedback loops—e.g., students obtain weekly feedback on their mastery of the weekly topic; they can then review these topics for exam preparations (Lim et al., 2020). Instructors use SRES to provide more immediate feedback so that students can use that feedback to improve on their next submission task (Arthars, 2019). (OnTask) Instructors design feedback as an inherent part of the flipped learning cycle, where students do their weekly preparatory work and get feedback on their progress and performance on each iteration (Lim et al., 2020). (OnTask) Stages tasks to maximise effects of feedback information—e.g., in a Foundation course students get feedback on their task completions that will enable them to complete the final assessment (Lim et al., 2019). (SRES) Designing online materials for student engagement—in the first 3 weeks of the course, students are required to use the course LMS to complete 2 short participation activities and a quiz to elicit ‘active interaction’; students who have not completed these are given a feedback nudge (Blumenstein et al., 2019)
ECoach: Designs for feedback loops—instructors use the to-do list feature to provide tailored feedback about study over the course. ECoach enables a dialogic process in that students are invited to input their academic habits, emotions, concerns, motivations once a semester (Matz et al., 2021)
AcaWriter: Designs for feedback loops—instructors direct students to use AcaWriter through cycles of requesting feedback and revising drafts, as many times as they like before submission. Designs for ‘intermediate’ feedback—instructors use AcaWriter as part of in-class work (Shibani et al., 2020)

11. Constructs and implements tasks and accompanying feedback processes
OnTask & SRES: –
ECoach: Instructors work with the behavioural sciences team to generate a to-do list, which students can use to assess their own progress (Matz et al., 2021). Exam reflections prompt students to reflect on expected and actual exam performance; students get personalised feedback with resources and advice based on their reflection (Matz et al., 2021)
AcaWriter: Designs intermediate feedback tasks to enable students to self-assess before input from teachers—encourages students to reflect on AcaWriter feedback, to know how to improve drafts before submission (Shibani et al., 2022). Undertakes in-class discussions about feedback—builds in discussion times about AcaWriter feedback (Knight et al., 2020). Sources and deploys a wide range of exemplars to demonstrate features of good work—instructors discuss high and low quality exemplars and run them through AcaWriter (Shibani et al., 2020)

12. Frames feedback information in relation to standards and criteria
OnTask & SRES: (SRES) Frames feedback information in relation to standards and criteria—instructors can build forms in SRES for rubrics-based marking and feedback on in-class assessments (Arthars & Liu, 2020)
ECoach: Frames feedback information in relation to standards and criteria—the Exam Reflections tool prompts students to reflect on their performance on exams; feedback is personalised based on whether students were below or above the grade threshold set by instructors (Matz et al., 2021)
AcaWriter: Frames feedback information in relation to standards and criteria—by co-working with AcaWriter researchers, teachers explicitly connect AcaWriter feedback to assessment rubrics (Shibani et al., 2020)

13. Manages tensions between feedback and grading
OnTask & SRES: (Both) Distinguishing between feedback information and grade justification and deploying each appropriately—as feedback is based on activity and performance data, instead of being used to justify grades, instructors focus on communicating feedback as support and fostering of self-regulated learning to improve future performance (e.g., Lim et al., 2020). Instructors also used SRES to provide prompt feedback on assessments, e.g., through the use of rubrics, but this is recognised to be a separate process (Arthars et al., 2019)
ECoach: Feedback messages in ECoach focus more on supporting students in their learning in large gateway courses, rather than on feedback for grading (Matz et al., 2021)
AcaWriter: Distinguishing between feedback information and grade justification and deploying each appropriately—instructors emphasise that AcaWriter does not grade the assignment, but that it is for formative feedback to improve the draft toward final submission (Knight et al., 2020)

14. Utilises technological aids to feedback as appropriate
OnTask & SRES: (Both) Instructors use these systems to support the logistics of feedback, such as enabling personalised feedback at scale (Lim et al., 2020), or to record attendance to personalise feedback at scale, as well as to facilitate grading and feedback on performance (Arthars et al., 2019)
ECoach: Instructors select the specific tool(s) in ECoach to use, to support the logistics of feedback, such as enabling tailored feedback according to students’ characteristics (Matz et al., 2021)
AcaWriter: Instructors implement AcaWriter to support the logistics of feedback, such as enabling students to request instant, formative feedback on their writing whenever they wish (Shibani et al., 2020)

15. Designs to intentionally prompt student action
OnTask & SRES: (Both) Provides persuasive rationales for the importance of student actions in feedback processes. Instructors design to nudge behaviour—instructors use OnTask for personalised ‘nudge’ feedback to prompt students to complete course tasks (Lim et al., 2021a). Instructors may also design actionable elements within feedback messages that lead students directly to the required task (Iraj et al., 2020). Instructors use SRES for personalised ‘nudge’ feedback to prompt students to complete course tasks (Blumenstein et al., 2019)
ECoach: Provides persuasive rationales for the importance of student actions in feedback processes. Instructors design to nudge behaviour—instructors select activities for personalised ‘nudge’ feedback to prompt students to complete course tasks (Matz et al., 2021)
AcaWriter: Instructors implement AcaWriter to make feedback visible to students, highlighting areas where they can improve their writing (Shibani et al., 2019)

16. Designs feedback processes that involve peers and others
OnTask & SRES: (SRES) Facilitates students to engage in peer feedback processes—instructors using SRES set up peer assessment feedback to be shared directly with students on their web portals (Arthars & Liu, 2020)
ECoach: –
AcaWriter: Facilitates students to engage in peer feedback processes—when implementing AcaWriter in their courses, instructors incorporate discussion about feedback with peers in class, so that students can see the relevance of the feedback they receive in AcaWriter (Knight et al., 2020)

Micro

17. Identifies and responds to student needs
OnTask & SRES: This is a general teacher competency that all automated feedback tools support, as detailed under the sub-headings below
ECoach: –
AcaWriter: Instructors fine-tune their comments to individual student needs—instructors work with researchers to tune AcaWriter feedback specific to the assignment, highlighting what students need (Shibani et al., 2019)

18. Crafts appropriate inputs to students
OnTask & SRES: (Both) Provides comments that identify needed improvements—instructors craft feedback messages to correct, affirm and motivate (Arthars et al., 2019; Lim et al., 2020)
ECoach: Provides comments that identify needed improvements—behavioural scientists work with instructors to craft feedback messages in the Messages tool in ECoach, to affirm and motivate students in achieving their stated goals (Matz et al., 2021)
AcaWriter: Provides comments that identify needed improvements—instructors work with researchers to craft AcaWriter feedback to correct and affirm students’ writing (Knight et al., 2020)

19. Differentiates between varying student needs
OnTask & SRES: (Both) Provides differentiated feedback support to different groups of students—instructors analyse course analytics to group students into bands of engagement, for example with login frequencies, and craft personalised feedback for students in each band (Blumenstein et al., 2019; Pardo et al., 2019). (OnTask) Seeks to engage difficult to involve/marginal/excluded students—instructors use OnTask feedback messages to support students in foundational courses, who tend to have less experience with formal education and have less confidence in their ability to succeed in higher education (Lim et al., 2019)
ECoach: Provides differentiated feedback support to different groups of students—ECoach messages are personalised to students’ learning profiles as identified through their entry and ongoing survey inputs (Matz et al., 2021)
AcaWriter: –

Note a: Since OnTask and SRES have very similar core functionality they are combined to avoid creating two similar columns. However, as discussed in the paper, and noted in those entries, SRES offers important additional capabilities that educators have leveraged

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Reprints and permissions

About this article


Cite this article

Buckingham Shum, S., Lim, LA., Boud, D. et al. A comparative analysis of the skilled use of automated feedback tools through the lens of teacher feedback literacy. Int J Educ Technol High Educ 20, 40 (2023). https://doi.org/10.1186/s41239-023-00410-9


Keywords