Skip to main content
  • Research article
  • Open access
  • Published:

Scalable authentic assessment of collaborative work assignments in wikis

Abstract

Wikis are appropriate tools for deploying authentic assessment experiences for learning and work scenarios in which a group of users are asked to develop a shared task. However, when the number of wiki users increases, the number of contributions can grow at a pace whereby accurately assessing them becomes a complex and non-scalable task. While different quantitative approaches have been shown to be scalable, they are usually coarse-grained and provide limited feedback about the assessment. This work proposes a scalable assessment methodology for wiki-based tasks, based on qualitative self- and peer assessment of wiki contributions. The methodology is implemented using a software tool and is applied as part of an undergraduate course, complementing a quantitative assessment approach. Positive evidence on the scalability of the method and how it implements a more fine-grained qualitative assessment than the regular quantitative approach is found, providing indicators for assessing both individual and group generic skills.

Introduction

The use of wikis for computer-supported collaborative learning experiences provides a number of advantages over traditional approaches (Elgort, Smith, & Toland, 2008). An interesting feature of wikis is that they keep track not only of the final version of the document, but also of all the intermediate versions that result from the numerous contributions made by each user (Trentin, 2009). As a result, wikis yield a considerable number of indicators that can be used to assess different skills (Ortega Valiente & Reinoso Peinado, 2011), and collaborative wiki assignments can be assessed in terms not only of the final version of the deliverable product, but also each author’s contributions and the group dynamics within the timeline of the document’s creation. Unfortunately, conducting a detailed assessment of each and every wiki contribution can often become too complex, due to the large amount of information this entails, leading to scalability issues. Scalability is the capability of a system or process to handle a growing amount of work, or its potential to be enlarged to accommodate that growth (Bondi, 2000). In particular, an assessment of all contributions made does not scale well as the number of users and their interactions with the wiki increases (Boud & Soler, 2015; Hatzipanagos & Warburton, 2009).

This article draws attention to the ways in which wikis are useful for assessment and aims at engaging students in tasks that are analogous to the kinds of problems faced by professionals in the field (Ashford-Rowe, Herrington, & Brown, 2014; Gulikers, Bastiaens, & Kirschner, 2004; Herrington & Herrington, 2006). Communication and collaborative tools which enable users to participate in the process of knowledge-building are integrated in many IT companies, where such tools are used for knowledge management (Sousa, Aparicio, & Costa, 2010). In fact, wikis are typically created as part of the information ecosystem of hosting organizations (Díaz & Puente, 2012). Thus, students need to be able to use these tools in order to simulate the authentic structures of work practices in disciplines as diverse as health, economics and software engineering (Minocha, Petre, & Roberts, 2008).

This work proposes a methodology for conducting a scalable qualitative assessment of wiki assignments. To support the application of the methodology, a software tool named AssessMediaWiki (AMW) was specifically developed. AMW was designed to implement a scalable fine-grained qualitative assessment method based on the self- and peer-assessment of wiki contributions using rubrics, a well-known assessment instrument (Florian-Gaviria, Glahn, & Fabregat Gesa, 2013). In addition, this method provides students with evidence of grades received and formative feedback (Shute, 2008).

This methodology has been applied within an Action Research (AR) (Runeson & Höst, 2009) conducted as part of an undergraduate course on the administration of operating systems for the assessment of participants’ generic skills. Generic skills are relevant and valuable abilities across various areas of life, and graduates are required be competent in these (Llorens, Llinàs-Audet, Ras, & Chiaramonte, 2013). Skills such as motivating people and moving toward common goals, working within a team or planning and time management are developed by students when they collaborate in undertaking a collective task (Macdonald, 2003).

Motivation

Wikis have long been incorporated as a supporting tool for learning processes (Parker, & Chao, 2007). One of their most important features is the environment they provide for their users to collaborate, easing the creation of content and giving students a tool that supports asynchronously working from different locations. This work tries to take advantage of the collaborative work performed in a wiki environment to assess students’ performance in generic skills. Previous works tried to perform similar assessments with automated tools that generate relevant quantitative information (Balderas, de-la-Fuente-Valentin, Ortega-Gomez, Dodero, & Burgos, 2018; Díaz & Puente, 2012; Ortega, González-Barahona, & Robles, 2007; Palomo-Duarte et al., 2014), but this work is motivated by the lack of a qualitative approach.

Learning experiences only take advantage partly of wikis potential, since their value for learning should not be just the resulting final work of the collaborative work performed, but the way in which it has been developed until that final result has actually been delivered. All the information on the development of wiki content is reflected in the individual contributions, and this information can provide teachers with a relevant evidence for the assessment of students’ performance.

From this motivation derives the research question of this work: Does a methodology based on the peer- and self-assessment of students’ wiki contributions provide a scalable authentic assessment of their collaborative skills?

Key terms and definitions

In this article, terms related to wikis, assessment and scalability are widely used. The purpose of this section to clarify their meaning throughout the manuscript.

Wikis are sets of linked-web pages, created through the incremental development by groups of collaborating users (Leuf & Cunningham, 2001). The content provided by each user to a wiki page is known as contribution. Each contribution generates a new version of a page. Since every version of the page is stored, wiki contribution is defined as the difference between two consecutive versions of a wiki page. The aim of this work is the assessment of students’ work in the wiki through their contributions.

An assessment can comprise a wide range of methods for evaluating students’ performance and attainment, including formal testing and examinations, practical and oral assessments and classroom-based assessment (Brown, 2014; Brown & Pickford, 2006). Additionally, it must provide informative feedback to students during instruction and learning so that their practice of a skill and its acquisition will be effective and efficient. (Committee on the Foundations of Assessment et al., 2001). This work uses the term assessment to refer to judgement of students’ work made by the teacher and/or the students themselves (self and peer assessment). The different assessment procedures and instruments used will be explained in detail throughout the document. In these instruments, the term grade will be used when a symbol is used to rate a student’s achievement, while the term mark will be used when the students’ achievement is represented as a number in an interval.

Obviously, a detailed assessment of students’ work in a wiki through their contributions will imply a considerable increase in the workload of a teacher. Thus, scalability issues will arise, since a teacher will hardly be able to handle this growing amount of work.

The rest of this article is organized as follows: Literature review section describes the foundations of this work; Empirical work section introduces the empirical work; Discussion section discusses the findings of the work; and finally, Conclusion section presents the conclusions and an outline of future work in this area.

Literature review

The approach of assessment as learning and empowerment is based on three central challenges (Rodríguez-Gómez & Ibarra-Sáiz, 2015). Firstly, it implies the involvement of students in assessing their own learning in a way that is transparent and that encourages dialogue by assessment modalities such as self and peer assessment. Secondly it incorporates feedforward, defined as strategies and comments that provide information about the results of assessment in a way that enables students to take a proactive approach to making progress. Finally, it is the design of high quality assessment tasks. It is vital that assignments to be assessed are demanding, meaningful and authentic. Such high quality assessment tasks will also demand that students engage in reflexive and analytical thought processes.

Mueller (2005) defines the following attributes of authentic assessment approaches.

  • Performing a task: Authentic assessments ask students to demonstrate their understanding by performing a relatively complex task which is representative of a more meaningful application.

  • Devised in real life: As is common in real-life situations, authentic assessments ask students to demonstrate proficiency by carrying out a particular task.

  • Construction and application of knowledge: Authentic assessments ask students to construct a product or performance out of facts, ideas and propositions, so that students are invited to analyze, synthesize and apply what they have learned and create new meaning in the process.

  • Student-structured: Authentic assessments support for greater student choice in the construction of determining what is presented as evidence of proficiency, generally through multiple acceptable routes towards constructing a product or performance.

  • Direct evidence: Authentic assessments offer more direct evidence of the application and construction of knowledge than traditional tests.

Authentic assessments champion two main features of new generation assessment methodologies, namely their alignment with learning outcomes and the embedding of assessment activities within the learning flow (Biggs, 2015; Biggs, Tang, & Society for Research into Higher Education, 2011). Learning outcomes define the skills and knowledge that students will acquire and how these will be applied. The design of authentic assessment experiences must be fundamentally aligned with the intended learning outcomes. In addition, the definition of authentic assessment experiences generally reverses the traditional approach to defining the learning flow; in traditional assessment models, the curriculum drives the assessment, while in authentic assessment models, assessment drives the curriculum (Mueller, 2005). Thus, embedding or integrating assessments into the learning flow is a natural process. In summary, the alignment of learning outcomes, learning activities and assessments is central to authentic assessment experiences.

In a wiki, students can participate in an authentic assessment experience. Wikis, as shared digital artefacts, enable users to participate in the process of knowledge building (Moskaliuk, Kimmerle, & Cress, 2012). They are web applications for which the content is collaboratively added to, updated, and organized by users (Mitchell, 2006). Wikis are used in a wide variety of contexts to facilitate interaction and cooperation in projects at various scales: educational (Cole, 2009), organizational (Lykourentzou, Papadaki, Vergados, Polemi, & Loumos, 2010), architectural (Jackson, 2009) and general purpose (Aronsson, 2002), among others. Nowadays, citizens need to know how to manage knowledge, i.e., how to access, create, interpret and distribute knowledge. We live in a knowledge-based society; governments increasingly characterize the societies over which they preside as ‘knowledge societies’, in which knowledge is the primary driver of national and international economic and social prosperity (Henkel, 2007). Thus, if students work on a wiki, they are required to collaborate in order to provide knowledge, in the same way as required by today’s society. Additionally, they must assess both their own contributions and those of their peers. Peer and self-assessment facilitate the acquisition and development of generic skills (Ibarra Saiz, Rodriguez Gomez, & Gomez Ruiz, 2012), and have shown positive formative effects on students’ achievement and attitudes, enabling students to evaluate themselves in relation to the performance of their classmates (Gielen, Dochy, & Onghena, 2011). This can generate debate and an interchange of opinions or ideas. In this way, students can be participants in an authentic assessment experience, equivalent to those they will meet in real life.

A wiki supports a massive collaboration process, whereby users located in different places can modify the same website. In this context, wikis host the dynamic, real-time teacher-to-student and student-to-student interactions that are required in collaborative learning experiences (Jaksch, Kepp, & Womser-Hacker, 2008).

In collaborative work, the dilemma of conducting group or individual assessments often arises (Bocconi & Trentin, 2012). Following the mainstream approach described in the literature (Dillenbourg, 1999; Fountain, 2005), the collaborative work developed by students in this study is assessed through group rather than individual work. However, an individual assessment is required regarding students’ performance in generic skills throughout this collaborative work, since the ability to perform generic skills is a characteristic of each individual (Spencer & Spencer, 1993).

In the literature regarding the assessment of wiki assignments, two approaches can be found: quantitative approaches measure users’ contributions or created content (Ortega et al., 2007), while qualitative approaches are based on a content analysis of users’ contributions (Su & Beaumont, 2010).

Quantitative assessment usually takes into account the number, time and size of wiki contributions. StatMediaWiki (SMW) is an analysis tool that assists the teacher in monitoring a wiki evolution and assessing several skills related to the developed work, e.g. whether all the users of a team collaborating on a wiki page contributed a similar amount of work, or how the students worked throughout the entire period of time, i.e., whether they created the page progressively according to a balanced work plan or instead completed all the work after the deadline (Palomo-Duarte et al., 2014).

Other tools are available for performing a quantitative analysis of collaborative work on wikis. WikiXRay (Ortega et al., 2007) is a scripting toolset for quantitative analysis that uses the database dumps of a MediaWiki website, which must be provided by the wiki administrator. It generates diverse statistics and graphics, new instances of which can be created to obtain customized output. HistoryFlow (Viégas, Wattenberg, & Dave, 2004) is a data analysis tool that retrieves the history of a single wiki page. It shows the changes in each version of the page in graphical format, with a higher level of detail than the usual information given in MediaWiki. Different aspects of authorship can be highlighted, such as contributions from all authors, contributions from a single author or new contributions from any author. Unfortunately, this tool only provides single-page reports, and no by-user information is provided. In the same way as Chronogram (Wattenberg, Viégas, & Hollenbach, 2007), it has been effective in quantitatively detecting edition patterns and reactive behaviours in Wikipedia.

It is undeniable that qualitative assessment, as carried out by teachers, can yield more significant results than quantitative approaches; however, it poses issues of scalability (Benlloch et al., 2012; Lacuesta, Palacios, & Fernández, 2009). In order to assess the quality of the contributions made by each student to a wiki page, revisions made to the page must be taken into account. If the assessment of a single page takes a considerable time, assessing each page revision may exponentially increase the effort of assessment.

A proposal for the automatic evaluation of wiki contributions based on heuristics is presented in (Arevalillo-Herráez, Perez-Muñoz, & Ezbakhe, 2010) in order to address these scalability issues. In particular, contributions are appraised more highly if their content does not vary over a long period of time. The time for which a contribution remains on a page can be used as an indicator of quality; however, we propose an assessment of students based on a greater number of indicators. In this work, we aim to create a peer-to-peer qualitative assessment process which addresses these scalability issues.

Three peer assessment scenarios were carried out in (De Wever, Van Keer, Schellens, & Valcke, 2011) using a web-based peer assessment form, which describes a case study integrating intra-group peer assessment in a wiki environment in a higher education setting with more than 300 students. However, De Wevers’ work did not focus on scalability, but on demonstrating the reliability of intra-group peer assessment in a wiki-environment. Additionally, De Wever’s work did not considered a detailed assessment of students’ contributions, but students had to assess the overall work of their mates.

Unfortunately, these approaches lack of a solution to address the issue of scalability when it comes to performing a qualitative assessment of wiki contributions. The empirical work presented in this work presents a qualitative assessment framework to conduct the computer-supported peer-assessment of students’ wiki contributions. This way, the assessment is based on evidences and provide feedback to the students assessed. Finally, the information collected is used to measure students’ performance in several generic skills.

Empirical work

The empirical work was conducted following an AR methodology. The purpose of an AR methodology is to influence or change some aspect of whatever is the focus of the research (Robson & McCartan, 2016), trying to improve a certain aspect of the studied phenomenon, in this case the assessment of wiki contributions. According to Oates (2006), the main features of this methodology were as follows:

  • Concentration on practical issues: an authentic assessment of collaborative work in a wiki-environment.

  • An iterative cycle of plan-act-reflect: several iterations were performed following the scheme shown in Fig. 1. The first iteration comprised a quantitative analysis through SMW (a tool introduced in Literature review section), and a second comprised the application of the proposed methodological framework in this work and AMW. In some cases, additional iterations were applied to refine indicators or assess new skills.

  • An emphasis on change: improving previous experiments assessing generic skills based only on quantitative data.

  • Collaboration with practitioners: people working in the situation under study, i.e. students working in their wiki assignments.

  • Multiple data generation methods: both quantitative and qualitative data.

  • Action outcomes plus research outcomes. AR outcomes relate to both “action” and “research”:

    1. a.

      Action: practical achievements in the problem situation, i.e. the assessment of students’ skills by their wiki contributions.

    2. b.

      Research: learning about the processes of problem solving and action in a situation, i.e. the refinement of indicators using quantitative and qualitative data and the proposed framework.

Fig. 1
figure 1

Iterative process for assessing the wiki assignments

Context

The empirical work was conducted in the University of Cadiz (Spain), involving a final year course on Operating Systems Administration within a Computer Science degree course on which 43 students were enrolled. The course was coordinated by one of the AMW project members and an author of this article. Learning activities within the course included the development of a wiki-based project, the assessment of which was carried out following the approach proposed here.

In order to develop an authentic assessment experience, the assignment tasks consisted of planning and managing the actual migration process of an enterprise information system. Firstly, the original system was required to consider legacy issues of a system that had been running for certain time, a common task in sysadmin professional role (Brodie & Stonebraker, 1995). Secondly, students were required to use some virtualization (Pearce, Zeadally, & Hunt, 2013) or cloud solution (Bhopale, 2013), two widely demanded technological solution in today’s Information Technology world. Students were divided into thirteen groups of three members and two groups of two members. Each group was required to write its project documentation in a wiki page (the wiki assignment).

Besides, students knew in advance assessment criteria (described in detail in Qualitative assessment workflow section). In particular, two of part of their marks, “A. Team work skills” and “B. Communication and knowledge management” were share by all team members. This moved them toward common goals as a team, and dissuaded students from dividing the project into independent task and work separately. Then, planning and time management was implemented in the mark “D. Final deliverable product”. Students had a deadline to write the assignment. If they finished later, their mark was capped.

The experience carried out during the course involved three stages, which will be described in the approach subsection:

  1. 1.

    Development of the wiki assignment: This stage began with a seminar, in which students were instructed on how to work collaboratively on a MediaWiki wiki. Students were responsible for planning and managing their assignment, coordinating its tasks and working collaboratively. The wiki was publicly available, and over the six weeks of the project, students made more than 1400 wiki contributions.

  2. 2.

    Peer- and Self-Assessment of wiki contributions: This stage started with a seminar to teach students how to peer-assess wiki contributions using AMW. Following this, students made 412 qualitative assessments of wiki contributions. This process provided students with critical feedback about their work. Students were also able to respond to an assessment if they disagreed with the mark received.

  3. 3.

    Teacher refereeing: The teacher conducted a mixed quantitative/qualitative assessment over multiple iterations. The quantitative assessment was made using the information provided by SMW, while the qualitative assessment was supported by AMW.

Approach

The research question of this work is the following: “Does a methodology based on the peer- and self-assessment of students’ wiki contributions provide a scalable authentic assessment of their collaborative skills?” From this research question derives the main goals of this work: firstly to provide teachers with a fine-grained assessment of students’ collaborative work on a wiki by engaging students in an authentic assessment experience, and secondly, to achieve the first goal without losing sight of the non-functional requirements of scalability for the entire process.

These goals motivate the design of the methodological framework and also guide the development of the software artefact (the AMW tool) that supports it. This section is divided into two subsections: firstly, the proposed workflow is presented, and secondly, the AMW tool is described.

Qualitative assessment workflow

In this subsection, the methodology for the qualitative assessment of students’ wiki contributions within the context of a wiki assignment is detailed. It consists of three consecutive stages (Fig. 2). During the first stage, each group of students are required to develop their wiki assignment via the collaborative creation of content on the wiki page of their project. The second stage involves the assessment of wiki contributions through a self- and peer-evaluation process. Finally, the third stage is for the refereeing of the teacher, in which he/she performs the following activities: assessment of the final wiki page, resolution of peer assessment replied, review of other peer-assessment not replied and assessment of wiki contributions. These stages are described below.

Fig. 2
figure 2

Stages of the qualitative assessment workflow

Stage 1: Development of the wiki assignment

Throughout this stage, each group of students are required to develop their work within the wiki. A series of wiki contributions to an initially empty wiki page is represented in Fig. 3. The author of each wiki contribution is indicated below the arrow representing his/her wiki contribution (between the previous version and the new wiki page). In Fig. 3, R1 represents the first version of a wiki page. Then, a first student makes a contribution, which generates a second version of the wiki page (R2). Later, a second user makes another contribution resulting in a new version (R3). This stage ends when the deadline for the wiki assignment is reached. At this point, the wiki pages are ready to be assessed (the final version of the wiki page, labelled Rf). Although a group of students is responsible for the wiki page (the group of students below the last version), other students on the course may contribute to the wiki page. In this case, the group members decide whether the wiki contribution deserves to be kept, modified, removed or even reported to the teacher (if it is intentionally wrong) in subsequent wiki contributions.

Fig. 3
figure 3

Development of the wiki assignment as a result of the students’ contributions

Stage 2: Peer- and self-assessment of wiki contributions

This stage comprises the following activities:

  • Peer and self-assessment: students conduct peer and self-assessments of wiki contributions using a rubric defined by the teacher. The rubric had several criteria. Each assessment comprises a mark and a comment for each criteria in the rubric. From now on, the descriptor for this assessment will be peer-assessment, as peer and self-assessments are processed in the same way. Each peer-assessment refers only to a wiki contribution made by a single participant, and can therefore be used as a reliable indicator of the actual individual student contribution to the wiki. The student to which the peer-assessment refers is represented in the upper left-hand corner of the assessment report (i.e., the filled rubric) in Fig. 4.

    For example, in the peer-assessment illustrated in Fig. 4, the student represented by the striped figure receives the task of assessing a wiki contribution (1). The student checks the resulting wiki page, an overview of the changes between the current revision and the previous one (2). Using this information, the student assesses the wiki contribution filling the rubric, resulting in an assessment report including detailed feedback, i.e., the peer-assessment (3).

  • Checking assessments: students can check the peer-assessments received, and can see not only the marks received with their comments, but also the link to the wiki contributions to which these refer. In this way, the evidence from the peer-assessment provides formative feedback to the student. The peer-assessment in Fig. 4 shows how the assessment report is available to the assessed student, providing evidence of the assessment (4). The assessor’s identity is anonymized.

  • Replying: students may reply to any peer-assessment received with which they disagree. Using the same rubric, they must explain the reason for their disagreement. In Fig. 4, the assessed student considers that the peer-assessment is unfair and reports it to the teacher (5). The teacher receives this report notification in the subsequent stage, and referees it.

Fig. 4
figure 4

Assessment stage (student-assessor)

An interesting issue within the methodology is the question of which wiki contributions are assigned to be assessed by each student. The methodology needs a selection function that chooses relevant contributions to be assessed. The relevance of the assessment of each wiki contribution may vary. For example, wiki contributions affecting a large amount of text may be more interesting than shorter ones. In addition, shorter wiki contributions which only add or remove negative terms may be interesting, since they change the sense of a phrase or paragraph. Even other actions, such as the inclusion of images, may be evidence of the interest of the wiki contribution. Students may also recommend their own wiki contributions as interesting for assessment if they wish, even if they are discarded by the selection function.

Stage 3: Teacher refereeing

This stage comprises three activities: assessment of the final wiki version, resolution of the replies received and, if desired, a review of other assessments not replied.

  • Assessment of the final wiki page: the teacher assesses the final version of the wiki pages developed by each group of students. This global assessment is necessary since the actual aim of the task is to produce a good final document for the wiki page. As in any other assignment, it must be assessed by the teacher according to the course syllabus. Furthermore, certain assessment criteria can only be evaluated in the final version of the page, such as the coherence of the text. From now on, the descriptor for this assessment will be final-assessment.

  • Resolution of peer-assessments replied: The teacher resolves the replies, indicating whether they are appropriate or not. If alterations are approved, the relevant grades are modified.

  • Review of other peer-assessments not replied: The teacher may review a certain number of random peer-assessments even if they were not replied; any marks considered to be wrong are corrected.

  • Assessment of wiki contributions: If required, the teacher can assess any other wiki contribution.

The information on the reply resolution and other assessments reviewed by the teacher is available anonymously to the students involved (both to the assessors and those assessed).

AssessMediaWiki tool

The qualitative assessment workflow introduced is technologically supported by AMW (anonymized reference). AMW is an open-source web application that, when connected to a MediaWiki installation, enables hetero-, self- and peer-to-peer assessment procedures, while keeping track of the compiled assessment data. In this way, teachers can obtain reports which support the student assessment process. The main features of the application are:

  • User roles: AMW includes two different user roles: teacher and student. Students can choose three options: assessment of a wiki contribution using the rubric (peer-assessment), a check of their assessed wiki contributions and a review of the wiki contributions that they have assessed. To facilitate this assessment, AMW provides a link showing the differences that the contribution made to the wiki page. The teacher can define the assessment rubric, indicate the number of peer-assessments each student has to make and check the students’ peer-assessments.

  • Selection function: AMW implements a partially random selection function. When a student requests a wiki contribution for assessment, this is randomly chosen from the largest 30% of the contributions to the wiki which have not already been assessed.

  • Review and reply system: when checking peer-assessments, students can review the marks they have received and the feedback provided, and see the particular wiki contribution to which their mark refers. For instance, Fig. 5 shows the formative feedback that students receive for one of their assessed contribution (left screenshot) and the wiki contribution referred to (right screenshot). If a student does not agree with the mark received in a peer-assessment, they can respond. In this case, they are provided with the same rubric to indicate the criteria with which they disagree, add the mark they believe they deserve and explain the reason for this in a description field. Later, the teacher must check each case and decide whether or not to approve it. Both the peer-assessment and the reply form are anonymous for the students, although not for the teacher.

Fig. 5
figure 5

Formative feedback example (left) and wiki contribution assessed (right)

Outcomes

This subsection evaluates the achievement of the objectives posed in the previous subsection. Firstly, a detailed explanation is provided for how each skill is assessed in the authentic assessment method. Following this, the improvement in scalability is evaluated.

Scalability assessment experiment

The proposal for the skill assessment is detailed below and is summarized in Table 1. This proposal is based on the course syllabus. Depending on the specific wiki assignments and experiment settings, a teacher may use these indicators as proposed, adapt them for grading other skills, or define new ones.

Table 1 Summary of skills assessed and iterations carried out

A. Teamwork skill

This measures the ability of students to work collaboratively.

First iteration: teamwork skill were measured by examining the ratio of students who had contributed several times to the same wiki page in their project. Using SMW, the teacher can see whether the students have worked together by checking that all of them have contributed to the same wiki page. This criterion was based on a coarse-grained indicator, and was relatively easy for all students to achieve, even if they did not work as a team.

Second iteration: the teacher could also detect whether the students had actually collaborated, because they had contributed to the same criterion of the project. This dimension measures the criteria contributed by each user to the wiki page. The teacher considered that a student in a team contributed to a technical criterion (Cr1 to Cr10) of a project if that criterion is assessed on the rubric of any of their assessed wiki contributions (through the peer-assessments received). In Table 2, the criteria assessed for each member of Project13 are shown. While User1 had five rated criteria, User2 and User3 were rated only on one. The only criterion worked on by more than one member (User1 and User2) was Cr8. Thus, in the teacher’s interpretation, these two students worked collaboratively, while User3 did not.

Table 2 Criteria contributed by each user to Project13

B. Communication and knowledge application skill

This measures the ability to apply knowledge within a practical situation and the ability to communicate with colleagues within the development of a project.

First iteration: the teacher assessed this skill according to the number of team members who contributed at least 20% to the final wiki page version byte count (for groups of three members). In Fig. 6 Work distribution chart of students of Project4 obtained via SMW, a work distribution chart is displayed, showing the ratio of total bytes contributed by each student of Project4. User13 (dark area) contributed 41.8%, User15 (light area) 27.8% and User14 (striped area) 30.4%. Since they all contributed more than the threshold, the teacher considered that they worked collaboratively to develop the project.

Fig. 6
figure 6

Work distribution chart of students of Project4 obtained via SMW

Second iteration this skill was assessed as the average of the marks received by students in a team. This indicator assesses the proficiency of the teamwork contributing to the project’s success. Average marks in this contribution may indicate poor project contributions or that a certain wiki contribution obtained good marks for some criteria and an average mark for others, indicating deficient communication between the team members or a limited commitment to the global aim. The mark is calculated using the formula SRAG/NRAG, where SRAG is the sum of the marks the students in a team received through the peer-assessment of their wiki contributions, and NRAG is the number of marks they received. The mark of Project4 (GRDG) is shown in Table 3.

Table 3 Marks for members of Project4

It should be noted that some of User15’s wiki contributions consisted in moving long pieces of text within the wiki page. Considering its quantitative value, SMW adds this to the student’s statistics, providing a limited assessment indicator. Although a detailed review may show that this provided a limited value to the project, a review of each wiki contribution would not be scalable. It can therefore be concluded that under the qualitative approach, this type of wiki contribution can be easily detected.

C. Individual and critical skills

This measures the ability to produce and maintain the quality of the project.

First iteration: the byte contribution timeline profile was measured. For example, the wiki contributions of User15 and User14 are shown respectively in Fig. 7. While both are stepwise profiles, they are not the same: User15 made all of these wiki contributions within just 10 days, while User14 worked for three weeks.

Fig. 7
figure 7

Content evolution chart for User15 (left) and User14 (right) obtained via SMW

Second iteration: the mark in this dimension (GRDs) is the average of the marks received for each student (through the peer-assessments), expressed by the formula SRAs / NRAs, where SRAs is the sum of marks received by each student and NRAs is the number of marks received by each student. The students’ marks for Project4 are shown in Table 3. Marks for members of Project4, showing a difference of three points out of 10 between User14 and User15, meaning that User15 contributed more to the quality of the project than User14. Again, the qualitative approach provides a more detailed indicator than the quantitative one.

Third iteration: in this case, a third iteration was deployed taking into account the replies that each student’s peer-assessment received in order to assess his/her critical thinking skills. As part of the instruction process, students received clear instructions on the peer-assessment task. Thus, their performance can be used as evidence for the skill of critically assessing their colleagues’ work. Each student started with 10 points in this dimension, and lost 2.5 points for each peer-assessment they made that was corrected by the teacher.

D. Final deliverable product

The final result of a project was assessed as for an enterprise project, where the result must meet stakeholders’ requirements. In this way, all wiki contributions which are not assessed will also be implicitly considered as a whole.

In this approach, the wiki assignment of each group had its final-assessment following the rubric defined by the teacher (Table 4). It has a final-mark which ranged between 0 and 10 that was calculated summing the criteria of the rubric. These criteria were assessed by the teacher once the deadline had been reached. The final-mark was the same for all the students in each team, as all of them (as a team) were responsible for the final result of the project.

Table 4 Marks the teacher gave to the Project13

Scalability

One of the objectives of this experience was to perform a qualitative assessment of the students’ work on the wiki by assessing their wiki contributions. In previous experiments, the teacher did not consider the qualitative assessment of wiki contributions since this would take too long. From a theoretical point of view, it can be considered that the amount of time required to assess a wiki contribution i T = t_page. Thus, the time required to assess a number n of wiki contributions is T = n* t_page.

As mentioned above, students made more than 1400 wiki contributions within their 15 wiki pages. The teacher assessed the final version of each page, i.e., the theoretical time required to assess these was T = 15*t_page. Due to the peer and self-assessment stage, the teacher received 412 extra qualitative assessments of wiki contributions. Thus, the teacher had 427 qualitative assessments (412 performed by students and 15 by the teacher); however, the time required for these was the time required by the teacher to perform the assessments of the 15 final version of the wiki pages (T = 15*t_page).

Therefore, the time required by a teacher to assess 427 wiki contributions is T = 427*t_page. This is more than 3000% of the time required following the qualitative approach presented in this methodological framework.

Discussion

Firstly, this section compares the methodological framework presented in this article with several works that also use the peer and self-assessment approach. Secondly, the evaluation of the qualitative approach is discussed and compared with the quantitative one.

Peer and self-assessment review

Peer and self-assessment have received attention within higher education (Falchikov & Goldfinch, 2000; Gielen et al., 2011; Ibarra-Sáiz & Rodríguez-Gómez, 2017). Several proposals for self and peer-assessment considered when developing this methodological framework are described below.

An online collaborative learning environment for facilitating peer assessment is introduced by Xiao & Lucking (2008). The proposal stimulates interaction between students, helping them to improve their academic writing effectively, and also reducing administrative load. However, the authors remarked that both rating and the provision of qualitative feedback under anonymous conditions would be more effective in order to ensure a more objective and in-depth understanding of how and why students’ qualitative feedback impacts the writing performances of their peers. In the methodological framework presented in this article, peer-assessment is deployed under anonymous conditions for the students but not for the teacher.

In contrast to traditional teacher-based procedures of assessment, peer assessment requires students to be more actively implicated in their own learning. Three peer assessment scenarios were carried out by De Wever with more than 300 students (De Wever et al., 2011). In De Wever’s work, each student assessed his/her peers based on his/her perception during the collaboration phase, so only intra-group peer assessment could be conducted. Our work follows a different approach, based on wiki contributions (i.e based in work evidence instead of student’s perception). Therefore, each contribution could be potentially assess by any student, supporting a maximum of more than 60,000 peer-assess assignments. Particularly, in our study each student was asked to assess 10 wiki contributions (more than the 7 assessments made by each student in De Wevers’s experiment). Anyway, our tool AMW can be config to indicate a different number of wiki contributions each student had to assess.

Qualitative assessment discussion

Together with the empirical work, several comparisons have been presented between the quantitative approach and the qualitative one implemented in this research. A summary of the advantages of the qualitative approach is given below:

  • The qualitative approach supports a more fine-grained analysis of collaborative work than the analysis performed in previous quantitative iterations. It supports an examination of how students work as a team on their wiki page using evidence that they have worked on the same aspect (when the same criterion is assessed). Additionally, it supports the teacher to evaluate contributions on the quality of their content rather than on their quantity.

  • This approach supports the detection of wiki contributions that simply copy and paste large pieces of text without actually improving the wiki page, which otherwise may have received an undeservedly good quantitative assessment.

  • All contributions to other wiki pages can be included in a student’s individual mark, although these wiki contributions are considered more coarse-grained in the quantitative approach.

  • The critical thinking ability of students can be trained and assessed through the response process.

The main disadvantages of the qualitative approach are the following:

  • The selection function must be carefully chosen, so that assessments can be conducted on significant contributions.

  • Reviewing all the peer-assessments performed by students is still not a scalable task for the teacher. Thus, some poor peer-assessments may be not detected by the teacher if they are not reported or randomly chosen for review.

In general, students performed fairly well in the wiki assignment (Table 5). All wiki assignments were finished before the deadline and therefore no capping was applied. We graded students calculating a weighted average of the previous marks: students with a mark of 9 or higher (out of 10) had a Distinction grade (D). Those with a mark of 7 or higher had a Credit grade (C). Those with a mark of 5 or higher had a Passed grade (P). And finally those with a mark lower than 5 Failed grade (F). Only three students (6.98%) failed the project evaluation. Moreover, 37 students (86.04%) earned a grade of C or D. The students received detailed feedback and evidence for each assessment received. The amount of peer assessment that each student received was significantly spread, with 22 being the average number of criteria assessed per student. More than 40% of students had less than 10 criteria assessed, while another 40% had between 10 and 29 criteria assessed, and less than 10% had 30 or more.

Table 5 Final assignment grades of the students

Moreover, the group’s self-organization, i.e., the role adopted by each member, could be also detected. This was achieved by aggregating the criteria for which each group member was assessed. In this way, groups in which each member focused on different criteria were easily identified. This was not necessarily negative, if the final version of the wiki page met the project’s purpose. However, in cases where it did not, this was evidence of an issue arising in the group’s internal dynamics; each member did their own work, and nobody paid attention to refining the individual contributions to produce a coherent deliverable product for the wiki. The talk pages can be analysed to see whether students communicated and, if so, to identify the role that each member played.

Regarding the validity of this methodology in massive online courses, several lines of evidence indicate that as the number of students grows, although it does not affect to students’ peer- and self-assessments, fewer revisions can be assumed by the teacher. However, other factors such as the high dropout rates in massive online courses can affect its implementation. For instance, it may occur that a given student’s contributions were not assessed because the automatically assigned evaluators dropped out the course (Onah, Sinclair, & Boyatt, 2014). Thus, further studies should be performed to validate the applicability of this methodology in massive courses.

Conclusion

Wikis are widely used nowadays both in academia and industry, especially since they facilitate collaboration between their users. Unfortunately, an assessment of the actual contribution of each user to a wiki is not a manageable task. In previous work, scalable quantitative assessments have been conducted. While easy to automate, the information provided was not fine-grained enough to measure certain skills accurately, and did not support for the measurement of others at all. In this article, a scalable methodological framework for conducting qualitative assessments of collaborative wiki assignments was introduced. It is supported by AMW, a software tool that supports anonymous self-, peer and hetero-assessment of wiki contributions. Additionally, it provides students with formative feedback as well as evidence of the assessments received, while preserving students’ right to respond to unfair assessments received. Both approaches provide the teacher with indicators for assessing students from different points of view.

In this work, the AR methodology is followed for the assessment of an authentic learning experience implemented in a wiki-environment. The students’ grades were refined due to qualitative information. For instance, the teacher was able to detect that some contributions from students who, according to SMW, contributed significantly to their project, in fact consisted solely of moving large pieces of text within the wiki. Furthermore, a number of indicators were refined in relation to the ability to work collaboratively. For example, by checking the criterion worked on by the different team members, the teacher could detect that in some groups there was no actual collaboration (although the previous coarse-grained approach provided evidence for this). This can be particularly significant in inter-project wiki contributions. Students’ contributions to other project pages may also be considered as evidence of cooperation between peers. Further investigation is needed to draw stronger conclusions on the validity of the information retrieved by AMW in assessing collaboration skills in other case studies.

In future experiments, the authors will encourage students to use wiki talk pages effectively to communicate and reflect on the requested assessments, so their dynamics can be studied, their organization analysed and the role each member played identified. Another approach which the authors are working on is the application of human learning interfaces (HLI) (Koper, 2014). Humans interact with the outside world through their senses (input) and their behaviour (output). If teachers were able to define accurate indicators of performance in certain skills, by means of system inputs and outputs, they would have a role model to imitate. Then, learners could be assessed according to the distance between this model and their actual interaction.

References

  • Arevalillo-Herráez, M., Perez-Muñoz, R., & Ezbakhe, Y. (2010). A wiki based system to produce high quality teaching materials. In 5th Iberian Conference on Information Systems and Technologies (CISTI), (pp. 1–4).

    Google Scholar 

  • Aronsson, L. (2002). Operation of a large scale, general purpose wiki website. In Proceedings of the 6th International ICCC/IFIP Conference on Electronic Publishing. ELPUB, (pp. 27–37). Karlovy Vary: VWF Berlin.

    Google Scholar 

  • Ashford-Rowe, K., Herrington, J., & Brown, C. (2014). Establishing the critical elements that determine authentic assessment. Assessment & Evaluation in Higher Education, 39(2), 205–222 https://doi.org/10.1080/02602938.2013.819566.

    Article  Google Scholar 

  • Balderas, A., de-la-Fuente-Valentin, L., Ortega-Gomez, M., Dodero, J. M., & Burgos, D. (2018). Learning management systems activity records for students’ assessment of generic skills. IEEE Access, 6, 15958–15968 https://doi.org/10.1109/ACCESS.2018.2816987.

  • Benlloch, J. V., Benet, G., Blanc, S., Gil, D., Busquets, J. V., Gil, P., … Albaladejo, J. (2012). Análisis de la implantación de la asignatura Tecnología de Computadores en el Grado de Ingeniería Informática. In 10th Congreso de Tecnologías Aplicadas en la Enseñanza de la Electrónica, (pp. 358–363).

    Google Scholar 

  • Bhopale, S. D. (2013). Cloud migration benefits and its challenges issue. IOSR Journal of Computer Engineering, 1(8), 40–45.

  • Biggs, J. (2015). Assessment in a constructively aligned system. In International Conference Assessment for Learning in Higher Education. Hong Kong: The University of Hong Kong. https://www.cetl.hku.hk/conf2015/assessment-in-a-constructively-aligned-system/.

  • Biggs, J. B., Tang, C. S., & Society for Research into Higher Education (2011). Teaching for quality learning at university: What the student does. Berkshire: McGraw-Hill/Society for Research into Higher Education/Open University Press.

  • Bocconi, S., & Trentin, G. (2012). Wiki supporting formal and informal learning. New York: Nova Science Publishers.

  • Bondi, A. B. (2000). Characteristics of scalability and their impact on performance. In Proceedings of the second international workshop on software and performance - WOSP ‘00 (pp. 195–203). New York: ACM Press. https://dl.acm.org/citation.cfm?id=350432

  • Boud, D., & Soler, R. (2015). Sustainable assessment revisited. Assessment & Evaluation in Higher Education, 0(0), 1–14.

    Google Scholar 

  • Brodie, M. L., & Stonebraker, M. (1995). Migrating legacy systems: gateways, interfaces & the incremental approach. Massachusetts: Morgan Kaufmann Publishers.

  • Brown, S. (2014). Learning, teaching and assessment in higher education: global perspectives. Basingstoke: Palgrave Macmillan.

  • Brown, S., & Pickford, R. (2006). Assessing skills and practice. Abingdon: Routledge.

  • Cole, M. (2009). Using Wiki technology to support student engagement: Lessons from the trenches. Computers & Education, 52(1), 141–146.

    Article  Google Scholar 

  • Committee on the Foundations of Assessment, Pellegrino, J. W., Chudowsky, N., Glaser, R., Board on Testing and Assessment, Center for Education, … National Research Council (U.S.) (2001). Knowing what students Know: The science and Design of Educational Assessment. Washington: National Academies Press.

  • De Wever, B., Van Keer, H., Schellens, T., & Valcke, M. (2011). Assessing collaboration in a wiki: the reliability of university students’ peer assessment. The Internet and Higher Education, 14(4), 201–206 https://doi.org/10.1016/j.iheduc.2011.07.003.

    Article  Google Scholar 

  • Díaz, O., & Puente, G. (2012). Wiki scaffolding: aligning wikis with the corporate strategy. Information Systems, 37(8), 737–752 https://doi.org/10.1016/j.is.2012.05.002.

    Article  Google Scholar 

  • Dillenbourg P. (1999) What do you mean by 'collaborative learning'?. In P. Dillenbourg (Ed) Collaborative Learning Cognitive and Computational Approaches, (pp.1–19). Oxford: Elsevier. https://telearn.archives-ouvertes.fr/hal-00190240/document.

  • Elgort, I., Smith, A. G., & Toland, J. (2008). Is wiki an effective platform for group course work? Australasian Journal of Educational Technology, 24(2), 195–210.

    Article  Google Scholar 

  • Falchikov, N., & Goldfinch, J. (2000). Student peer assessment in higher education: a meta-analysis comparing peer and teacher marks. Review of Educational Research, 70(3), 287–322.

    Article  Google Scholar 

  • Florian-Gaviria, B., Glahn, C., & Fabregat Gesa, R. (2013). A software suite for efficient use of the European qualifications framework in online and blended courses. IEEE Transactions on Learning Technologies, 6(3), 283–296 https://doi.org/10.1109/TLT.2013.18.

    Article  Google Scholar 

  • Fountain, R. (2005). Wiki pedagogy. Dossiers Pratiques. Profetic.

    Google Scholar 

  • Gielen, S., Dochy, F., & Onghena, P. (2011). An inventory of peer assessment diversity. Assessment & Evaluation in Higher Education, 36(2), 137–155 https://doi.org/10.1080/02602930903221444.

    Article  Google Scholar 

  • Gulikers, J. T. M., Bastiaens, T. J., & Kirschner, P. A. (2004). A five-dimensional framework for authentic assessment. Educational Technology Research and Development, 52(3), 67–86.

    Article  Google Scholar 

  • Hatzipanagos, S., & Warburton, S. (2009). Feedback as dialogue: Exploring the links between formative assessment and social software in distance learning. Learning, Media and Technology, 34(1), 45–59.

    Article  Google Scholar 

  • Henkel, M. (2007). Can academic autonomy survive in the knowledge society? A perspective from Britain. Higher Education Research & Development, 26(1), 87–99.

    Article  Google Scholar 

  • Herrington, J., & Herrington, A. (2006). Authentic conditions for authentic assessment: aligning task and assessment: In A. Bunker & I. Vardi (Eds.), Proceedings of the 2006 Annual International Conference of the Higher Education Research and Development Society of Australasia Inc (HERDSA): Critical Visions: Thinking, Learning and Researching in Higher Education: Research and Development in Higher Education, 29, 141–151. Milperra: HERDSA.

  • Ibarra Saiz, M. S., Rodriguez Gomez, G., & Gomez Ruiz, M. A. (2012). Benefits of peer assessment and strategies for its practice at university. Revista de Educación, 359, 206–231.

  • Ibarra-Sáiz, M. S., & Rodríguez-Gómez, G. (2017). EvalCOMIX® a web-based programme to support collaboration in assessment. In Smart Technology Applications in Business Environments, (pp. 249–275). IGI Global https://doi.org/10.4018/978-1-5225-2492-2.ch012.

  • Jackson, D. (2009). Wiki-tecture: The DRAPE artist residence and gallery. Journal of Architectural Education, 63(1), 97–106.

  • Jaksch, B., Kepp, S.-J., & Womser-Hacker, C. (2008). Integration of a wiki for collaborative knowledge development in an E-learning context for university teaching. In A. Holzinger (Ed.), HCI and Usability for Education and Work (Vol. 5298, pp. 77–96). Berlin, Heidelberg: Springer.

  • Koper, R. (2014). Conditions for effective smart learning environments. Smart Learning Environments, 1(1), 5.

  • Lacuesta, R., Palacios, G., & Fernández, L. (2009). Active learning through problem based learning methodology in engineering education. In 2009 39th IEEE Frontiers in Education Conference (FIE'09), (pp. 1–6).

  • Leuf, B., & Cunningham, W. (2001). The wiki way: quick collaboration on the web. Boston: Addison-Wesley Professional.

  • Llorens, A., Llinàs-Audet, X., Ras, A., & Chiaramonte, L. (2013). The ICT skills gap in Spain: industry expectations versus university preparation. Computer Applications in Engineering Education, 21(2), 256–264 https://doi.org/10.1002/cae.20467.

  • Lykourentzou, I., Papadaki, K., Vergados, D. J., Polemi, D., & Loumos, V. (2010). CorpWiki: A self-regulating wiki to promote corporate collective intelligence through expert peer matching. Information Sciences, 180(1), 18–38.

  • Macdonald, J. (2003). Assessing online collaborative learning: process and product. Computers & Education, 40(4), 377–391 https://doi.org/10.1016/S0360-1315(02)00168-9.

  • Minocha, S., Petre, M., & Roberts, D. (2008). Using wikis to simulate distributed requirements development in a software engineering course. International Journal of Engineering Education, 24(4), 689–704.

  • Mitchell, P. (2006). Wikis in education (Ed. Jane K).

  • Moskaliuk, J., Kimmerle, J., & Cress, U. (2012). Collaborative knowledge building with wikis: The impact of redundancy and polarity. Computers & Education, 58(4), 1049–1057.

  • Mueller, J. (2005). The authentic assessment toolbox: enhancing student learning through online faculty development. Journal of Online Learning and Teaching, 1(1), 1–7.

  • Oates, B. J. (2006). Action research. In Researching information systems and computing, (pp. 154–172). London: Sage Publications.

  • Onah, D. F. O., Sinclair, J., & Boyatt, R. (2014). Dropout rates of massive open online courses: behavioural patterns. In 6th International Conference on Education and New Learning Technologies (EDULEARN'14), (pp. 5825–5834).

  • Ortega, F., González-Barahona, J. M., & Robles, G. (2007). The top-ten Wikipedias: a quantitative analysis using WikiXRay. In ICSOFT (ISDM/EHST/DC), (pp. 46–53).

  • Ortega Valiente, J., & Reinoso Peinado, A. J. (2011). New educational approach based on the use of wiki platforms in university environments. In 2011 7th International Conference on Next Generation Web Services Practices (NWeSP), (pp. 280–284).

  • Palomo-Duarte, M., Dodero, J. M., García-Domínguez, A., Neira-Ayuso, P., Sales-Montes, N., Medina-Bulo, I., … Balderas, A. (2014). Scalability of assessments of wiki-based learning experiences in higher education. Computers in Human Behavior, 31(1) https://doi.org/10.1016/j.chb.2013.07.033.

  • Parker, K., & Chao, J. (2007). Wiki as a teaching tool. Interdisciplinary Journal of Knowledge and Learning Objects, 3(1), 57–72.

  • Pearce, M., Zeadally, S., & Hunt, R. (2013). Virtualization. ACM Computing Surveys, 45(2), 1–39 https://doi.org/10.1145/2431211.2431216.

  • Robson, C., & McCartan, K. (2016). Real world research: a resource for users of social research methods in applied settings, (4th ed.). Chichester: Wiley.

  • Rodríguez-Gómez, G., & Ibarra-Sáiz, M. S. (2015). Assessment as learning and empowerment: towards sustainable learning in higher education, (pp. 1–20). Cham: Springer https://doi.org/10.1007/978-3-319-10804-9_1.

  • Runeson, P., & Höst, M. (2009). Guidelines for conducting and reporting case study research in software engineering. Empirical Software Engineering, 14(2), 131–164 https://doi.org/10.1007/s10664-008-9102-8.

  • Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153–189.

  • Sousa, F., Aparicio, M., & Costa, C. J. (2010). Organizational wiki as a knowledge management tool. In Proceedings of the 28th ACM international conference on Design of Communication, (pp. 33–39). New York: ACM.

  • Spencer, L. M., & Spencer, S. M. (1993). Competence at work: models for superior performance. New York: Wiley.

  • Su, F., & Beaumont, C. (2010). Evaluating the use of a wiki for collaborative learning. Innovations in Education and Teaching International, 47(4), 417–431.

  • Trentin, G. (2009). Using a wiki to evaluate individual contribution to a collaborative learning project. Journal of Computer Assisted Learning, 25(1), 43–55.

  • Viégas, F. B., Wattenberg, M., & Dave, K. (2004). Studying cooperation and conflict between authors with history flow visualizations. In Proceedings of the 2004 conference on human factors in computing systems - CHI ‘04, (pp. 575–582). New York: ACM Press https://doi.org/10.1145/985692.985765.

  • Wattenberg, M., Viégas, F., & Hollenbach, K. (2007). Visualizing activity on Wikipedia with chromograms. In C. Baranauskas, P. Palanque, J. Abascal, & S. Barbosa (Eds.), Human-Computer Interaction – INTERACT 2007 (Vol. 4663, pp. 272–287). Berlin, Heidelberg: Springer.

  • Xiao, Y., & Lucking, R. (2008). The impact of two types of peer assessment on students’ performance and satisfaction within a Wiki environment. The Internet and Higher Education, 11(3), 186–193.

Acknowledgments

Thanks to José Tomás Tocino and Alberto Pinteño for supporting the development of AMW.

Funding

This work was supported by the Spanish Government under the VISAIGLE project (grant TIN2017-85797-R); and the Andalusian Government under the programme for Research and Innovation in Education (grant PI2_12_029).

Availability of data and materials

Author information

Contributions

The course through which the scalable assessment methodology was applied and studied was run by Prof. MP. The research was conducted by Dr. AB and supervised by Prof. MP and Prof. JD. Prof. MI and Prof. GR developed the theoretical framework. All authors approved the manuscript for submission.

Corresponding author

Correspondence to Antonio Balderas.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Balderas, A., Palomo-Duarte, M., Dodero, J.M. et al. Scalable authentic assessment of collaborative work assignments in wikis. Int J Educ Technol High Educ 15, 40 (2018). https://doi.org/10.1186/s41239-018-0122-1

  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1186/s41239-018-0122-1

Keywords