
Turning a traditional teaching setting into a feedback-rich environment

Abstract

There is a constant need for new ways of motivating students, providing them with prompt feedback and helping them retain the material covered in lectures. This need is met here by introducing a game (a cycling race inspired by ‘Le Tour de France’) built around a Student Response System in which students are posed questions aligned to learning outcomes, which they answer on their own tablets, laptops or smartphones during lectures. Stages take place at selected lecturing slots and, for each stage, standings with times are allocated to the students based on the accuracy and speed of their replies. By adding the times of all stages, it is possible to obtain the overall standings. All this information is updated live during and right after each stage. The learning experience is tested in two civil engineering subjects where students’ satisfaction and performance are shown to be significantly enhanced.

Motivation

Educational technology has evolved significantly in the twenty-first century. Online courses are becoming increasingly popular and have expanded into universally accepted Virtual Learning Environments (VLEs). Britain and Liber (2004) confirm an increase in the uptake of VLEs across all UK Higher Education sectors, with Blackboard and WebCT dominating the market. VLEs support uses such as access to course material (the most popular), access to web-based resources, collaborative working, assignment submission, formative assessment, peer support, e-assessment, online student presentations, access to multimedia resources, problem-based learning, e-portfolios, learning design, etc. However, Britain and Liber point out that Higher Education Institutions still lack innovative educational technologies in the field of wireless and mobile phones.

It has also been reported that e-learning has focused too much on producing digital content and not enough on the learning itself (Squire, 2005), and that students sometimes feel a sense of isolation in online or distance education courses. Anderson et al. (2001) and Siemens (2002) underline the importance of teaching presence, where the traditional ‘provider of knowledge’ becomes a ‘facilitator of knowledge’ and the student shifts from being a ‘passive learner’ to an ‘active learner’. Whether the material is delivered online or not, it can be tedious and boring, and/or the feedback insufficient or late, leading to student frustration and disengagement. The latter often results in procrastination (Steel, 2007) and/or high student absenteeism and dropout rates (López-Bonilla & López-Bonilla, 2015). Therefore, the ongoing challenge for teachers is how best to engage students when facilitating knowledge.

In order to meet this challenge, as well as to address the need for novel applications of wireless and mobile phones in Higher Education, this paper combines Student Response Systems (SRSs), also known as Audience Response Systems (ARSs), with a game element within the lecture theatre. The concept of using games for motivating groups is often referred to as gamification (Filsecker & Hickey, 2014; Kulmer et al., 2016; Morillas Barrio et al., 2016; Sailer et al., 2017; Wang et al., 2016). The SRS game proposed here lays the basis for continuous monitoring and assessment that provides immediate feedback to both facilitator and students. The students learn and test their knowledge by playing on their own wireless-based mobile, tablet or laptop, while results are updated live and anonymously on the theatre screen. It is expected that its implementation will make lectures more entertaining and motivational and promote student attendance and engagement.

Literature review

There is evidence to demonstrate that students appreciate feedback and that learning improves when students become aware of how well they are doing and what they need to do to improve, provided favourable learning conditions are met first (Hattie & Timperley, 2007; Sadler, 2010). Difficulties may arise from the different perceptions of facilitators and students on the amount of detail of feedback, its usefulness, the extent to which students are only interested in grades and the fairness of marking procedures (Carless, 2006). In order to meet conditions for good feedback, Nicol and Macfarlane-Dick (2006) establish that it should: (1) clarify what good performance is, (2) facilitate the development of self-assessment in learning, (3) deliver high-quality information to students about their learning, (4) encourage teacher and peer dialogue around learning, (5) encourage positive motivational beliefs and self-esteem, (6) provide opportunities to close the gap between current and desired performance and (7) provide information to teachers that can be used to help shape teaching. Wiliam (2011) reports on different kinds of feedback interventions ranging from weak feedback only (students are given only the knowledge of their own score) to strong formative assessment (students are given information about the correct results, some explanation, and specific activities to undertake in order to improve), with the average effect size increasing from 0.14 to 0.56. Another factor to take into account in the implementation of feedback is how quickly it follows the student’s response. For instance, one of the main reasons that one-to-one tutoring is so effective is that errors in the student’s work are identified immediately, and explanations and further follow-up are then provided whenever deemed necessary (Wiliam, 2011). In this paper, a SRS game is brought into the learning environment to detect potential problems sufficiently early (i.e., during learning) and to build student trust.

On the pedagogical use of games

Since children and adults learn and develop through play, there is a rationale behind integrating games within teaching strategies to promote active learning and enhance students’ critical thinking abilities. Papert (1998) and Baker et al. (2010) associate poorer learning with boredom. In order to tackle the latter, games have been widely adopted to improve children’s learning. Typically, the game assigns scores to the students as a function of the accuracy of their answers. Only recently have games been targeted as a means to develop innovative forms of learning in Universities (Pivec et al., 2003). Examples in Higher Education are games based on television shows (Glendon & Ulrich, 2005) and on sports competitions (Gonzalez et al., 2014; Gonzalez & Covian, 2015) that divide the class into teams that play against each other using oral or written questions. Morillas Barrio et al. (2016) attribute higher student motivation and a greater degree of connection in class to the use of competitive games. Similarly, Burguillo (2010) uses game theory tournaments to support competition-based learning, i.e., to achieve learning through a competition, even though the student’s score in the competition is independent of the learning result. This must be distinguished from competitive learning, where the learning depends on the result of the competition itself. A question then arises: ‘How should a game-based learning environment be assessed?’ Ifenthaler, Eseryel, and Ge (2012) distinguish three alternatives: (1) game scoring, which focuses on targets achieved while playing the game and the time taken to complete the tasks, (2) external, which is not part of the game-based environment (i.e., assessment via interviews, knowledge maps, essays, etc.) and (3) embedded, which does not interrupt the game (i.e., clickstreams, information trails).

Technology can be brought into educational games (i.e., computer and video games), as discussed by Gee (2003), to get people to learn and master something that is long and challenging, and to enjoy it too. He argues that books and videos, for all their virtues, cannot engage players/students in ‘action at a distance’, much like remotely manipulating a robot, which causes humans to feel as if they have stretched into a new space. The relevance of instructional support in game-based learning is demonstrated by Wouters and Van Oostendorp (2013). The growth rate of publications on digital game-based learning has increased significantly since the beginning of the century (Hwang & Wu, 2012). Squire (2005) provides a model of learning with digital gaming technologies and guidelines to apply them in e-learning. The objectives of the game need to be well structured, sequential and with a sustained meaning to motivate players to achieve those goals (Kapp, 2012). In an Elementary School, Filsecker and Hickey (2014) show how external rewards or competitions do not have to imply a negative impact on students’ motivation if the learning environment provides adequate feedback and opportunities to improve. In a Secondary School, Huizenga et al. (2009) employ mobile game-based learning to teach historical subjects, with these students outperforming others taught with regular project-based lessons. In spite of these results, they surprisingly discover that there are no significant differences in motivation for the subject between learners with or without a game-based approach. It must be clarified that by motivation they specifically refer to interest in the subject as opposed to fun and/or engagement. In High School, Annetta et al. (2009) find that students using video games in their learning of a biology course are more engaged than students with the traditional print material. Statistical results do not appear to indicate that they have a greater understanding of the concepts being taught; however, cognitive processing is only one factor that contributes to effective learning, and affective impacts and emotional factors should also be considered.

In Higher Education, Becker (2001) asks 1st year students in computer science to write games (‘minesweeper’ and ‘asteroids’) as a way to motivate and inspire them. Becker concludes that students gain experience with all the topics that the assignment was designed to exercise, and learn concepts more thoroughly than they would have had they not been so keen. It is acknowledged that some educational computer games, such as Mindtools, may pose difficulties to students when played individually, an issue that is addressed by Sung and Hwang (2013) via a collaborative approach. Blunt (2007) notes that University classes in business, economics and management using video games outperform classes that do not use them, regardless of gender. However, student populations aged over 40 years did not show an improvement with the use of video games. In line with this finding, Whitton (2007) argues that the belief that computer games are intrinsically motivational, and as a result a useful educational tool with which learning happens almost without the individual realising it, may be true for children, but it cannot be generalized to older populations in Higher Education. However, she acknowledges that Higher Education students may be motivated to use games if they perceive them as the most effective way to learn. Going further, Huang et al. (2010) reveal a significant correlation between learners’ motivational and cognitive processing based on 144 undergraduate students. The use of a computer-based learning game by Kulmer, Wurzer, and Geiger (2016) to teach content in Electrical Engineering leads to a strong relationship between the number of times the game is played and student performance, together with a student desire for more learning games. An example of the use of computer games to support the learning of theory of structures in Civil Engineering at Master level is provided by Ebner and Holzinger (2007). They find that it is key to motivate the students to play the game, which they recommend should be simple, available anywhere and anytime, playable within a short time and part of a competition.

On the use of student response systems in lectures

Crews et al. (2011), Bruff (2009) and many others suggest SRSs (also known as clickers) as a means to make students focus and participate actively in lectures. The facilitators use the SRS to send specific questions to the students in real time, which they answer using remote handsets, tablets, laptops or smartphones. SRSs commonly have the option of providing anonymous results or linking responses to individual students. Once the students’ answers are gathered, the facilitator can make use of instant result aggregation and visualization to assess the level of understanding. Results are often viewed as a bar chart indicating how many students voted for each possible answer. There are four appealing features of SRSs:

  • formative assessment

  • real-time results (immediate feedback for both students and facilitator)

  • increase in student engagement

  • projection on screen during lectures for the class to see

SRS technology allows the facilitator to create multiple-choice, text, or numeric response question formats. These questions can be reused in subsequent academic seasons. A thorough review of the state of the art in the use of clickers by Caldwell (2007) highlights their value for introducing peer learning methods in large classes and provides guidelines for writing good questions and best-practice tips. Wieman et al. (2008) also give recommendations on the ideal approach to formulating questions that will engage students in the use of SRSs. Gok (2011) reviews the benefits of SRSs reported in the literature and classifies them into three categories with sub-categories, namely, student involvement (attendance, attention and anonymity, participation and engagement), learning (interaction, discussion, contingent teaching, learning performance and quality of learning) and assessment (feedback, formative and comparison to class responses). Kay and LeSage (2009) survey the literature for advantages and disadvantages of using SRSs, finding that the time needed to set up a SRS, to create effective questions, to cover the material adequately, and to respond to instantaneous student feedback stands as a major challenge for the facilitator.

Trees and Jackson (2007) point out that, unlike websites, PowerPoint, WebCT, and Blackboard, the success of clickers depends less on the facilitator and more on the students accepting that clickers positively affect their learning. There are many records of successful implementation of SRSs in the literature. For example, Cohn and Fraser (2016) observe large differences of 1.17–2.45 standard deviations for seven learning environment, attitude and achievement criteria in favour of a group of 532 science students using a SRS, compared to a group of 565 who did not use a SRS, in a Middle School. D’Inverno et al. (2003) use clickers in both lectures and tutorials to promote greater student interaction in a large lecture class in Engineering mathematics. Forty students score the quality of the clicker tutorials with an excellent mean value of 4.4 on a scale of 1–5. Stowell and Nelson (2007) compare clickers to standard lecture, hand-raising and response card methods of student feedback for 140 undergraduates taking Psychology courses in their 1st year in college. The SRS group is found to have the highest participation and greater positive emotion during lectures, and to be more likely to respond honestly to in-class review questions. 1221 undergraduates taking Chemistry courses participate in an investigation by Hall et al. (2005) showing that semesters using a SRS lead to substantially better grades than semesters without a SRS. In addition, students indicate an increased level of engagement, learning, and motivation. Preszler, Dawe, Shuster, and Shuster (2007) find that the more frequent the use of SRSs in lectures, the larger the increase in students’ learning and the improvement in students’ exam performance for all six Biology courses tested. The clickers also act as a reflective tool through which facilitators engage in their own educational development, helping them to make learning environments more student-centred. With the objective of developing higher-level thinking skills in mind, Dangel and Wang (2008) provide a framework that couples SRSs with good pedagogical practice to promote deep learning.

Crossgrove and Curran (2008) confirm that exam performance improves with the use of clickers for Biology courses, although changes are more dramatic in non-major than in major courses. They also carry out a post-course test to analyse the long-term retention of material, which they observe to increase with the use of clickers for the non-major course, but interestingly, not for the major course. The authors argue that the smaller impact on the major course may be due to a somewhat lower level of feedback. Gok (2011), based on a survey of 241 male and 262 female students, from freshman to senior, taking chemistry, physics and geology courses, and Morillas Barrio et al. (2016), based on a survey of 77 male and 54 female students, aged 15 to 24, taking telecommunication engineering, socio-statistics and computer courses, find that male students have significantly more positive attitudes towards SRSs than female students. Nevertheless, shy students and female participants prefer the SRS to raising their hands when having to answer controversial questions in a psychology course, and as a result, the SRS contributes to a greater diversity of students’ opinions (Stowell et al., 2010). MacGeorge et al. (2008), in contrast, do not find differences in gender, ethnicity or academic year when a SRS is evaluated by 854 undergraduates (approx. 50% of each gender) attending courses on social scientific theory in communication, natural resource conservation, and introduction to business.

Remote handsets are becoming less common as Wi-Fi devices become more available. SRSs based on handheld mobile Wi-Fi devices remove the need for facilitators to have traditional remote handsets in the vicinity. A Wi-Fi based SRS is exploited by Stav et al. (2010) to provide an instruction-training course between different countries via video conferencing. Socrative (Socrative, 2017; Mendez & Slisko, 2013; Walsh, 2014; Awedh et al. 2014; Dervan, 2014), ExitTicket (2017), PollEveryWhere (2017) and Qwizdom (2017) are popular Wi-Fi based SRS alternatives. Some SRSs have options to introduce team competitions or games to make the questioning more attractive, such as Kahoot (Wang, 2015; Wang et al., 2016; Zainol & Kamaru, 2017; Grinias, 2017) or Quiz-a-tron (Wang & Hoang, 2017), but not to the level intended in this investigation. It is worth noting the comparisons between gamified and non-gamified versions of a SRS reported by Morillas Barrio et al. (2016) and Wang et al. (2016). Results indicate that the gamified SRS leads to significant improvement in motivation, engagement, enjoyment, and concentration, but the gain in learning performance has low statistical significance and remains to be proven. Furthermore, Wang (2015) investigates the danger of students losing interest in the game when it is employed on a continuous basis for a long period, but the wear-off effect is found to be minimal on motivation and engagement, and null on perceived learning. However, he acknowledges that if the same game were frequently used in many courses, the wear-off effect could become substantial. Therefore, a variety of games or game modes, as proposed in Wang and Hoang (2017), is needed. ExitTicket is the SRS platform chosen to implement the game proposed in the following sections, due to its appealing graphical interface and its ability to measure both the percentage of successful answers and the speed of reply by the students. These two parameters will be used to convert the data gathered by the SRS application into an exciting game.

Method

Participants and context

The concept is tested on two CiVil ENgineering (CVEN) modules of an Irish University during the first semester of the academic season 2015/16. The overall grade for both modules is the result of combining a continuous assessment component and an unseen end-of-semester exam. The weight of the continuous assessment is approximately determined by the ratio of the duration of the activity and its preparation to that of the course (Hornby, 2003), and the remaining percentage is assigned to the exam, in line with the high and low stakes typically given to summative and formative assessments respectively. Details on these modules are presented next.

  • CVEN30170 (title: Stress + Finite Element Analysis): This is a 5-credit 3rd year core module delivered within a 12-week lecturing period. The syllabus is divided into two sections of equal weight and duration, ‘Stress Analysis’ and ‘Finite Element Analysis’, each imparted by a different lecturer. There are 13 Civil Engineering undergraduates (1 female and 12 male; 2 international and 11 national) and 13 Structural Engineering with Architecture undergraduates (2 female and 11 male; 1 international and 12 national) registered for this module. CVEN30170 has a continuous assessment component worth 30% and a 2-h unseen end-of-semester examination worth 70%. The continuous assessment component consists of a computer lab assignment (15%) and tutorials (15%), but the tutorials have been replaced by the SRS game in the current season.

  • CVEN40150 (title: Structural Analysis, Design, and Specification): This is a 5-credit 4th year core module delivered within a 12-week lecturing period. The syllabus is divided into three sections of equal weight and duration, ‘Structural Analysis’, ‘Structural Design’ and ‘Specification’, each imparted by a different lecturer. There are 12 and 13 students registered for the Civil Engineering (1 female and 11 male; 2 international and 10 national) and Structural Engineering with Architecture (5 female and 8 male; 8 international and 5 national) programmes respectively. CVEN40150 has a continuous assessment component worth 20% and a 2-h unseen end-of-semester examination worth 80%. The continuous assessment component typically consists of tutorials only, but this season 10% has been assigned to tutorials and 10% to the SRS game. Here, tutorials refer to a facilitator providing a set of questions that students must answer in class and submit within an allocated time. During this time, students solve the tutorial individually, but they can use their notes, help each other and/or request the assistance of the facilitator whenever needed.

From past seasons, students appear to have a high degree of satisfaction and performance in CVEN40150, but they find CVEN30170 too abstract, being unable to perform in the exam at the same level as in tutorials, where they receive additional support. In CVEN40150, the average score of a total of 118 students in the exam component of the previous three years has been 54.5%, with a failure rate of 25.9% (the passing threshold is set at 40%). In CVEN30170, a total of 76 students have had an average exam score of 43.5% with a failure rate of 42% over the same three years. It must be noted that these results correspond to the exam component only and that the continuous assessment component needs to be added to compute the final grade for the module. One potential cause of the high failure rate in the exam is the tendency of students to leave their studying to the very end. Another possible cause is that students’ learning is not being supported correctly and their effort is being mis-orientated (condition 3 in Gibbs & Simpson, 2004). The feedback received from the exam score (assessment ‘of’ learning) at the end of the semester can be used to corroborate whether the learning outcomes have been met, but it is not that useful in contributing to achieving the learning outcomes. Gibbs and Simpson (2004) warn that ‘learners need prompt feedback to learn effectively’. This brings up a question: ‘Is sufficient and efficient feedback provided to the students?’ It could be that the feedback provided in lectures to date is not fully satisfactory, and a more self-assessment orientated strategy should be sought (Sadler, 2010). It becomes evident that complementary assessment methods (‘for’ and ‘as’ learning) that provide early feedback to the student are necessary.

Having said that, an overload of assignments and conflicts of coursework with other modules in the same semester are not desirable. Therefore, the proposed method aims to promote frequent use of the SRS game within the existing time slots scheduled for lectures and tutorials, in an attempt to overcome some of the concerns described earlier. Out of the total number of registered students, 25 and 24 students are exposed to the SRS game in CVEN30170 and CVEN40150 respectively. The approach is implemented and tested on the ‘Stress Analysis’ contents of CVEN30170, delivered during the second half of the semester (≈ 24 h), and on the ‘Structural Analysis’ contents of CVEN40150, delivered during the second third of the semester (≈ 14 h). The selected contents make up a virtual race whose stages consist of questions to be answered via the SRS. The questions in the stages are related to the learning outcomes being taught in lectures at the time. In CVEN30170, these involve calculation of stresses and strains, constitutive equations, Mohr’s circle and stress functions. In CVEN40150, the moment distribution method, elasto-plastic analysis of beams and analysis of slabs are covered.

Objectives

On the basis of past experience, the main objectives of the SRS game proposed here are:

  • To develop a highly motivational learning experience.

  • To provide students with immediate feedback that allows them to assess their understanding and makes them aware early on if further study is needed.

  • To provide the facilitator with immediate feedback on which topics need to be reinforced, redesigned or customized for those students in need.

  • To make available a tool that allows students to learn and to test their knowledge at any time via their own Wi-Fi enabled devices.

  • To increase the retention of material covered at lectures.

  • To serve as a teaching strategy to enhance the curriculum, to make learning fun and to engage and challenge students.

In the process of achieving these objectives, it is expected that the seven principles for good feedback practice by Nicol and Macfarlane-Dick (2006) will be met.

Implementation

The steps towards the implementation of the proposed educational game, how it operates and how it is integrated within lectures are introduced next.

The theme

According to Squire (2005), games must first create an emotionally compelling context for the player. Cycling has been chosen as the theme of the game because students are likely to connect with it. Most people have cycled at some point and some follow the big cycling events with interest. ‘Surviving Le Tour de France’ is presented to them as a rich storyline with a longitudinal development (i.e., from the start to the end of a lecture and from the start to the end of the module). Getting better at cycling (or at any particular set of skills) is about endurance, sacrifice and lots of training, and similarly, the game rewards students who work hard and do well.

The syllabus of a module is divided into the stages of a simulated cycling race. A stage is basically an online test consisting of a set of questions addressed to the students, often released in a step-by-step format (i.e., breaking a problem into smaller ones). One stage takes place at each lecture. The time taken by a student is a function of the accuracy of the answers, with ties between students resolved by the speed of reply. Following a lecture, final stage standings with times are allocated to each student, and those students ranked in the top positions get a time bonus as an incentive. By adding the times of all stages, it is possible to obtain the overall standings. All this information is updated right after each stage in the Blackboard VLE. The timing of the students in the overall standings is employed for continuous assessment purposes (“Participants and Context” Section).
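The paper does not prescribe an exact formula for converting accuracy and speed into race times; the Python sketch below is a minimal illustration of the mechanics just described (stage times driven by accuracy, ties broken by reply speed, bonuses for the top finishers, and overall standings as the sum of stage times). The penalty and bonus values, as well as all names, are illustrative assumptions rather than the values used in the modules.

```python
from dataclasses import dataclass

# Assumed values for illustration only: a 60 s penalty per wrong answer and
# 10/6/4 s bonuses for the top three finishers of a stage.
PENALTY_PER_WRONG = 60.0
TOP_BONUSES = [10.0, 6.0, 4.0]


@dataclass
class StageResult:
    student_id: str      # anonymous cyclist id shown on the theatre screen
    correct: int         # number of correct answers in the stage
    total: int           # number of questions in the stage
    reply_time: float    # cumulative speed of reply in seconds


def stage_standings(results):
    """Rank one stage: time grows with wrong answers, ties resolved by speed."""
    def raw_time(r):
        return (r.total - r.correct) * PENALTY_PER_WRONG

    ranked = sorted(results, key=lambda r: (raw_time(r), r.reply_time))
    standings = []
    for position, r in enumerate(ranked):
        bonus = TOP_BONUSES[position] if position < len(TOP_BONUSES) else 0.0
        standings.append((r.student_id, raw_time(r) - bonus))
    return standings


def overall_standings(stages):
    """Overall classification: sum each student's stage times over all stages."""
    totals = {}
    for stage in stages:
        for student_id, time in stage:
            totals[student_id] = totals.get(student_id, 0.0) + time
    return sorted(totals.items(), key=lambda item: item[1])
```

In the actual course, accuracy and reply speed are recorded by ExitTicket, and the transfer of stage results into the overall classification is currently handled manually (see the Conclusions).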

Arrangements before and during a stage

Facilitators and students have access to different areas of the SRS application. Before a stage, the facilitator needs to plan for and upload a database of questions (multiple-choice with randomized answers, true/false, numerical, etc.) to her/his area within the SRS. A specific set of questions, classified according to the degree of difficulty, is planned for each lecture of the academic calendar. These questions are aligned to learning targets and mapped to module outcomes. Figure 1 illustrates these preliminary arrangements.

Fig. 1 Before a stage
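The paper does not detail the format in which the question bank is stored; as a rough illustration of the arrangement described above (question types, difficulty classification and mapping to learning outcomes), an entry could be modelled as follows. Field names and the example question are assumptions, not ExitTicket’s data model.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional


class QuestionType(Enum):
    MULTIPLE_CHOICE = "multiple_choice"   # answers presented in randomized order
    TRUE_FALSE = "true_false"
    NUMERIC = "numeric"


@dataclass
class Question:
    text: str
    qtype: QuestionType
    difficulty: int               # e.g. 1 (easy) to 3 (hard)
    learning_outcome: str         # module learning outcome the question maps to
    choices: Optional[List[str]] = None   # used for multiple-choice questions only
    answer: str = ""


# Hypothetical entry for a CVEN30170 stage on stress analysis
question = Question(
    text="The maximum in-plane shear stress equals the radius of Mohr's circle.",
    qtype=QuestionType.TRUE_FALSE,
    difficulty=1,
    learning_outcome="Apply Mohr's circle to plane stress problems",
    answer="True",
)
```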

A lecture slot (1 or 2 h) is divided into two parts. In the first part, the facilitator delivers or reviews the syllabus in a classical fashion. In the second part, students log in to the SRS via their own mobile/tablet to compete in a stage. During a stage, students answer the questions posed by the SRS on their own Wi-Fi enabled devices (smartphones, mobiles, and tablets) as pictured in Fig. 2. Each student is assigned an anonymous random id that s/he can identify on the screen of the lecture theatre. The adoption of this anonymous cyclist id not only protects the privacy of student records, but is also a way for the student to recreate herself/himself in a new world while achieving deep learning. Initially, all students are grouped together. The SRS is employed to record two features of every answer: accuracy and speed of reply. Correct and faster answers move students up the classification. During a stage, students can visualize progress on both their own device and the theatre screen in real time. Live questioning, instant result aggregation and visualization via projection on the theatre screen are key elements for a stage to engage students (Fig. 2). The SRS can also be accessed by students outside lecture hours (from anywhere with Wi-Fi access) for monitoring their progress or reviewing old material, although in this case their attempts do not count for grading purposes.

Fig. 2 During a stage

Feedback and assessment

Students and facilitator receive feedback via the SRS on how well the learning outcomes are being satisfied through the accuracy of the answers (Fig. 2). Figure 3 shows some of the wealth of information available after a stage for creating a rich feedback environment. The facilitator has access to individual and global scores, for each question and for each student, which are used as a reference for improving the delivery of the syllabus. Based on this information, the facilitator uses the theatre screen to report overall percentages of success for each question as well as to highlight typical errors and lessons learnt from each stage. This level of feedback fits with the principles by Nicol and Macfarlane-Dick (2006) listed in Section “Literature Review”. Other benefits of using a SRS for continuous assessment purposes include the principles of transparency, reliability (i.e., unaffected by human factors such as tiredness or criteria differences between markers), practicability and efficiency (i.e., the initial implementation and creation of sets of questions may be time-consuming, but once available, they can be reused to provide students and facilitator with automatic results and feedback). Despite these advantages, however, the author acknowledges that online assessment may not be able to measure all of a student’s knowledge compared to other methods of assessment. For this reason, the in-class SRS game proposed here is recommended as a formative low stakes assessment aimed at giving students timely and good feedback as well as achieving the objectives outlined in Section “Objectives”.

Fig. 3 After a stage
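As an illustration of the aggregation behind this feedback, the per-question success percentages that the facilitator reports on the theatre screen could be computed along the following lines. This is a sketch only; it does not reflect ExitTicket’s actual export format.

```python
from collections import defaultdict


def success_per_question(answers):
    """answers: iterable of (student_id, question_id, is_correct) tuples
    gathered by the SRS during a stage. Returns the % of correct replies per question."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for _student, question, is_correct in answers:
        totals[question] += 1
        correct[question] += int(is_correct)
    return {q: 100.0 * correct[q] / totals[q] for q in totals}


# Hypothetical stage data: question Q2 stands out as a topic to revisit
answers = [("id01", "Q1", True), ("id02", "Q1", True),
           ("id01", "Q2", False), ("id02", "Q2", True)]
print(success_per_question(answers))  # {'Q1': 100.0, 'Q2': 50.0}
```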

Measuring tools

Four populations are employed to constitute two control groups (fifteen CVEN30170 and twenty-three CVEN40150 students in 2014/15) and two test groups (twenty-three CVEN30170 and twenty-one CVEN40150 students in 2015/16). The two test groups are taught with the SRS game. The two control groups learn with conventional teaching methods as described in “Participants and Context” Section. The measuring tools consist of a confidential in-class questionnaire to be filled in by the test groups before completion of the semester, and confidential online surveys and exam results to be taken by both control and test groups at the end of the semester. Students’ satisfaction in the online survey and in-class questionnaire is measured using a 5-point Likert scale ranging from ‘Strongly Agree’ to ‘Strongly Disagree’. The categories ‘Strongly Agree’, ‘Agree’, ‘Neither Agree nor Disagree’, ‘Disagree’ and ‘Strongly Disagree’ are weighted 5, 4, 3, 2 and 1 respectively to obtain a mean value, which is higher the greater the overall satisfaction. While only a small percentage of students (28.2% and 19.6% of the 2014/15 and 2015/16 classes respectively) filled in the online survey, all students in the test groups took the in-class questionnaire and the end-of-semester exam.
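For clarity, the conversion of Likert categories into the mean values reported later (Tables 1 and 2) amounts to the weighted average below; the responses shown are hypothetical.

```python
# Weighting taken from the text: Strongly Agree = 5 ... Strongly Disagree = 1.
LIKERT_WEIGHTS = {"Strongly Agree": 5, "Agree": 4, "Neither Agree nor Disagree": 3,
                  "Disagree": 2, "Strongly Disagree": 1}


def mean_likert(responses):
    """Mean satisfaction for one question; higher means greater overall satisfaction."""
    scores = [LIKERT_WEIGHTS[r] for r in responses]
    return sum(scores) / len(scores)


# Hypothetical responses to a single questionnaire item
print(mean_likert(["Agree", "Strongly Agree", "Agree", "Neither Agree nor Disagree"]))  # 4.0
```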

The online survey has five core questions determined by the School of Teaching and Learning that are common to all academic modules. Data are gathered via the Blackboard VLE, where students are asked to answer these questions voluntarily, and the questions are repeated for all modules every year. A historical comparison between different modules and academic seasons is therefore possible. The online survey is open to students from mid-semester to the end of the semester (beyond the examination period). From the control and test groups, eleven and nine students filled in the online survey in 2014/15 and 2015/16 respectively.

A short in-class questionnaire consisting of eight specific questions is designed to assess students’ motivation and attitude towards the SRS game, and how the objectives outlined in “Objectives” Section are met. Some of these questions are similar to others published elsewhere (Hall et al., 2005; Crossgrove & Curran, 2008; Gonzalez et al. 2014; Gonzalez & Covian, 2015), although they may have been rephrased to adapt them to the proposal at hand. The questionnaire was provided on paper and collected at the end of a lecture that took place half-way through the first semester of 2015/16. Twenty-three CVEN30170 students and twenty-one CVEN40150 students (100%) filled in the questionnaire.

One of the advantages of the end-of-semester examination is that it allows all learning outcomes to be assessed together, although some authors argue that the exam is an artificial situation that, in some cases, may be a poor indicator of a student’s performance in real-life situations (Gibbs & Simpson, 2004) and that students may underperform as a result of anxiety. Even so, plagiarism is unlikely, and the exam measures the knowledge of the individual with the certainty that it is only his/hers. Therefore, exam scores are gathered for the 2014/15 and 2015/16 populations in CVEN30170 and CVEN40150 at the end of the semester. Information is also provided for a third module, CVEN30020, where there have not been any changes in the teaching method. CVEN30020 (title: Analysis of Structures) is a 5-credit 3rd year core module in the same subject area of structures, and in the same Civil Engineering and Structural Engineering with Architecture degrees, as CVEN30170 and CVEN40150. Even though both CVEN30020 and CVEN30170 are 3rd year core modules, there are small differences in the samples due to the participation of repeat students, international exchange students or the selection of these modules as electives by other degrees. For this reason, the total population defined in “Participants and Context” Section for CVEN30170 is reduced to a sample with the same fifteen students in 2014/15 and twenty-three students in 2015/16 that also took CVEN30020. The non-intervention CVEN30020 sample is employed as a form of pre-test, aiming to check that the two academic seasons have equivalent prior knowledge and academic ability. Syllabus and facilitators have remained the same in the three modules for the two academic seasons. In all cases, the end-of-semester examination lasts two hours, and students do not have access to material other than the information provided with the script.

Results

This section provides a measure of how students’ feedback and performance at the end-of-semester exam support the innovative approach.

Students’ feedback

General online survey

Table 1 compares mean Likert values obtained in the academic season 2015/16 to the immediately previous season 2014/15 and to an average of the three preceding seasons (2012/13, 2013/14 and 2014/15). The very low sample size of the test and control groups that participated in the online survey makes it impossible to assess the improvement in 2015/16 with respect to previous years. Therefore, the discussion in this section is centred on how the two modules under investigation (CVEN30170 and CVEN40150) historically compare to each other and to the average for all CVEN modules over the long period from 2012 to 2015. It can be seen that the degree of satisfaction when considering an average of all CVEN modules in the Civil Engineering School is fairly consistent throughout all seasons and questions, and it oscillates around a value of 4. In relation to question C1, scores in CVEN40150 indicate that students feel the module content has contributed to their overall understanding of the subject to a greater extent than the average of all CVEN modules. However, the CVEN30170 score in C1 is relatively low compared to the average of all CVEN modules, which suggests that students feel that the material, teaching or content did not improve their understanding of the topic. When analysing C2, scores are high for both modules, i.e., the module’s assessment facilitated students’ learning. According to question C3, CVEN40150 students appear to be well aware of how the taught material is relevant to the module outcomes. This is evidenced by an average score of 4.14 in the preceding three seasons, compared to 3.91 for all modules. However, a score of 3.42 by CVEN30170 students from 2012 to 2015 is again significantly lower than that given by CVEN40150 students, as if they did not have a clear idea of what the learning outcomes are or how they relate to the module. Similarly, CVEN30170 students give a score of 3.42 for C4 compared to 4.16 by CVEN40150 students. Finally, answers to question C5 refer to an overall feeling on the content, structure, and teaching of the modules. In the three seasons from 2012 to 2015, they yield mean values of 4.13 and 3.34 for CVEN40150 and CVEN30170 respectively.

Table 1 Mean of Responses to General Questions (maximum and minimum are 5 -strongly agree- and 1 -strongly disagree- respectively)

Although the sample sizes do not allow for a meaningful comparison of how the intervention in 2015/16 performs, it transpires that CVEN30170 has been a less popular module than CVEN40150 in the preceding years, and, as already noted in “Participants and Context” Section, one that students find particularly difficult. Overall, it is possible to observe a more positive student perception in 2015/16 than in previous years; however, this interpretation is at best tentative, as students are providing feedback on the module as a whole, possibly after the end-of-semester examination (making it prone to bias), and the percentage of participation is very low. The following subsections aim to corroborate the encouraging, although limited, views gathered by these general questions.

Specific in-class questionnaire

In order to gather a more representative sample, anonymous in-class student questionnaires are circulated as described in “Measuring Tools” Section. The percentage of hits for each response is illustrated in Fig. 4. There is not a single record of ‘Strongly Disagree’ on the targeted questions amongst the forty-four students who completed the questionnaire. The largest proportion of CVEN30170 students ‘Agree’ with five questions and ‘Strongly Agree’ with the remaining three questions. For CVEN40150 students, ‘Agree’ and ‘Strongly Agree’ are the majority responses in six and two questions respectively. Both classes mostly ‘Strongly Agree’ with Q3, i.e., they work harder and pay more attention as a result of being informed of their scores in real time.

Fig. 4 Student feedback: (a) Q1 - The new approach was efficient in allowing me to practice and retain material taught in lectures & test my knowledge on a continuous and prompt basis, (b) Q2 - Revising answers and my score let me reflect on the topic and made me aware of points that I failed to understand or that I needed to reinforce, (c) Q3 - Being informed of my performance in real-time made me want to work harder and to pay more attention to notes and to preceding lectures to improve my score, (d) Q4 - The alignment of all questions posed by the SRS to learning outcomes helped me to be aware of the concepts/skills that I need to achieve for the module, (e) Q5 - I believe it is a good idea to organise SRS questions and answers around a game theme, (f) Q6 - I found lectures to be more engaging with the use of a wireless-based SRS than without it, (g) Q7 - I feel my learning has been accelerated by the guided step-by-step questioning of the SRS approach used here, (h) Q8 - Overall I would recommend the use of a wireless-based SRS approach in the future

Table 2 provides mean and standard deviation values per question to allow for a meaningful comparison of the degree of fulfilment. While CVEN30170 students consider the lectures becoming more engaging with the use of the wireless-based SRS to be the most positive impact (Q6 with 4.61), CVEN40150 students most appreciate the organisation of SRS questions around a game theme (Q5 with 4.43). The average degree of satisfaction of CVEN30170 (4.32 across the 8 questions) appears to be higher than that of CVEN40150 (4.17). The latter can be attributed to the characteristics of the syllabi, the difference in age between the samples, duration (shorter period of exposure in CVEN40150) and timing (the SRS game was delivered in CVEN40150 first, and some aspects were improved later in CVEN30170). The standard deviation in CVEN40150 is larger than in CVEN30170 in 6 out of the 8 questions. When averaging the answers of both classes, three questions stand out with the highest score of 4.43: Q8, recommending its use in the future, and the aforementioned Q3 and Q6, i.e., students working harder and being more engaged in lectures.

Table 2 Mean and standard deviation of responses to specific questions (maximum and minimum are 5 -strongly agree- and 1 -strongly disagree- respectively)

Open-ended comments

The questionnaire also included open-ended comments that reinforced students’ positive perception of the new approach. A sample of quotes is grouped here under five major identified topics.

  1. The SRS game increases students’ participation and engagement: ‘In-class quizzes are very good and actually make you pay attention in class!’, ‘It was a really good way to learn and I paid a lot more attention in class and my attendance increased’, ‘I found the SRS a very useful tool in aiding my learning for the module; I definitively kept more focused in class than I would have otherwise!’

  2. The SRS game enhances metacognitive awareness: ‘This method was very effective as it kept the second hour of the class quite lively. The step by step method helped me identify areas I need to work on’

  3. The SRS game accelerates students’ learning and increases the level of retention in lectures: ‘Tour de France allowed me for quick revision and allowed me to understand concepts almost immediately’, ‘Very useful for recapping tough subjects’

  4. The frequent assessment and feedback provided by the SRS game is found highly beneficial: ‘The SRS gave real examples of questions on a regular basis preparing me for the final exam’, ‘The section was well supported by continuous assessment, and I think the use of that really, really helped with the understanding and thought process being reinforced and transferred regardless of the application’

  5. The SRS game offers more support than other lectures delivered in a traditional format, which would benefit from the implementation of the same idea: ‘I would happily do this again as it was very worthwhile and the other part of the module would benefit from this idea going forward’. In relation to a section X of the module covered by a different lecturer in a more traditional format, students point out: ‘I think that, overall, section X was really unsupported in terms of ensuring understanding throughout the semester. Though it was lectured well, the lack of assessment or assignments throughout the semester meant it was not very well or thoroughly tested, when compared with the way in which the section with the SRS was imparted’

Students not only find assessments more interesting and enjoyable, but also feel that the perceived learning justifies the effort invested; as a result, they are more engaged and motivated.

End-of-semester examination

Those students exposed to the SRS game have produced average scores in the final exam of 51.4% and 60.0% in CVEN30170 and CVEN40150 respectively. These average scores represent relative increments of + 28.8% in CVEN30170 and + 4.9% in CVEN40150 with respect to the previous season, and of + 17.4% in CVEN30170 and + 8.5% in CVEN40150 with respect to an average of the previous three seasons. Comparing the previous three-year period to the current season, failure rates have dropped from 42.1% to 28.0% in CVEN30170 (when the continuous assessment component is added to the exam component, the overall failure rate in CVEN30170 further decreases from 28.0% to 8.0% in 2015/16) and from 25.9% to 8.7% in CVEN40150. These percentages refer to total scores, and it must be noted that the SRS game was applied only to a portion of the exam, weighing 1/2 and 1/3 of the CVEN30170 and CVEN40150 exams respectively.
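For clarity, the relative increments quoted above follow the usual definition of a percentage change between mean scores (a reading aid added here; the values themselves come from the paper):

$$\text{relative increment}\;(\%) = \frac{\bar{x}_{2015/16} - \bar{x}_{\text{reference}}}{\bar{x}_{\text{reference}}} \times 100$$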

The impact of the new approach can be evaluated more realistically when analysing the questions of the exam covered by the SRS game in isolation from the rest. Here, CVEN30170 students have gone from an average score of 32.0% in 2014/15 (fifteen students) to 53.0% in 2015/16 (twenty-three students), and similarly the average score in CVEN40150 increased from 67.2% in 2014/15 (twenty-three students) to 73.8% in 2015/16 (twenty-one students). Figure 5 compares the distribution of scores for the exam questions on SRS contents between both academic seasons. The mode of CVEN40150 reaches the top interval (≥ 90%) in 2015/16, and the mode of CVEN30170 has shifted from a failing range (< 40%) in 2014/15 to a medium range interval (50 to 60%) in 2015/16.

Fig. 5 Distribution of scores in exams: (a) CVEN30170, (b) CVEN40150

Table 3 compares the statistics of the final scores in the exam component for CVEN30170, CVEN40150, and CVEN30020 in the seasons 2014/15 and 2015/16. CVEN30020 is a 3rd year Civil Engineering module titled “Analysis of Structures” where a traditional format, similar to that employed with CVEN30170 in the season 2014/15 (consisting of lectures, tutorials, and computer labs), has been employed in both academic seasons. While mean scores show that students with the SRS game clearly outperformed those without it in CVEN30170, the improvement is smaller in CVEN40150, although better than with no intervention at all (i.e., CVEN30020). Cohen’s effect size and the t-test are employed here to address the fact that the difference of means between 2014/15 and 2015/16 does not take into account the variability within the groups. The effect size is calculated by dividing the difference between two means by the pooled standard deviation (Cohen, 1992). As expected, a difference of means of − 1.8, a negligible effect size of − 0.09 and p > 0.05 are found for CVEN30020, i.e., the performance in 2014/15 and 2015/16 is about the same. CVEN40150 shows a difference of means of 2.8 and a small effect size of 0.15 in favour of the SRS game. The latter means that students in 2015/16 scored around 0.15 standard deviations above the mean score received in 2014/15, with p > 0.05; therefore, there is no significant improvement. The effectiveness of the SRS game method in improving exam performance is felt most strongly in CVEN30170, with a difference of means of 11.5, a large effect size of 0.78 and p < 0.05, i.e., the performance is significantly improved. In summary, the SRS game appears to have been more effective in supporting students’ learning than the traditional setting, most prominently in the subject with the younger students and the lowest performance.

Table 3 Mean, Standard Deviation, Skewness and Kurtosis of Total Exam Scores (in percentage) for Three Modules and Two Academic Seasons, Effect Size and t-test Result ((*) includes SRS game intervention)
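The effect size and t-test values in Table 3 can be reproduced from each group’s mean, standard deviation and sample size; the sketch below shows the standard formulas (Cohen’s d from the pooled standard deviation and a two-sample t-test computed from summary statistics). The paper does not state the exact t-test variant used, so equal variances are assumed here, and the numbers in the usage example are illustrative rather than those of Table 3.

```python
import math

from scipy import stats


def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d: difference of means divided by the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean2 - mean1) / pooled_sd


def t_test_from_summary(mean1, sd1, n1, mean2, sd2, n2):
    """Two-sample t-test (equal variances assumed) from summary statistics."""
    return stats.ttest_ind_from_stats(mean1, sd1, n1, mean2, sd2, n2)


# Illustrative numbers only (not the values of Table 3): a control group of 15
# students versus a test group of 23 students.
d = cohens_d(40.0, 15.0, 15, 51.5, 14.0, 23)
t, p = t_test_from_summary(40.0, 15.0, 15, 51.5, 14.0, 23)
print(f"d = {d:.2f}, t = {t:.2f}, p = {p:.3f}")
```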

Limitations

There are limitations to the potential generalization of this investigation beyond the four populations of fifteen (CVEN30170, 2014/15), twenty-three (CVEN30170, 2015/16), twenty-three (CVEN40150, 2014/15) and twenty-one (CVEN40150, 2015/16) students tested on structures-related modules within an Irish Civil Engineering School. The SRS game is tested partially in 2015/16, i.e., in one-third of the lectures in CVEN40150 and in one-half of the lectures in CVEN30170. It is not possible to gain insights into any gender effects given that 82.3% of the population was male. Ideally, both control and test groups would have been selected within the same academic season and pre- and post-tests employed to evaluate students’ performance. However, control (2014/15) and test (2015/16) populations are selected from different academic seasons due to limited resources. Exam performance by the same students in another module without any intervention (CVEN30020) has been employed to compare the mean academic ability between 2014/15 and 2015/16. The populations taking the CVEN40150 exam have similar sizes in both years; however, the CVEN30170 population in 2015/16 is 53.33% larger than in 2014/15. It must also be taken into account that the level of difficulty of the exams in these two years is comparable but the questions are not identical.

Conclusions

In the introduction to this paper, it has been argued that there remain issues to be addressed in providing successful feedback and in engaging students. The benefits of using SRSs to fill the gap between what is understood and what is aimed to be understood have been published extensively. However, this paper has shown their applicability to a particular field in Civil Engineering education and has also introduced the novel idea of linking the SRS to a game relatable for the learners (‘Le Tour de France’). Students have been provided with a tool that brings the benefits of Wi-Fi based SRSs and games together to achieve high levels of student engagement. Here, the game is a race that has been designed as a type of formative assessment where students are graded according to their timing in the overall standings.

The approach has been successfully tested in two structures-related civil engineering subjects: CVEN30170 and CVEN40150. Compared to previous seasons, the impact of the SRS game has been noticed more strongly in CVEN30170 than in CVEN40150, largely due to its historically poorer exam performance and lower degree of student satisfaction. While students have received immediate feedback and a measure of their knowledge, the facilitator has been able to monitor their level of understanding on a continuous basis and to take remedial actions as needed. As a result, students have confirmed via confidential questionnaires and exams that they are studying harder and that they are learning more than with past traditional practices. These responses can be attributed to a better alignment between outcomes and assessment, and to the higher frequency and intensity of assessment via the SRS game. It is worth mentioning that students have rated the use of a game theme as one of the most positive aspects of the initiative. Firstly, instant feedback has led to more effective learning, in accordance with Gibbs and Simpson (2004) and Wiliam (2011). Secondly, the game component has brought increased effort and motivation from students compared to what is observed with a conventional SRS alone. As reported by Whitton (2007) and Huang et al. (2010), this motivation can be the result of perceiving the game as the most effective way to learn.

It must be noted that the SRS game can also be played outside the lecture theatre. By facilitating learning at different times and in different places, as recommended by Ebner and Holzinger (2007), the SRS helps students to build and maintain the empowering sense of taking charge of their own learning in a way that a traditional teaching setting cannot match. The fact that the students have been exposed to the SRS game for the first time could suggest that their positive perception has been affected by the novelty of the intervention. Nonetheless, students in these and other modules in successive years were exposed to the same SRS game, when they commented that ‘the Tour de France was a great success again’ and ‘a good way to learn’, and they kept emphasizing how useful it was in helping them to check whether they were grasping each concept in class and how it encouraged them to study the notes prior to the exam period. All these experiences justify the time investment that setting up the SRS game requires.

At present, the connection between the cloud-based SRS and the game is handled manually. Future developments seek to integrate both into a single interface where the information from each stage will be transferred into the overall classification automatically. The possibility of incorporating graphics with stage profiles (rougher the more difficult the questions), and of allowing students to choose a customized cyclist jersey, to visualize animated cyclists whose cycling pace is linked to their answers and superposed on the stage profile, to play with other peers and/or computer characters (i.e., simulated cyclists performing at a range of grade levels), or to hover over a specific cyclist or group of cyclists in the stage to display either the time difference, the full list of cyclists with their live times for the stage in progress or the live overall standings (sum of all stages up to that point), will be explored amongst other appealing features, in order to convert the suggested approach into a unique stand-alone educational game.

References

  • Anderson, T., Rourke, L., Garrison, D. R., & Archer, W. (2001). Assessing teacher presence in a computer conferencing context. Journal of Asynchronous Learning Networks, 5(2), 1–17.

  • Annetta, L. A., Minogue, J., Holmes, S. Y., & Cheng, M.-T. (2009). Investigating the impact of video games on high school students’ engagement and learning about genetics. Computers & Education, 53(1), 74–85.

  • Awedh, M., Mueen, A., Zafar, B., & Manzoor, U. (2014). Using Socrative and smartphones for the support of collaborative learning. International Journal Integrating Technololgy in Education, 3(4), 17–24.

  • Baker, R. S., D’Mello, S. K., Rodrigo, M. T., & Graesser, A. C. (2010). Better to be frustrated than bored: The incidence, persistence, and impact of learners’ cognitive–affective states during interactions with three different computer-based learning environments. International Journal of Human-Computer Studies, 68(4), 223–241.

  • Becker, K. (2001). Teaching with games: The minesweeper and asteroids experience. Journal of Computing Sciences in Colleges, 17(2), 23–33.

  • Blunt, R. (2007). Does game-based learning work? Results from three recent studies. In Proceedings of the Interservice/Industry Training, Simulation & Education Conference, 945–955.

  • Britain, S., & Liber, O. (2004). A framework for pedagogical evaluation of virtual learning environments, Educational Cybernetics: Reports. Paper 2, https://hal.archives-ouvertes.fr/hal-00696234/document. Accessed 12 Nov 2017.

  • Bruff, D. (2009). Teaching with classroom response systems: Creating active learning environments. John Wiley & Sons.

  • Burguillo, J. C. (2010). Using game theory and competition-based learning to stimulate student motivation and performance. Computers & Education, 55, 566–575.

  • Caldwell, J. E. (2007). Clickers in the large classroom: Current research and best-practice tips. CBE-Life Sciences Education, 6(1), 9–20.

  • Carless, D. (2006). Differing perceptions in the feedback process. Studies in Higher Education, 31(2), 219–233.

  • Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155–159.

  • Cohn, S. T., & Fraser, B. (2016). Effectiveness of student response systems in terms of learning environment, attitudes and achievement. Learning Environments Research, 19(2), 153–167.

  • Crews, T. B., Ducate, L., Rathel, J. M., Heid, K., & Bishoff, S. T. (2011). Clickers in the classroom: Transforming students into active learners. ECAR Research Bulletin, 9, 502.

  • Crossgrove, K., & Curran, K. L. (2008). Using clickers in nonmajors- and majors-level biology courses: Student opinion, learning, and long-term retention of course material. CBE-Life Sciences Education, 7(1), 146–154.

  • D’Inverno, R., Davis, H., & White, S. (2003). Using a personal response system for promoting student interaction. Teaching Mathematics and its Applications, 22(4), 163–169.

  • Dangel, H. L., & Wang, C. X. (2008). Student response systems in higher education: Moving beyond linear teaching and surface learning. Journal of Educational Technology Development and Exchange, 1(1), 93–104.

  • Dervan, P. (2014). Increasing in-class student engagement using Socrative (an online student response system). All Ireland Journal of Teaching and Learning in Higher Education, 6(3), 1801–1813.

  • Ebner, M., & Holzinger, A. (2007). Successful implementation of user-centered game based learning in higher education: An example from civil engineering. Computers & Education, 49(3), 873–890.

  • ExitTicket, 2017 originally at http://exitticket.org/ and currently migrated to http://gooru.org. Accessed 12 Nov 2017.

  • Filsecker, M., & Hickey, D. T. (2014). A multilevel analysis of the effects of external rewards on elementary students’ motivation, engagement and learning in an educational game. Computers & Education, 75, 136–148.

  • Gee, J. P. (2003). What video games have to teach us about learning and literacy. Computers in Entertainment (CIE), 1(1), 1–4.

  • Gibbs, G., & Simpson, C. (2004). Conditions under which assessment supports students’ learning. Learning and Teaching in Higher Education, 1(1), 3–31.

  • Glendon, K., & Ulrich, D. (2005). Using games as a teaching strategy. The Journal of Nursing Education, 44(7), 338–339.

  • Gok, T. (2011). An evaluation of student response systems from the viewpoint of instructors and students. TOJET: The Turkish Online Journal of Educational Technology, 10(4), 67–83.

  • Gonzalez, A., & Covian, E. (2015). Enhancing student performance through a competitive team tournament. In Proceedings of the 19th International Conference on Engineering Education (ICEE 2015), Zagreb, Croatia, July 20–24.

  • Gonzalez, A., Jennings, D., & Manriquez, L. (2014). Multi-faceted impact of a team game tournament on the ability of the learners to engage and develop their own critical skill set. International Journal of Engineering Education, 30(5), 1213–1224.

  • Grinias, J. P. (2017). Making a game out of it: Using web-based competitive quizzes for quantitative analysis content review. Journal of Chemical Education, 94(9), 1363–1366.

  • Hall, R. H., Collier, H. L., Thomas, M. L., & Hilgers, M. G. (2005). A student response system for increasing engagement, motivation, and learning in high enrollment lectures. In Proceedings of the Eleventh Americas Conference on Information Systems, Omaha, NE, USA, August 11–14.

  • Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.

  • Hornby, W. (2003). Case studies on streamlining assessment, https://doi.org/10.2139/ssrn.405760. Accessed 12 Nov 2017.

  • Huang, W.-H., Huang, W.-Y., & Tschopp, J. (2010). Sustaining iterative game playing processes in DGBL: The relationship between motivational processing and outcome processing. Computers & Education, 55(2), 789–797.

  • Huizenga, J., Admiraal, W., Akkerman, S., & Dam, G. T. (2009). Mobile game-based learning in secondary education: Engagement, motivation and learning in a mobile city game. Journal of Computer Assisted Learning, 25(4), 332–344.

  • Hwang, G.-J., & Wu, P.-H. (2012). Advancements and trends in digital game-based learning research: A review of publications in selected journals from 2001 to 2010. British Journal of Educational Technology, 43(1), E6–E10.

  • Ifenthaler, D., Eseryel, D., & Ge, X. (2012). Assessment for game-based learning. In Assessment in game-based learning (pp. 1–8). Springer.

  • Kapp, K. (2012). The gamification of learning and instruction: Game-based methods and strategies for training and education. John Wiley & Sons.

  • Kay, R. H., & LeSage, A. (2009). Examining the benefits and challenges of using audience response systems: A review of the literature. Computers & Education, 53(3), 819–827.

  • Kulmer, F., Wurzer, C. G., & Geiger, B. C. (2016). The magnitude response learning tool for DSP education: A case study. IEEE Transactions on Education, 59(4), 282–289.

  • López-Bonilla, J. M., & López-Bonilla, L. M. (2015). The multidimensional structure of university absenteeism: An exploratory study. Innovations in Education and Teaching International, 52(2), 185–195.

  • MacGeorge, E. L., Homan, S. R., Dunning Jr., J. B., Elmore, D., Bodie, G. D., Evans, E., … Geddes, B. (2008). Student evaluation of audience response technology in large lecture classes. Educational Technology Research and Development, 56, 125–145.

  • Mendez, D., & Slisko, J. (2013). Software Socrative and smartphones as tools for implementation of basic processes of active physics learning in classroom: An initial feasibility study with prospective teachers. European Journal of Physics Education, 4(2), 17–24.

  • Morillas Barrio, C., Munoz-Organero, M., & Sanchez Soriano, J. (2016). Can gamification improve the benefits of student response systems in learning? An experimental study. IEEE Transactions on Emerging Topics in Computing, 4(3), 429–438.

  • Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218.

  • Papert, S. (1998). Does easy do it? Game Developer Magazine, September, 88.

  • Pivec, M., Dziabenko, O., & Schinnerl, I. (2003). Aspects of game-based learning. In Proceedings of the I-KNOW ‘03 Conference, Graz, Austria, July 2–4.

  • PollEveryWhere, 2017 http://www.polleverywhere.com/. Accessed 12 Nov 2017.

  • Preszler, R. W., Dawe, A., Shuster, C. B., & Shuster, M. (2007). Assessment of the effects of student response systems on student learning and attitudes over a broad range of biology courses. CBE-Life Sciences Education, 6(1), 29–41.

  • Qwizdom, 2017 http://qwizdom.com/?lang=fa. Accessed 12 Nov 2017.

  • Sadler, D. R. (2010). Beyond feedback: Developing student capability in complex appraisal. Assessment & Evaluation in Higher Education, 35(5), 535–550.

  • Sailer, M., Hense, J. U., Mayr, S. K., & Mandl, H. (2017). How gamification motivates: An experimental study of the effects of specific game design elements on psychological need satisfaction. Computers in Human Behavior, 69, 371–380.

  • Siemens, G. (2002). Lessons learned teaching online, http://www.elearnspace.org/Articles/lessonslearnedteaching.htm. Accessed 12 Nov 2017.

  • Socrative, 2017 http://www.socrative.com/. Accessed 12 Nov 2017.

  • Squire, K. (2005). Game-based learning: Present and future state of the field. Masie Center e-Learning Consortium.

  • Stav, J., Nielsen, K., Hansen-Nygard, G., Thorseth, T., & Trondelag, S. (2010). Experiences obtained with integration of student response systems for iPod Touch and iPhone into e-learning environments. Electronic Journal of e-Learning, 8(2), 179–190.

  • Steel, P. (2007). The nature of procrastination: A meta-analytic and theoretical review of quintessential self-regulatory failure. Psychological Bulletin, 133(1), 65–94.

  • Stowell, J. R., & Nelson, J. M. (2007). Benefits of electronic audience response systems on student participation, learning and emotion. Teaching of Psychology, 34(4), 253–258.

  • Stowell, J. R., Oldham, T., & Bennett, D. (2010). Using student response systems (“clickers”) to combat conformity and shyness. Teaching of Psychology, 37(2), 135–140.

  • Sung, H.-Y., & Hwang, G.-J. (2013). A collaborative game-based learning approach to improving students’ learning performance in science courses. Computers & Education, 63, 43–51.

  • Trees, A. R., & Jackson, M. H. (2007). The learning environment in clicker classrooms: Student processes of learning and involvement in large university-level courses using student response systems. Learning, Media and Technology, 32(1), 21–40.

  • Wang, A. I. (2015). The wear out effect of a game-based student response system. Computers & Education, 82, 217–227.

  • Wang, A. I., & Hoang, T. T. (2017). Reaction vs. completeness in game-based learning: Comparing two game modes in a game-based student response system. In Proceedings of the 11th European Conference on Games Based Learning, Graz, Austria, October 5–6.

  • Wang, A. I., Meng, Z., & Sætre, R. (2016). The effect of digitizing and gamifying quizzing in classrooms. In Proceedings of the 10th European Conference on Games Based Learning, Paisley, Scotland, October 6–7.

  • Wash, P. D. (2014). Taking advantage of mobile devices: Using Socrative in the classroom. Journal of Teaching and Learning with Technology, 3(1), 99–101.

  • Whitton, N. (2007). Motivation and computer game based learning. In Proceedings of the Australian Society for Computers in Learning in Tertiary Education, Singapore.

  • Wieman, C., Perkins, K., Gilbert, S., Benay, F., Kennedy, S., Semsar, K., & Simon, B. (2008). Clicker resource guide: An instructors guide to the effective use of personal response systems (clickers) in teaching. Vancouver, BC, Canada: University of British Columbia.

  • Wiliam, D. (2011). What is assessment for learning? Studies in Educational Evaluation, 37, 3–14.

  • Wouters, P., & Oostendorp, H. V. (2013). A meta-analytic review of the role of instructional support in game-based learning. Computers & Education, 60(1), 412–425.

  • Zainol Abidin, H., & Kamaru Zaman, F. H. (2017). Students’ perceptions on game-based classroom response system in a computer programming course. In Proceedings of the IEEE 9th International Conference on Engineering Education (ICEED), Kanazawa, Japan, November 9–10.

Availability of data and materials

The datasets supporting the conclusions of this article are included within the article.

Author information

Contributions

A.G. devised the conceptual idea and design of the SRS game proposed in the paper, implemented the acquisition of data via student questionnaires, analyzed and interpreted the data, and wrote the manuscript.

Corresponding author

Correspondence to Arturo González.

Ethics declarations

Competing interests

The author declares that he has no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

González, A. Turning a traditional teaching setting into a feedback-rich environment. Int J Educ Technol High Educ 15, 32 (2018). https://doi.org/10.1186/s41239-018-0114-1

  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1186/s41239-018-0114-1

Keywords