
Tertiary student attitudes to invigilated, online summative examinations

Abstract

The outcomes of a trial implementation of an invigilated, online examination at a regional university in Australia, and their implications for online education providers, are discussed. Students in a first year online psychology course were offered the opportunity to complete their final examination task online, with invigilation conducted via webcam. About a quarter of the students (125) initially elected to complete the online examination; however, after undertaking a practice online examination, only 29 students (6.3 %) elected to continue in the trial and take the final exam online. The study concluded that many students find the idea of sitting high-stakes examinations online substantially challenging. While lower costs and time requirements were motivations, many found the process difficult because of technical problems and insufficient support. ICT infrastructure and reliable connectivity remain significant barriers to the successful completion of online examinations under secure, proctored conditions.

Conceptual framework

E-assessment embraces a wide range of student assessment-related activity, from online essay submission to fully automated, computer-marked online examinations. Aligning learning experiences with assessment methods to avoid cognitive conflict (e.g., Brown, Bull, & Pendlebury, 1997) implies that, as the use of e-learning in higher education increases, so should the use of online examinations. Online assessment is also currently topical in the MOOC world (Sandeen, 2013). Naturally enough, students want credit for MOOCs, but MOOC providers are struggling to find inexpensive yet viable ways to offer accreditation that maintains academic integrity.

Assessment using e-testing software is becoming more common in online learning, especially computerised self-assessment quizzes that provide instant, tailored feedback for formative assessment. The advantages of online assessment over traditional, paper-based assessment are widely recognised: lower long-term costs, instant feedback to students, greater flexibility with respect to location and timing, improved reliability with machine marking, improved impartiality, and enhanced question styles that incorporate interactivity and multimedia (Boyle, 2005; James, McInnis, & Devlin, 2002). Nevertheless, online testing is rarely employed for summative assessment in higher education.

The lack of widespread use of online summative assessment is almost certainly associated with its perceived risks, particularly security and authentication issues. Thus far, the alternative tools used in MOOCs to measure learning outcomes, such as learning analytics and digital badges awarded for completion, participation or on the basis of peer assessment, also lack credibility and have not been widely accepted as evidence of learning (Bates, 2014a). Indeed, most institutions will not accept certificates from MOOCs for admission or academic credit, even those from their own MOOC provisions (Bates, 2014b).

There are a number of reasons for reticence to use online summative assessment. Shaffer (2012) describes assessment as “a particularly thorny aspect of distance education (DE) course delivery; various researchers and practitioners hold strong beliefs with regard to the validity, reliability and fairness of various methods of assessment” (p. 1). Of foremost concern, online learners, being remote, are unverifiable, identified merely by an email address, making it difficult to ensure that the person taking the assessment online is who they claim to be. There has long been a concerted effort to find automated ways to ensure candidate authenticity, from monitoring aspects of a student’s interactional style, such as keystrokes, to programs that lock down students’ web browsers during exams. MOOCs have been the impetus for especially rapid prototyping of technology-based assessment solutions.

Online assessment is also widely considered to offer greater potential for cheating (Khare & Lam, 2008; Yates & Beaudrie, 2009). Students not under direct supervision have the opportunity to engage in activities such as collusion with others and reference to inappropriate materials during the assessment, which brings the academic integrity of the assessment process into question. However, this contention is not supported by the research of Yates and Beaudrie (2009), who identified no significant difference in grades between two groups of students in a mathematics course, one of which undertook traditional in-person assessment and the other online, unproctored assessment (although Englander, Fask, and Wang (2011) challenged the methodology of this study in terms of sample selection, choice of measure of student performance, inability to ensure identical exam environments in different contexts, and the evolution of educational materials over the long period of the study). It has also been argued that an appropriate pedagogical model (e.g., use of constructed responses) can substantially reduce the opportunity for students to cheat in an online assessment environment (Johnson & Davies, 2012; Khare & Lam, 2008).

Indeed, online tests are relatively easy to cheat on (Winslow, 2002). Most researchers recommend monitoring the online examination with a human proctor or electronic proctoring software, coupled with biometrics to confirm the identity of the test-taker (Bedford, Gregg, & Clinton, 2011; Caldarola & MacNeil, 2009; Chiesl, 2007; Foster, Mattoon, & Shearer, 2008; Harmon, Lambrinos, & Buffolino, 2010; Trenholme, 2006-2007; Watson & Sottile, 2010). Recently, webcams have been trialed as a potential solution to the issues of both authentication and cheating, with companies offering verification technology and webcam proctoring as a service and some MOOCs incorporating this technology (New, 2013a, b). Innovations such as these are improving confidence in credentialing based on online assessment among accrediting agencies and employers (Chapman, 2006) and may eventually lead to more widespread adoption.

It is perhaps the case that more rigorous criticism is levelled at computer-mediated assessment than is usually applied to traditional examination environments. Procedures typically used in examination centres, such as verification by student photo-ID, have proven fallible, and it must be conceded that even when a student submits an essay face-to-face, there is no way of verifying who wrote it. Low technology solutions to cheating abound. Studies have consistently shown that significant cheating occurs in traditional assessment settings and that its incidence continues to grow (McCabe, 2005; Schmelkin, Gilbert, Spencer, Pincus, & Silva, 2008; Whitley, 1998). On the other hand, Barron and Crooks (2005) found little research on the issue of web-based cheating and, therefore, very little to support the contention that cheating in web-based assessment is more common than in traditional settings. What little evidence exists is equivocal: some studies found that students enrolled in online classes were less likely to cheat than those enrolled in face-to-face courses, some found no difference between the two environments, and some found that cheating was significantly greater in an online test or quiz (Grijalva, Nowell, & Kerkvliet, 2006; Stuber-McEwen, Wiseley, & Hoggatt, 2009; Watson & Sottile, 2010). Further research is clearly needed to resolve whether, and to what extent, the prevalence of cheating varies between online and paper-based assessment environments.

It is also true that summative assessments are often high stakes assessments; thus, there is wariness about imposing additional risks and anxieties, such as involving computers in the assessment process. Investigation of the role of technology and how it might impact on assessment is still in its infancy. Ricketts and Wilks (2002) claim that the speed of marking and the immediacy of feedback are the main reasons students accept computer-based assessment, even though they find it difficult to read from a computer screen for long periods. The negative association between increased assessment-related anxiety and academic performance is well established (e.g., Hembree, 1988; Stobart, 2001), and Brosnan (1999) has raised the issue of computer anxiety affecting performance. Engelbrecht and Harding (2004) found that, in the domain of summative assessment, performance in online assessment does not differ significantly from performance in paper-based assessment. However, Ricketts and Wilks (2002) reported that some students feel disadvantaged by online examinations because they find these examinations more stressful or because they dislike computers. The picture is mixed: dyslexic students considered online examinations advantageous, and some students found this format less stressful than paper-based exams (see also Clesham, 2010). It should be acknowledged that paper-based approaches to examination cannot avoid differential effects due to situational or other anxieties, and that this is yet again a case of e-assessment receiving heightened scrutiny. Nonetheless, understanding how students experience an online assessment environment remains a valid avenue of enquiry.

Context of the study

A principal driver for the use of online technologies when delivering education to large cohorts is reduced cost to the institution (Bartley & Golek, 2004; Jung, 2003). This was no less a motivation at the institution under study. However, it was also important that online examinations could reduce time and financial costs and increase convenience for students, as the institution was equally concerned to provide the highest quality, secure, yet comfortable examination experience.

The research reported here was undertaken at an Australian regional university where, currently, more than 30,000 examinations annually are organised externally to the institution, all over the world, at a cost of millions of dollars. It was expected that online examination technology would bring considerable cost savings in the hosting of external exams, with efficiencies in the costs associated with payment of invigilators, venue hire and courier services, as well as the costs of printing exam papers. Some 70–80 % of students at this university study by distance, so course offerings include extensive use of online resources for the delivery of content and for ongoing assessment, such as essays, assignments and formative tasks. However, prior to this study, no use had been made of online approaches for completing final summative examinations. This is, overall, a fairly typical profile for a regional university in Australia.

While tertiary students may be comfortable and experienced in undertaking online study, it cannot be assumed they will demonstrate the same attitudes and have the same subjective experiences when confronted with completing a major assessment task online. This study reports on students’ attitudes to the use of proctored, online assessment for the final summative examination in a first year psychology course. The academic literature, while vast in relation to teaching pedagogies and practices for the delivery of online learning, pays little attention to the issue of online assessment (Khare & Lam, 2008). In light of the extent of engagement that universities have with the delivery of online education, it seems timely to investigate the use of technology in this arena. Security, software usability and administration are the three major issues identified in the use of online assessment (Wilkinson & Rai, 2009). Focusing primarily on student perceptions, this study addresses the first two of these issues.

Method

The trial

Student experience was investigated during a trial implementation of an invigilated, online examination facilitated by a proctoring company. Although the study site is primarily a distance education institution with a substantial contingent of off-campus students, it also has a smaller number of on-campus students who undertake either blended or online learning to complete their courses. The purpose of the evaluation was to assess the suitability and usability of online secured testing technology for students and for administrative and academic staff, from the perspective of the user experience, via a survey of their thoughts and observations regarding the testing set-up process, the testing process and the test environment. The evaluation method was essentially pre-determined by the request to tender: an opt-in/out survey, a post-exam survey and an invigilated online exam, with qualitative and quantitative data collected for analysis.

All students enrolled in an online first year psychology unit were invited to participate in the trial. The invitation email provided a link to an online survey instrument where the student could choose to opt in or out of the online exam trial and answer a short survey of 15 questions outlining the reasons for their choice. A Participant Information Sheet and a statement of implied consent were provided on the first page of the web link. The survey was available online for 17 days. All students were informed prior to involvement that they could withdraw at any time during the project and then complete the paper-based final assessment task under standard supervision conditions.

Six weeks prior to the examination date, participating students were provided with the software and hardware required for the online exam and assisted with set-up. After successful set-up, participating students completed an online practice examination that could be undertaken multiple times. The practice exams were delivered on demand and proctored, so as to replicate all aspects of the real exam except for the questions. Evaluation staff also posed as students and undertook the practice exam.

Participants who had successfully completed the practice examination were guided through registration for the summative online examination. One week after the exam, a follow-up survey was sent to students who had completed the exam online. The post-exam survey asked about students’ experiences of setting up and using the software during the practice sessions as well as during the online exam trial, including the exam process and software performance and reliability. Both open and closed questions were posed in order to achieve a deeper understanding of the reasons, background and context of answers.

At the end of the trial, semi-structured interviews were conducted with the academic coordinating the unit and all staff engaged in supporting students during the project.

The assessment task

The online assessment was simply the paper-based examination typically administered in this course translated to an online version, with no real change in examination technique. It included multiple-choice questions, short-answer constructed responses and longer, short-essay-length constructed responses. Both the online task and the paper-based examination had a two hour time limit, to ensure consistency of examination conditions.

The software

The trial utilized a commercial web-based product from a proctoring company that provided live, online invigilation using remote video monitoring, keystroke biometrics, photo matching and system lockdown to transform any standard personal computer into a secure testing workstation. A webcam allowed a proctor to view students and the surrounding workspace for the duration of the task. During this period, participants were not permitted to move out of the webcam’s field of vision and were only able to have in their possession the materials on a permitted list that was common to both the online and the supervised examination.

Data collection

The opt-in/out survey instrument comprised three initial questions to establish demographic data and the choice of whether the final assessment would be taken online. Respondents then completed 15 Likert-style questions that explored the reasons for their decision. A four-point scale with the descriptors ‘strongly agree’, ‘agree’, ‘disagree’ and ‘strongly disagree’ was used; a ‘neither’ option was added to some items where applicable. Respondents were given the opportunity to make further comments in a single open-ended question. The post-exam survey was similar in format and length, collecting both quantitative and qualitative data, but with a few more demographic and contextual questions, covering students’ computer skills and knowledge, software and hardware, internet connection and exam location.

Results

The focus here is on the student surveys, although there is reference at times to other supporting evidence gathered during the various stages of the evaluation.

Opt-in/out survey

The age distribution of students who completed the survey is shown in Table 1. Of the 456 students enrolled in the target course, 221 (48.5 %) completed the initial survey and comprised 45 (20 %) males and 176 (80 %) females.

Table 1 Age distribution of survey respondents

Of the 221 respondents, 125 (57 %) agreed to participate in the trial and complete the final assessment task online. Table 2 summarises their reasons for preferring to take an exam online, ranking them from highest to lowest agreement. The ‘agree’ and ‘strongly agree’ responses have been combined and converted to a percentage of the total number of responses (125) to assist interpretation.
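To make the derivation of the agreement figures concrete, the sketch below combines Likert counts in the way just described. It is illustrative only: the split between ‘strongly agree’ and ‘agree’ is invented, with only the respondent total of 125 taken from the study.

```python
# Minimal sketch of how the Table 2 percentages are derived: the 'agree' and
# 'strongly agree' counts for an item are combined and expressed as a
# percentage of all 125 opt-in respondents. The counts used below are
# hypothetical placeholders, not the study's raw data.

TOTAL_OPT_IN = 125  # respondents who agreed to take the final exam online


def agreement_pct(strongly_agree: int, agree: int, total: int = TOTAL_OPT_IN) -> float:
    """Combined 'strongly agree' + 'agree' as a percentage of all respondents."""
    return round(100 * (strongly_agree + agree) / total, 1)


# Hypothetical split that reproduces the 94.4 % reported for the top-ranked
# item (118 of the 125 respondents in agreement overall).
print(agreement_pct(strongly_agree=70, agree=48))  # 94.4
```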

Table 2 Reasons students gave for preferring to take an online exam

The main reasons for being interested in taking an online exam included:

  • lower travel time and expense (94.4 %),

  • certainty of arriving at the exam on time (82.8 %),

  • reduced need for time off work (82.4 %),

  • greater comfort (82.4 %),

  • expected lower anxiety levels (76 %), and

  • decreased need for childcare (65.6 %).

About half thought there might be some advantage in using a keyboard and mouse rather than handwriting. About 40 % expected their performance to be better and thought there was less chance that their workspace would present physical distractions, such as poor lighting, heating or cooling. Simply wanting to try something new was only a motivation for about 30 % and, perhaps surprisingly, less than 15 % considered greater flexibility or being able to choose when to sit their exam to be a major factor in their choice. Five responded that their main reason was an interest in assisting with the research.

Table 3 lists the concerns about taking an exam online expressed by the 96 respondents who opted out of the trial. The most common concerns revolved around potential technical problems: interruptions due to technical difficulties (85.4 %) or an unreliable internet connection (75 %). Other disruptions, such as interruptions by other people (65.6 %) or being distracted by their surroundings (59.4 %), were also fairly important deterrents, and about 56 % did not like the idea of working under a webcam for long periods of time. There was moderate (~30–40 %) concern about unfamiliarity with the technology, workspace requirements, lack of personal contact and the inability to seek clarification during the exam. Very few (<25 %) had privacy concerns or were daunted by having to set up their own webcam. One respondent commented that online exams seemed consistent with the online nature of the unit.

Table 3 Concerns of students who chose not to take an online exam

Post-exam survey

Only 29 students ultimately completed the final online assessment task. Due to the small number of participants responding to the post-exam survey, the quantitative data are not presented in detail (see James, 2013 for full results); instead, an overview of key tentative findings is outlined, with reference to comments that relate to the student experience. Given the small sample size, caution is advised regarding the robustness of the findings.

Most students who successfully completed the online examination were in a metropolitan area, used a PC and judged themselves to be competent, although not very technical, computer users. The majority found the software easy to learn and to use, although there were mixed feelings about its user-friendliness. However, while software installation and use presented few problems, there were difficulties with workspace and webcam set-up, establishing facial recognition parameters and maintaining a lengthy live video feed: “….the invigilator stopped the exam because the camera feed wasn't coming through—took 40 min to fix, 40 min of lost time. Problem was at their end.” Establishing keystroke biometrics was less challenging for some, but not for all. For example, one student commented, “the log immediately prior to the exam was difficult, the software did not recognise my face or key strokes it took 5 attempts.” Added to this were compatibility issues for Mac users. Student comments about technical problems demonstrate their frustration:

“Not being able to use my MacBook Pro due to the software for keystrokes not being compatible with safari 6. Had to access a pc to do the exam.”

“I have tried and tried to do the practise test, without luck. I have spoken, emailed and online chatted with [computer company] support many times. The result of a number of days of attempts, using two different MacBooks, is that it just doesn’t work with a Mac”

“Really disappointed in the overall experience. Being online 20 mins early, having to waiting 10 mins before link was available, then it taking over an hour and half to get into the exam, followed by losing 40 mins plus during the exam and having to rush through it to make sure all questions were answered.”

The quality of support available from the proctoring company in resolving technical problems was considered inadequate: “…the instructions did not cater for the issues encountered when using a mac - i.e., setting up the external web cam; the instructions did not advise me to use safari and not chrome for the keystroke recognition and despite a good internet speed the connection kept dropping out…”

Most confirmed their pre-exam expectations of convenience, improved comfort and reduced anxiety, and the majority rated the overall online exam experience using this software as good or excellent. When asked to nominate the best aspect of the online examination process, respondents mentioned only flexibility and convenience. Typical comments were “convenience, less cost on travel, less time needed (no travel time), can choose a date/time” or “being able to participate at a time that works for me, which provided greater focus”.

Comments about the least liked aspects of the online assessment process revealed four themes: being observed, the facial recognition software, technical problems and being given conflicting information. Several students commented that they found it disconcerting to be told they had an illegal exam aid (a piece of blank paper and pens), insisting that the lecturer had advised these were allowed. The lack of afternoon examination slots and the inability to go to the toilet during the exam also received comment.

Most indicated that they would take an online exam again in the future and they would recommend it to other students:

“Once [the software company] have sorted out their compatibility issues I would love to be involved in future testing.”

“Possibly, now I know all the issues that can come up, and the convenience of doing an exam at home, I'd definitely consider it.”

“Yes!!! practical, easy and convenient.”

“I’ve made students of other unis jealous by telling them about it”

Analysis and discussion

In some ways, the most telling numbers in this evaluation are the gross-level statistics (the key percentages are recomputed in the sketch after the list):

  • 456 in the course

  • 262 (57.5 % of cohort) followed the link to choose whether to opt in or out of trial

  • 221 (48.5 % of cohort) completed the opt in/opt out survey

  • 125 (27.4 % of cohort) agreed to participate

  • 106 (23.2 % of cohort/84.8 % of those who opted in) started the practice exam

  • 54 (11.8 % of cohort/50.9 % of students who started practice exam) finished the practice exam

  • 29 (6.3 % of cohort/27.4 % of students who started practice exam/53.7 % of students who completed practice exam) did the final exam online
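As a check on the arithmetic, the conditional percentages quoted in the list and relied on in the discussion below can be reproduced directly from the raw counts. A minimal Python sketch follows; it uses only the counts reported above, and the variable names are mine.

```python
# Reproducing the participation-funnel percentages from the raw counts
# reported above (456 students enrolled in the course).
cohort, opted_in, started_practice, finished_practice, final_online = 456, 125, 106, 54, 29

print(f"opted in:          {100 * opted_in / cohort:.1f}% of cohort")                           # 27.4
print(f"started practice:  {100 * started_practice / opted_in:.1f}% of opt-ins")                # 84.8
print(f"finished practice: {100 * finished_practice / started_practice:.1f}% of practice starters")  # 50.9
print(f"final exam online: {100 * final_online / started_practice:.1f}% of practice starters")       # 27.4
print(f"final exam online: {100 * final_online / finished_practice:.1f}% of practice finishers")     # 53.7
```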

The fact that fewer than half of those who responded to our invitation (only 27.4 % of the entire class) agreed to participate in the trial suggests an overall reluctance to engage with the online assessment approach for a high stakes assessment. This is compounded by the fact that only 50.9 % of those who started the practice exam finished it and, finally, only about half (53.7 %) of those who finished it chose to proceed to sit the summative exam online. The high dropout rates associated with the online practice test indicate that substantial issues were experienced when engaging with the online environment in an assessment context.

The post-exam survey shows that about 30 % of those completing their exam online had a very ordinary or bad experience, but to that figure should be added the people who started the practice exam and did not finish it, and those who finished the practice exam but did not take the final exam online. Viewed through this lens, the student experience is presented in a very poor light. The story of the student experience of online exam software is thus not really told by the people who successfully completed the final exam online, but rather by the larger number who engaged with the trial process and then chose not to continue with it.

The opt-in/out survey should give a reasonable picture of the reasons why students do or do not wish to sit online exams. The results of the post-exam survey, on the other hand, are likely far less reliable and apt to give a biased and, as might be expected, positively skewed view of the online exam experience. The impression given by evaluation staff, administrative and support staff, email exchanges and problem logs is that most students who did not complete the practice exam and continue on to the final online exam had experienced technical difficulties. Much more informative would be an exit survey of those who dropped out during the process, so as to understand their experience, which presumably was not positive. This is especially important given that they significantly outnumber those who completed the exam (and hence the post-exam survey); their opinions could therefore substantially alter our picture of the student experience. Although it was possible to identify some of the reasons students may like to take online exams, the factors negatively affecting the student experience during online exams cannot be fully elucidated using this methodology. In reality, what has been achieved is more a snapshot of a good online exam experience. The 30 % whose experience was mediocre most probably reflect the majority experience more closely.

There appears to be a finite set of perceived (perhaps predictable) advantages to and concerns about the introduction of online exams that should be taken into consideration. Many of the results presented here echo the findings of previous studies. The reduction in costs often associated with online study (Bartley & Golek, 2004; Jung, 2003) was identified as a principal reason for students in this study electing to complete the assessment task online. The perceived reduction in anxiety and examination stress, also frequently identified, supports the findings of Clesham (2010). For students, the potential for technical issues or internet problems overshadows any other concern about taking exams online. Similar concerns have been identified in many other studies (e.g., Valentine, 2002). Based on the outcomes of this study, these concerns are not unfounded.

Unexpectedly, concerns about security and privacy were minimal, although it is unknown whether this was because the implications of the theft or misuse of the personal identification data were not fully understood. Worries about distractions and nervousness about being watched by the webcam were mostly dissipated by the actual online exam experience or countered by perceived benefits.

Comments suggesting improvements to the online exam process reveal some of the major problem areas still to be addressed: better facial recognition and login procedures, better processes for establishing agreed rules to ensure consistency, more comprehensive help and, of course, improvements to the software so that it supports Mac computers, along with better communication of its limitations in relation to Macs.

However, even those students who experienced technical difficulties or discomfort did not appear adversely affected in their overall view of the process. As an example, one student commented, “slightly un-nerving since I couldn't see the person watching me; however, the benefits and convenience of sitting the exam online far outweighed the awkwardness.”

Conclusions

The findings of this study have limited generalisability because the participants were drawn only from a first year psychology course. The primary conclusion that can be drawn is that students in the first year of tertiary study, many of whom would be inexperienced in the online education environment, find the idea of sitting high-stakes examinations online substantially challenging. While the advantages of lower cost and reduced assessment anxiety motivate some students to engage with online examinations, the majority are clearly concerned about technical difficulties and internet connectivity. The large reduction in participation following the practice online examination also indicates that students will disengage from the process very quickly when their experience is not satisfactory.

Where technology is employed as part of a high stakes assessment process, it must be effective in performing the role assigned to it. While the facial recognition software used in this study to authenticate the identity of students performed well overall, having the software fail to positively identify even a small number of students, or take multiple attempts to recognize a student, does not instill confidence.

Student satisfaction with online learning has been demonstrated to be strongly influenced by the amount of support available from academic staff (Alexander, 2001; Fredericksen, Pickett, Shea, Pelz, & Swan, 2000). Where institutions engage commercial organizations to provide and support the software used for online examinations, that commercial organization must provide high quality support.

Valentine (2002) describes the quality of online instruction as being based on preparation and an understanding of the needs of students; this is especially important in high stakes online assessment. Considering all the data from this study, particularly participant comments, it is apparent that some problems could, or should, be addressed before exposing students to an online assessment environment. It may be necessary to re-consider some aspects of exam design. For example, good practice for e-examinations (British Standard 23988) suggests that no online exam should last more than 90 min without a break and that, if a longer exam is needed, it should be split into two parts with a break between them. Most essential is thorough testing of the assessment environment to ensure that technical and internet connectivity challenges are identified and rectified prior to implementation. It is unacceptable to have students in a remote location, in a high stakes assessment situation, dealing with the challenges described in this study. Appropriate design, procedures and pedagogies must be developed and implemented before students are exposed to online summative assessment, and students need adequate training and support to prepare for taking online examinations.

Until the reliability of ICT infrastructure improves, it is difficult to imagine wide-scale implementation of online, proctored, summative examinations in Australia. For now, secure examination with identity authentication remains a labour-intensive and costly pursuit. It may be time to stop searching for the elusive, fool-proof, automated authentication system and instead to consider approaches that adopt different pedagogical models for assessing learning (Struyven et al., 2005; Weller, 2002) and that change the culture of cheating, as well as to lobby and re-educate quality assurance agencies and accrediting organisations about appropriate alternatives to summative examinations as assessment of learning.

References

  • Alexander, S. (2001). E-learning developments and experiences. Education and Training, 43(4/5), 240–248.

  • Barron, J., & Crooks, S. M. (2005). Academic integrity in web-based distance education. TechTrends, 49(2), 40–45.

  • Bartley, S., & Golek, J. (2004). Evaluating the cost effectiveness of online and face-to-face instruction. Educational Technology & Society, 7(4), 167–175.

  • Bates, T. (2014a). A review of MOOCs and their assessment tools. Online Learning and Distance Education Resources, November 8, 2014.

  • Bates, T. (2014b). The strengths and weaknesses of MOOCs: Part 2: Learning and assessment, November 7, 2014.

  • Bedford, D. W., Gregg, J. R., & Clinton, M. S. (2011). Preventing online cheating with technology: a pilot study of remote proctor and an update of its use. Journal of Higher Education Theory and Practice, 11(2), 41–58.

  • Boyle, A. (2005). Sophisticated tasks in E-Assessment: What are they? And what are their benefits? Paper presented at 9th CAA Conference 2005. Retrieved from http://www.caaconference.com/pastConferences/2005/proceedings/BoyleA2.pdf

  • Brosnan, M. (1999). Computer anxiety in students: should computer-based assessment be used at all? In S. Brown, P. Race, & J. Bull (Eds.), Computer-assisted assessment in higher education (pp. 47–54). Birmingham: Kogan Page.

  • Brown, G., Bull, J., & Pendlebury, M. (1997). Assessing student learning in higher education. London: Routledge.

  • Caldarola, R., & MacNeil, T. (2009). Dishonesty deterrence and detection: How technology can ensure distance learning test security and validity. Proceedings of the European Conference on e-Learning (pp. 108–115).

  • Chapman, G. (2006). Acceptance and usage of e-assessment for UK awarding bodies: A research study. In Proceedings of the 10th CAA International Computer Assisted Assessment Conference (pp. 101–103). Loughborough University.

  • Chiesl, N. (2007). Pragmatic methods to reduce dishonesty in web-based courses. Quarterly Review of Distance Education, 8(3), 203–211.

  • Clesham, R. (2010). Changing assessment practices resulting from the shift towards on-screen assessment in schools. Doctor of Education, University of Hertfordshire.

  • Engelbrecht, J., & Harding, A. (2004). Combining online and paper assessment in a web-based course in undergraduate mathematics. Journal of Computers in Mathematics and Science Teaching, 23(3), 217–231.

  • Englander, F., Fask, A., & Wang, Z. (2011). Comment on “The impact of online assessment on grades in community college distance education mathematics courses” by Ronald W. Yates and Brian Beaudrie. American Journal of Distance Education, 25(2), 114–120.

  • Foster, D., Mattoon, N., & Shearer, R. (2008). Using multiple online security measure to deliver secure course exams to distance education students: A white paper. Retrieved from https://www.ou.nl/Docs/Campagnes/ICDE2009/Papers/Final_Paper_101Walker.pdf

  • Fredericksen, E., Pickett, A., Shea, P., Pelz, W., & Swan, K. (2000). Student satisfaction and perceived learning with on-line courses: principles and examples from the SUNY learning network. Journal of Asynchronous Learning Networks, 4(2), 7–41.

  • Grijalva, T. C., Nowell, C., & Kerkvliet, J. (2006). Academic honesty and online courses. College Student Journal, 40(1), 180–185.

  • Harmon, O. R., Lambrinos, J., & Buffolino, J. (2010). Assessment design and cheating risk in online instruction. Online Journal of Distance Learning Administration, 13(3). Retrieved from http://www.westga.edu/~distance/ojdla/Fall133/harmon_lambrinos_buffolino133.html

  • Hembree, R. (1988). Correlates, causes, effects, and treatment of test anxiety. Review of Educational Research, 58(1), 47–77.

  • James, R. (2013). Kryterion Online Examination Software Trial: Evaluation of Student Experience. PO72 Online Examination Trial Project. Armidale: University of New England, dehub.

  • James, R., McInnis, C., & Devlin, M. (2002). Assessing Learning in Australian Universities. Canberra: Australian Universities Teaching Committee.

  • Johnson, G., & Davies, S. (2012). Unsupervised online constructed-response tests: Maximising student learning and results integrity. Paper presented at the ascilite Conference, Wellington (pp. 400–408).

  • Jung, I. (2003). Cost-effectiveness of online education. In M. Moore & W. Anderson (Eds.), Handbook of distance education (pp. 717–726). London: Lawrence Erlbaum Associates.

  • Khare, A., & Lam, H. (2008). Assessing student achievement and progress with online examinations: Some pedagogical and technical issues. International Journal on E-learning, 7(3), 383–402.

  • McCabe, D. L. (2005). Cheating among college and university students: A North American perspective. International Journal for Educational Integrity, 1(1). Retrieved from http://www.ojs.unisa.edu.au/index.php/IJEI/article/view/14

  • New, J. (2013a). MOOC students to be identified with webcams, ecampus news, September 17th, 2013. Retrieved from http://www.ecampusnews.com/top-news/students-mooc-webcams-018/

  • New, J. (2013b). Has Coursera solved the catch-22 of for-credit MOOCs?, ecampus news, September 19th, 2013.

  • Ricketts, C., & Wilks, S. (2002). Improving student performance through computer-based assessment: insights from recent research. Assessment & Evaluation in Higher Education, 27(5), 475–479.

  • Sandeen, C. (2013). Assessment’s place in the new MOOC world. Research & Practice in Assessment, 8, 5–12.

  • Schmelkin, L. P., Gilbert, K., Spencer, K. J., Pincus, H. S., & Silva, R. (2008). A multidimensional scaling of college students’ perceptions of academic dishonesty. The Journal of Higher Education, 79(5), 587–607.

  • Shaffer, S. (2012). Distance education assessment infrastructure and process design based on international standard 23988. Online Journal of Distance Learning Administration, 15(2). Retrieved from http://www.westga.edu/~distance/ojdla/summer152/shaffer152.html

  • Stobart, G. (2001). The validity of national curriculum assessment. British Journal of Educational Studies, 49(1), 26–39.

  • Struyven, K., Dochy, F., & Janssens, S. (2005). Students’ perceptions about evaluation and assessment in higher education: a review. Assessment & Evaluation in Higher Education, 30(4), 325–341.

  • Stuber-McEwen, D., Wiseley, P., & Hoggatt, S. (2009). Point, click, and cheat: frequency and type of academic dishonesty in the virtual classroom. Online Journal of Distance Learning Administration, 12(3), 1–10.

  • Trenholme, S. (2006-2007). A review of cheating in fully asynchronous online courses: A math or fact-based course perspective. Journal of Educational Technology Systems, 35(3), 281–300.

  • Valentine, D. (2002). Distance learning: Promises, problems, and possibilities. Online Journal of Distance Learning Administration, 5(3). Retrieved from http://www.westga.edu/~distance/ojdla/fall53/valentine53.html

  • Watson, G., & Sottile, J. (2010). Cheating in the digital age: Do students cheat more in online courses? Online Journal of Distance Learning Administration, 13(1), 1–12. Retrieved from http://www.westga.edu/~distance/ojdla/spring131/watson131.html

  • Weller, M. (2002). Assessment issues on a web-based course. Assessment & Evaluation in Higher Education, 27(2), 109–116.

  • Whitley, B. E. (1998). Factors associated with cheating among college students: a review. Research in Higher Education, 39(3), 235–274.

  • Wilkinson, S., & Rai, H. (2009). Mastering the online summative assessment life cycle. In R. Donnelly & F. McSweeney (Eds.), Applied e-learning and e-teaching in higher education (pp. 347–368). Hershey: IGI Global.

  • Winslow, J. (2002). Cheating an online test: methods and reduction strategies. In M. Driscoll & T. Reeves (Eds.), Proceedings of World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education 2002 (pp. 2404–2407). Chesapeake: AACE.

  • Yates, R., & Beaudrie, B. (2009). The impact of online assessment on grades in community college distance education mathematics courses. American Journal of Distance Education, 23(2), 62–70.

Author information

Corresponding author

Correspondence to Rosalind James.

Additional information

Competing interests

The author declares that she has no competing interests.

Authors’ information

Dr Rosalind James was Director of dehub: Online and Distance Education Research Network from 2011 to 2014. Dr James has worked at Australia’s University of New England (UNE) for many years, as a Research Fellow with the DEHub Project and Project 2012: Flexible and Online, and before that as an academic mentor for transitional students and a course co-ordinator and lecturer in the foundational pathway course at UNE’s Teaching and Learning Centre. Rosalind comes from a background as a consultant and lecturer in Archaeology and Environmental Science and has also worked in diverse companies and government departments around the world as a senior manager and technical consultant in the commercial information and communications technology (ICT) arena. Her current research and publications interest is in implementation and integration of ICT in learning, policy and quality assurance in online learning, employability skills and academic professional development. Creativity and critical thinking are important avenues of enquiry that arose during her direction of a large collaborative project to develop a community education portal offering OER for lifelong learning. Dr James is an assessor for the Australian Government Office of Learning and Teaching and co-editor of the International Journal of Educational Technology in Higher Education (ETHE).

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

James, R. Tertiary student attitudes to invigilated, online summative examinations. Int J Educ Technol High Educ 13, 19 (2016). https://doi.org/10.1186/s41239-016-0015-0
