A framework for assessing fitness for purpose in open educational resources
International Journal of Educational Technology in Higher Education volume 13, Article number: 3 (2016)
Abstract
Despite all the benefits claimed for open educational resources (OER), studies reveal that these are used less than anticipated. One reason for this is potential users’ uncertainty over whether the products’ aims, design and processes are fit for the intended purposes. This paper proposes a simple-to-use framework for faculty and students to apply in determining whether OER have fitness for purpose in their teaching and learning. The selection criteria are based upon the benefits claimed in the literature for OER, MERLOT’s framework of evaluation criteria for OER selection and Merrill’s first principles of instruction. The criticality, feasibility and applicability of these criteria were reviewed by 207 OER researchers and users through a cross-regional online survey and subsequent consultations with a small team of experts familiar with researching and using OER. The final 25-item framework was developed in accord with the agreed criteria.
Introduction
It is generally held that making educational resources freely available online under an intellectual property license that permits their free use and re-purposing offers opportunities for people everywhere to share, use and reuse quality materials and tools. However, despite the many OER repositories and the search engines that help locate them, the take-up of OER has fallen short of expectations (Conole, 2012; Murphy, 2013).
This under-utilization is attributed to the confusing multitude of repositories and distribution channels on the Web that make it time consuming or impractical to locate OER (Leacock & Nesbit, 2007; Reynolds, 2012), to a lack of consistent metadata and protocols across repositories (Groom, 2013) and to unclear licensing information making it difficult to distinguish OER from other digital content (Groom, 2013). But many researchers including Conole and Weller (2008), Dhanarajan and Abeywardena (2013), Ehlers (2011), Hylén (2006) and McGill (2013) also attribute this low adoption rate to potential users’ uncertainty over the appropriateness of the content, instructional design, pedagogy and ease of use of these OER.
Downes (2004) stresses the need for a method of evaluating and selecting OER that have not been reviewed by a reliable independent agency. Many attempts have been made to provide such methods, for example by Leacock and Nesbit (2007), Baya'a et al. (2009), Achieve (2011), the Quality Matters Program (2011), Vladoiu (2011), Ehlers (2011), Camilleri and Tannhäuser (2012), McGill (2013) and Khanna and Basak (2013). However, Leacock and Nesbit (2007) posit that the more highly detailed approaches to evaluation are a significant barrier to OER use; Jung and Lee (2014) conclude that most OER assessment systems are too complex for easy use; and, after reviewing the available frameworks for evaluating OER, Kawachi (2013) concluded that many were no more than ad hoc subjective listings of whatever came to the authors’ minds at the time, with little overarching organization.
In the light of these findings, the authors saw the need for a simple-to-use framework for evaluating OER for use by:
- OER developers “marketing” their products.
- Faculty, students and other potential users of OER assessing whether particular resources suited their requirements.
- OER users assessing and informing others on whether these resources had met their requirements.
It was clear that the selection criteria in this framework had to be relevant to these various users’ needs, easy to understand, reliable and agreed to by experts in the field. It was decided to base these criteria upon:
- The educational benefits claimed for OER in the literature.
- MERLOT’s evaluation criteria for OER content.
- Merrill’s first principles of instruction.
To arrive at an agreed set of criteria for evaluating and selecting OER, the authors conducted a cross-regional online survey of researchers and users familiar with OER, then subjected their recommendations to further review and refinement by a smaller group of experts, and finally consolidated the agreed criteria into a simple framework.
Fitness for purpose
The authors avoided employing the term “quality” in this study. As Harvey and Green (1993) observe, this word can mean five different things: excellence, consistency, fitness for purpose, value for money and transformation. Excellence and transformation are over-used techno-bureaucratic terms that may prove difficult to define and measure. Fitness for purpose, which refers to the fulfilment of certain expectations, is a far more readily understood and practical term. The notion derives from the manufacturing industry but has been adopted by many educational quality agencies. For these reasons, the authors judged this to be the most appropriate term to use in evaluating OER.
An examination of the literature (Butcher, 2011; Dhanarajan & Abeywardena, 2013; Groom, 2013; McGreal, 2013; UNESCO/COL, 2011; Wiley, 2008) revealed that OER can serve seven principal purposes:
1) Providing open, accessible and quality content for a wider community of teachers and learners.
2) Sharing best practice and helping to avoid re-inventing the wheel.
3) Helping developing countries improve and expand learning for development opportunities.
4) Offering flexible non-formal and informal knowledge and skills accumulation pathways to formal study.
5) Providing learning opportunities for geographically, socially or economically excluded students and non-traditional and work-based learners.
6) Improving the quality of conventional and online education by achieving greater awareness of open and inclusive educational practices and varied perspectives on fields of study.
7) Enabling collaboration between institutions, sectors, disciplines and countries.
As a first step in evaluating OER, it was decided that the creators or users would need to determine which, or which mix, of these purposes they had in mind.
Unbundling fitness for purpose
To work towards a simple set of selection criteria, the authors started with two existing, highly regarded frameworks: the MERLOT evaluation criteria for OER and Merrill’s first principles of instruction.
MERLOT’s evaluation criteria
Sperling (2011) and Anderson-Wilk and Hino (2011) regard MERLOT’s system for evaluating online scholarly multimedia publishing as the most relevant, rigorous and successful example of its type in the field. MERLOT is a curated collection of free and open online teaching, learning and faculty development services, contributed to and used by an international education community. Its evaluation criteria have been adopted and used in a variety of contexts (Cechinel & Sánchez-Alonso, 2011; Leacock & Nesbit, 2007; Levin & Smith-Gratto, 2002). To be posted to the MERLOT site, an online material must receive a written review from two reviewers and an average rating of at least three stars (five being the highest) on three dimensions: content, potential effectiveness of the teaching and learning materials and ease of use (MERLOT, 2014).
- Content is assessed in terms of the correctness, significance and currency of the concepts/principles and their match with the learners’ characteristics and learning needs.
- The potential teaching/learning effectiveness of the materials is judged by the inclusion of learning objectives, information on how the materials can be used by specific learner groups, the design of the materials and how easily these can be integrated into, and improve, a variety of courses and teaching/learning processes.
- Ease of use by teachers and students is evaluated by considering accessibility, layout and design for effective learning, consistency in user interface and navigation, content presentation that matches learners’ prior knowledge and abilities and ease of re-use in other contexts.
Merrill’s five first principles of instruction
Bates (2011), Masterman and Wild (2011) and Groom (2013) stress that OER must be founded upon sound instructional principles. Merrill’s (2002) first principles of instruction were abstracted from a wide range of systematically reviewed instructional design theories, models and research. These principles are always true under appropriate conditions regardless of the methods or models that implement them (pp. 44–45):
- Learners are engaged in solving real-world problems.
- Existing knowledge is activated as a foundation for new knowledge.
- New knowledge is demonstrated to the learner.
- New knowledge is applied by the learner.
- New knowledge is integrated into the learner’s world.
Methodology
Working in consultation with five faculty members and five researchers well versed in OER, the authors developed a provisional set of selection criteria based on the benefits of OER claimed in the literature, MERLOT’s three dimensions of quality and the detailed review areas specified under each dimension, and Merrill’s first principles of instruction and the application strategies proposed for each principle. The research proposal was then submitted to the research ethics committee at the authors’ university and approval was duly granted.
The initial survey
Persons with the expertise and experience to judge the criticality and feasibility of the proposed selection criteria were identified by means of snowball sampling: inviting a pool of initial informants to contact other eligible reviewers through their social networks, who in turn make further connections (http://sociology.about.com/od/Types-of-Samples/a/Snowball-Sample.htm).
First, invitations to participate in the survey were sent out to well-known researchers, leaders of open education movements and authors of books and articles on OER. These persons were also invited to recommend further individuals and organizations involved in OER, such as MERLOT and the OpenCourseWare Consortium. These contacts in turn were asked to post requests for further reviewers through their various networks. This process enabled the recruitment of 207 reviewers, whose characteristics are shown in Table 1.
The survey instrument was then sent out to these reviewers. It comprised four sections:
- Section 1 sought information on the reviewers’ locations, gender, principal roles and degree of experience in using OER in their own teaching and learning.
- Section 2 invited the reviewers to rate the criticality and feasibility of 18 proposed OER selection criteria, based upon MERLOT’s three dimensions and the specific review questions under each dimension, in terms of their own teaching and learning (on a scale from one to five, with five the highest).
- Section 3 asked the reviewers to rate the feasibility and usefulness of 10 proposed OER selection criteria, based upon Merrill’s first principles of instruction and the application strategies for each principle, again in terms of their own teaching and learning (on the same one-to-five scale).
- Section 4 invited the reviewers to offer their own comments on the proposed criteria, how these might be improved, what further criteria would be useful and how else the fitness for purpose framework could be improved.
The initial survey findings
The authors analyzed the criticality and feasibility ratings and additional comments received from the 207 reviewers with the aim of determining which of the proposed criteria should be retained, revised or deleted. Selection criteria rated above 3.6 (the average of all 18 mean scores) on one or both measures were to receive further consideration by means of validation interviews.
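For readers wishing to apply the same retention rule to their own ratings data, the following minimal Python sketch illustrates one reading of it; the criterion identifiers and mean ratings are invented placeholders, not the study’s data.

```python
# Illustrative retention rule: a criterion goes forward to the validation
# interviews when its mean criticality or mean feasibility rating exceeds
# the grand mean of the criterion means (3.6 in the study). The ratings
# below are invented placeholders, not the study's data.

criteria = {  # criterion id -> (mean criticality, mean feasibility)
    1: (4.2, 3.9),
    2: (4.0, 3.8),
    3: (3.9, 3.4),
    4: (3.3, 3.1),
}

# One reading of "an average of all 18 mean scores": the grand mean of
# the criticality means.
threshold = sum(crit for crit, _ in criteria.values()) / len(criteria)

for cid, (crit, feas) in criteria.items():
    if crit > threshold or feas > threshold:
        print(f"Criterion {cid}: retain for further consideration")
```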
Table 2 shows the criticality and feasibility ratings for the 18 suggested criteria based upon MERLOT’s three dimensions. In Dimension 1 (Content), three criteria (1, 2 and 6) were rated >3.6 on the 5-point scale for both criticality and feasibility, and three criteria (3, 4 and 5) >3.6 for criticality only. In Dimension 2 (Potential Teaching/Learning Effectiveness), only one criterion (7) was rated >3.6 on both counts, while two criteria (8 and 9) were rated >3.6 for criticality only. In Dimension 3 (Ease of Use), criteria 13, 14 and 16 were rated >3.6 for both criticality and feasibility, and criteria 15 and 18 for criticality only.
A set of ANOVA tests revealed no significant differences in these ratings by region, gender or experience in using OER, suggesting that these criteria would be appropriate in any or all contexts. An analysis of the reviewers’ 89 sets of open comments confirmed that most of these criteria would be appropriate for most situations. It was noted, however, that many reviewers indicated that they used OER as supplementary rather than core teaching/learning resources, which might have influenced these opinions.
In terms of the pedagogical appropriateness of the OER, as Table 3 shows, nine of the 10 items received a rating of >3.8 (the average of all 10 mean scores) on the 5-point scale (see grey shading). Item 7, “Providing practice/test/self-assessment items,” was rated only 3.49. However, after further consideration, the authors and the smaller panel of experts agreed that it was important to retain this criterion because such activities are essential for informing learners about what and how they should be learning and how well they are performing.
Again, the ANOVA tests revealed no significant differences by region, gender or preparedness. Comments by the reviewers reinforced the point that the pedagogical methods implicit in the OER should be appropriate for all learners regardless of age, gender, language, ethnicity, race, culture or religious conviction.
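The tests reported here are standard one-way analyses of variance. As an illustration only, the sketch below shows how such a test could be run with SciPy’s f_oneway on ratings grouped by reviewer region; the group names and values are invented, not the study’s data.

```python
# One-way ANOVA testing whether mean criterion ratings differ by reviewer
# region, analogous to the tests reported above. Data are placeholders.
from scipy import stats

ratings_by_region = {
    "Asia":          [4, 5, 3, 4, 4, 5],
    "Europe":        [4, 4, 5, 3, 4, 4],
    "North America": [5, 4, 4, 4, 3, 4],
}

f_stat, p_value = stats.f_oneway(*ratings_by_region.values())
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
if p_value >= 0.05:
    print("No significant difference between regions at the 0.05 level")
```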
In the light of the survey findings, a second, shorter version of the survey instrument (see Table 4) was developed for further review by a smaller number of researchers and users of OER. In this version, because they were considered inseparable in practice, the OER selection criteria and pedagogical strategies were combined into three categories (Legal and Technical; Content; Pedagogy). To ensure that the inputs represented different regions and cultures, comments were sought by email and Skype from five OER experts in Malaysia, Australia, South Africa, Germany and the USA. Their feedback led to further restructuring and rewording of the criteria. A further round of comments on the draft framework involved the same five experts plus an additional expert in Swaziland. By these means, a consensus on the selection criteria for the fitness for purpose framework was reached.
Designing the framework
The authors and three consultants who had participated in the initial survey and two subsequent rounds of validation then considered how the final set of criteria should be integrated, structured and worded to achieve a comprehensive but user-friendly fitness for purpose framework. It was agreed that this should contain 25 items in four dimensions: Purposes, Ease of Use, Content and Pedagogy (Table 5).
It was envisaged that users would:
1) Determine which, or which mix, of the possible purposes matched their needs.
2) Tick the boxes to indicate which of the criteria under ease of use, content and pedagogy were being met, or
3) Rate these items on a one-to-five basis (5 being the highest), and
4) Add short comments for their own use or to advise others.
It is not intended that all 25 criteria should necessarily be assessed in any one selection procedure – only those items that are judged to be critical in assessing fitness for purpose in the context of a particular course or teaching and learning activity.
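To make the envisaged procedure concrete, here is a minimal sketch of how the tick-box, rating and comment options could be represented in software. The Item class, its field names and the two sample items are our own illustration rather than anything specified in the paper; the actual item texts are those listed in Table 5.

```python
# A hypothetical representation of the 25-item framework's usage protocol:
# each item may be ticked as met, rated one to five (five the highest),
# and annotated with a short comment. Item texts here abbreviate Table 5.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Item:
    dimension: str                # Purposes, Ease of Use, Content or Pedagogy
    text: str
    met: Optional[bool] = None    # tick-box mode
    rating: Optional[int] = None  # rating mode, 1-5
    comment: str = ""

framework = [
    Item("Content", "The content is accurate and up to date"),
    Item("Ease of Use", "The OER can be reused, revised, remixed and shared"),
    # ... the remaining items from Table 5 ...
]

# Only the items judged critical for a particular course need be assessed.
framework[0].rating = 5
framework[0].comment = "Checked against the current syllabus"

for item in framework:
    if item.rating is not None:
        print(f"[{item.dimension}] {item.text}: {item.rating} ({item.comment})")
```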
Discussion
Because the evidence suggests that one of the factors inhibiting wider use of OER is a lack of information on, and doubts about, their fitness for purpose, the authors sought to create a comprehensive but simple-to-use framework for various users by developing and testing selection criteria in regard to benefits, ease of use, content and pedagogy. The fact that the ANOVA tests revealed no significant differences in the ratings by region, role, gender or degree of experience in using OER suggests that these 25 selection criteria can be applied to a range of OER in a variety of contexts by:
1) Developers of OER, to help potential users comprehend the intended purposes and benefits, the nature of the content and the pedagogical or instructional design principles embedded in the resources.
2) Course planners, faculty and students, who can use this framework as an aide-memoire in considering their needs and judging whether particular OER will meet these requirements.
3) Users, after using particular OER, for judging their quality, fitness for purpose, educational outcomes and other benefits, and (most importantly) providing feedback to the developers.
The study revealed that the five most critical selection criteria are:
1) The OER accords with open content licenses (e.g., a Creative Commons license) and is properly attributed.
2) The OER can be easily reused, revised, remixed and shared with other materials to meet particular teaching/learning needs.
3) The goals and purposes of the OER are easily understood.
4) The content is accurate and up to date.
5) The content covers educationally significant concepts and leads to deep understanding.
Some faculty and students may be unfamiliar with the issues of copyright and fair use. It would therefore be advisable for first-time users to refer to the Guide to Open Content Licenses (http://www.theartgalleryofknoxville.com/ocl_v1.2.pdf) and the Creative Commons website (http://creativecommons.org/licenses/). Having assured themselves that the legal and technical requirements have been met, their next step should be to use this framework to confirm that the OER meets their requirements. They may also need to consider whether they have the technical skills and means to reuse, edit and share the OER in question. For example, not everyone has the capacity to convert scanned pages to a text file, as opposed to editing Word documents or PowerPoint materials.
Conclusion
Ehlers and Conole (2010) observe that the use of OER has not yet reached a critical threshold in higher education. They attribute this to the current focus on building access to digital content rather than on how OER can support innovation and change in teaching and learning. It is hoped that this simple-to-use assessment framework will encourage and support more and better use of OER.
The authors now invite colleagues to pilot test, apply and critique this framework, compare it with other tools and report back on its usefulness.
It is important to note that this study did not take account of “granularity”. OER range from entire courses and massive open online courses to small-scale learning materials, games, simulations, quizzes and other digital resources. So the conclusion that these items can be universally applied in all forms and sizes of OER and cover all of the required criteria still needs to be tested with a range of teachers in a variety of on-campus and off-campus contexts.
It is also important to ascertain whether this framework will be helpful for students.
References
Achieve (2011) Rubrics for evaluating OER objects. Retrieved from http://www.achieve.org/files/AchieveOERRubrics.pdf
Anderson-Wilk M, Hino J (2011) Achieving rigor and relevance in online multimedia scholarly publishing. First Monday 16(12). Retrieved from http://firstmonday.org/ojs/index.php/fm/article/view/3762/3119
Bates T (2011, February 6) OERs: The good, the bad and the ugly [Web log post]. Retrieved from http://www.tonybates.ca/2011/02/06/oers-the-good-the-bad-and-the-ugly/
Baya'a N, Shehade HM, Baya'a AR (2009) A rubric for evaluating web-based learning environments. Br J Educ Technol 40(4):761–763. doi:10.1111/j.1467-8535.2008.00864.x
Butcher N (2011) A basic guide to open educational resources (OER). Retrieved from http://unesdoc.unesco.org/images/0021/002158/215804e.pdf
Camilleri AF, Tannhäuser AC (2012) Open learning recognition: Taking open educational resources a step further. Retrieved from http://cdn.efquel.org/wp-content/uploads/2012/12/Open-Learning-Recognition.pdf?a6409c
Cechinel C, Sánchez-Alonso S (2011) Analyzing associations between the different ratings dimensions of the MERLOT repository. Interdiscip J E Learn Learn Objects 7:1–9
Conole G (2012) Fostering social inclusion through open educational resources [Editorial]. Distance Educ 33(2):131–134. doi:10.1080/01587919.2012.700563
Conole G, Weller M (2008) Using learning design as a framework for supporting the design and reuse of OER. J Interact Media Educ 5(1). Retrieved from http://oro.open.ac.uk/12150/1/jime-2008-05.pdf
Dhanarajan G, Abeywardena IS (2013) Higher education and open educational resources in Asia: an overview. Retrieved from http://eprint.wou.edu.my/49/1/Open%20educational%20resources.An%20Asian%20perspective.pdf#page=23
Downes S (2004) Quality standards: It’s all about teaching and learning? Presentation at NUTN, Kennebunkport, June 4, 2004. Retrieved from http://ww2.odu.edu/dl/demminger/nutn_test2/downes.pdf
Ehlers U-D (2011) Extending the territory: from open educational resources to open educational practices. J Open Flexible Distance Learn 15(2):1–10. Retrieved from http://journals.akoaotearoa.ac.nz/index.php/JOFDL/article/viewFile/64/46
Ehlers U-D, Conole GC (2010) Open education practices: Unleashing the power of OER. Paper presented at the UNESCO workshop on OER, Windhoek, Namibia. Retrieved from http://efquel.org/wpcontent/uploads/2012/03/OEP_Unleashing-the-power-of-OER.pdf
Groom C (2013) A guide to open educational resources. Retrieved from http://www.jisc.ac.uk/publications/programmerelated/2013/Openeducationalresources.aspx#Finding%20and%20sharing%20open%20educational%20resources
Harvey L, Green D (1993) Defining quality. Assess Eval Higher Educ 18(1):9–34
Hylén J (2006) Open educational resources: Opportunities and challenges. Retrieved from http://www.oecd.org/dataoecd/5/47/37351085.pdf
Jung IS, Lee T (2014) Quality assurance standards for e-ASEM OER in open and distance learning. e-ASEM report. Korea National Open University, Seoul
Kawachi P (2013) Open educational resources: Other frameworks. Retrieved from http://www.open-ed.net/oer-quality/others.pdf
Khanna P, Basak PC (2013) An OER architecture framework: need and design. Int Rev Res Open Distance Learn 14(1):65–83. Retrieved from http://www.irrodl.org/index.php/irrodl/article/view/1355/2427
Leacock TL, Nesbit JC (2007) A framework for evaluating the quality of multimedia learning resources. Educ Technol Soc 10(2):44–59. Retrieved from http://www.ifets.info/journals/10_2/5.pdf
Levin B, Smith-Gratto K (2002) Tasting fine wine online for MERLOT: Criteria for evaluating multimedia educational resources for learning and online teaching. In: Willis D, Price J, Davis N (eds) Proceedings of Society for Information Technology & Teacher Education International Conference 2002. AACE, Chesapeake, VA, pp 2372–2374
Masterman L, Wild J (2011) OER impact study. Retrieved from http://www.webarchive.org.uk/wayback/archive/20140614114910/http://www.jisc.ac.uk/media/documents/programmes/elearning/oer/JISCOERImpactStudyResearchReportv1-0.pdf
McGill L (2013) Quality considerations. Open Educational Resources infoKit. Retrieved from https://openeducationalresources.pbworks.com/w/page/24838164/Quality%20considerations
McGreal R (2013) Creating, using and sharing open educational resources. Retrieved from https://www.fosteropenscience.eu/sites/default/files/pdf/514.pdf
MERLOT (2014) MERLOT peer review information. Retrieved from http://info.merlot.org/merlothelp/index.htm
Merrill MD (2002) First principles of instruction. Educ Technol Res Dev 50(3):43–59. Retrieved from http://mdavidmerrill.com/Papers/firstprinciplesbymerrill.pdf
Murphy A (2013) Open educational practices in higher education: Institutional adoption and challenges. Distance Educ 34(2):201–217. doi:10.1080/01587919.2013.793641
Quality Matters Program (2011) Quality Matters Rubric Standards 2011-2013. Retrieved from http://www.moodlerooms.com/sites/default/files/slideshow/slides/kari_walters_qm_rubric.pdf
Reynolds R (2012, May 15) Obstacles to faculty adoption of OER and open textbooks [Web log post]. Retrieved from http://thelearninglot.blogspot.com.au/2012/05/obstacles-to-faculty-adoption-fo-oer.html
Sperling B (2011) Finding and using MERLOT and OER’s: a model of open education resource services. Retrieved from http://sloanconsortium.org/conferences/2011/aln/finding-and-using-merlot-and-oer%E2%80%99s-model-open-education-resource-services
UNESCO/COL (2011) Guidelines for OER in higher education. Retrieved from http://www.unesco.org/new/en/communication-and-information/resources/publications-and-communication-materials/publications/full-list/guidelines-for-open-educational-resources-oer-in-higher-education/
Vladoiu M (2011) Towards a quality model for open courseware and open educational resources. Retrieved from http://www.unde.ro/monica/papers/04-KMEL%202012-springerLNCS.pdf
Wiley D (2008) OER handbook for educators. Retrieved from http://wikieducator.org/OER_Handbook/educator_version_one
Acknowledgements
This research was funded by a 2013 Grant-in-Aid for Scientific Research (Kakenhi) from the Japan Society for the Promotion of Science.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.