
A framework for assessing fitness for purpose in open educational resources

Abstract

Despite all the benefits claimed for open educational resources (OER), studies reveal that these are used less than anticipated. One reason for this is potential users’ uncertainty over whether the products’ aims, design and processes are fit for the intended purposes. This paper proposes a simple-to-use framework for faculty and students to apply in determining whether OER have fitness for purpose in their teaching and learning. The selection criteria are based upon the benefits claimed in the literature for OER, MERLOT’s framework of evaluation criteria for OER selection and Merrill’s first principles of instruction. The criticality, feasibility and applicability of these criteria were reviewed by 207 OER researchers and users through a cross-regional online survey and subsequent consultations with a small team of experts familiar with researching and using OER. The final 25-item framework was developed in accordance with the agreed criteria.

Introduction

It is generally held that making openly licensed educational resources freely available online under an intellectual property license that permits their free use and re-purposing offers opportunities for people everywhere to share, use and reuse quality materials and tools. However, despite the many OER repositories and search engines to help locate these repositories, the take-up of OER has fallen short of expectations (Conole, 2012; Murphy, 2013).

This under-utilization is attributed to the confusing multitude of repositories and distribution channels on the Web that make it time-consuming or impractical to locate OER (Leacock & Nesbit, 2007; Reynolds, 2012), to a lack of consistency in repositories’ metadata and protocols (Groom, 2013) and to unclear licensing information making it difficult to distinguish OER from other digital content (Groom, 2013). But many researchers including Conole and Weller (2008), Dhanarajan and Abeywardena (2013), Ehlers (2011), Hylén (2006) and McGill (2013) also attribute this low adoption rate to potential users’ uncertainty over the appropriateness of the content, instructional design, pedagogy and ease of use of these OER.

Downes (2004) stresses the need for some method of evaluating and selecting OER that have not been reviewed by a reliable independent agency. Many attempts have been made to do this, for example by Leacock and Nesbit (2007), Baya'a et al. (2009), Achieve (2011), Quality Matters Program (2011), Vladoiu (2011), Ehlers (2011), Camilleri and Tannhäuser (2012), McGill (2013) and Khanna and Basak (2013). However, Leacock and Nesbit (2007) posit that the more highly detailed approaches to evaluation are a significant barrier to OER use; Jung and Lee (2014) conclude that most OER assessment systems are too complex for easy use; and after reviewing the available frameworks for evaluating OER, Kawachi (2013) concluded that many were no more than ad hoc subjective listings of whatever came to the authors’ minds at the time, with little overarching organization.

In the light of these findings, the authors saw the need for a simple-to-use framework for evaluating OER for use by:

  • OER developers “marketing” their products.

  • Faculty, students and other potential users of OER assessing whether particular resources suited their requirements.

  • OER users assessing and informing others on whether these resources had met their requirements.

It was clear that the selection criteria in this framework had to be relevant to these various users’ needs, easy to understand, reliable and agreed to by experts in the field. It was decided to base these criteria upon:

  • The educational benefits claimed for OER in the literature.

  • MERLOT’s evaluation criteria for OER content.

  • Merrill’s first principles of instruction.

To arrive at an agreed set of criteria for evaluating and selecting OER, the authors conducted an inter-regional online survey of researchers and users familiar with OER, then subjected their recommendations to further review and refinement by a smaller group of experts, and finally consolidated the agreed criteria into a simple framework.

Fitness for purpose

The authors avoided employing the term “quality” in this study. As Harvey and Green (1993) observe, this word means five different things: excellence, consistency, fitness for purpose, value for money and transformation. Excellence and transformation are over-used techno-bureaucratic terms that may prove difficult to define and measure. Fitness for purpose is a far more readily understood and practical term. It refers to the fulfilment of certain expectations. The notion derives from the manufacturing industry but has been adopted by many educational quality agencies. For these reasons, the authors judged this to be the most appropriate term to use in evaluating OER.

An examination of the literature (Butcher, 2011; Dhanarajan & Abeywardena, 2013; Groom, 2013; McGreal, 2013; UNESCO/COL, 2011; Wiley, 2008) revealed that OER can serve seven principal purposes:

  1) Providing open, accessible and quality content for a wider community of teachers and learners.

  2) Sharing best practice and helping to avoid re-inventing the wheel.

  3) Helping developing countries improve and expand learning for development opportunities.

  4) Offering flexible non-formal and informal knowledge and skills accumulation pathways to formal study.

  5) Providing learning opportunities for geographically, socially or economically excluded students and non-traditional and work-based learners.

  6) Improving the quality of conventional and online education by achieving greater awareness of open and inclusive educational practices and varied perspectives on fields of study.

  7) Enabling collaboration between institutions, sectors, disciplines and countries.

As a first step in evaluating OER, it was decided that the creators or users would need to determine which, or which mix, of these purposes they had in mind.

Unbundling fitness for purpose

To work towards a simple set of selection criteria, the authors started with two existing, highly regarded frameworks: the MERLOT evaluation criteria for OER and Merrill’s first principles of instruction.

MERLOT’s evaluation criteria

Sperling (2011) and Anderson-Wilk and Hino (2011) regard MERLOT’s system for evaluating online scholarly multimedia publishing as the most relevant, rigorous and successful example of its type in the field. MERLOT is a curated collection of free and open online teaching, learning and faculty development services that is contributed to and used by an international education community. Its evaluation criteria have been adopted and used in a variety of contexts (Cechinel & Sánchez-Alonso, 2011; Leacock & Nesbit, 2007; Levin & Smith-Gratto, 2002). To be posted to the MERLOT site, an online material must receive a written review from two reviewers and an average rating of at least three stars (five being the highest) across three dimensions: content, potential effectiveness of the teaching and learning materials, and ease of use (MERLOT, 2014).

  • Content is assessed in terms of the correctness, significance and currency of the concepts/principles and their match with the learners’ characteristics and learning needs.

  • The potential teaching/learning effectiveness of the materials is judged by the inclusion of learning objectives, information on how the materials can be used by specific learner groups, the design of the materials and how these can be easily integrated into, and improve, a variety of courses and teaching/learning processes.

  • Ease of use by teachers and students is evaluated by considering accessibility, layout and design for effective learning, consistency in user interface and navigation, content presentation that matches learners’ prior knowledge and abilities and ease of re-use in other contexts.

Merrill’s five first principles of instruction

Bates (2011), Masterman and Wild (2011) and Groom (2013) stress that OER must be founded upon sound instructional principles. Merrill’s (2002) first principles of instruction were abstracted from a wide range of systematically reviewed instructional design theories, models and research. Merrill holds that these principles apply under appropriate conditions regardless of the methods or models that implement them (pp. 44-45):

  • Learners are engaged in solving real-world problems.

  • Existing knowledge is activated as a foundation for new knowledge.

  • New knowledge is demonstrated to the learner.

  • New knowledge is applied by the learner.

  • New knowledge is integrated into the learner’s world.

Methodology

A provisional set of selection criteria was developed by the authors in consultation with five faculty members and five researchers well versed in OER. These criteria were based on the benefits of OER claimed in the literature; on MERLOT’s three dimensions of quality and the detailed review areas specified under each dimension; and on Merrill’s first principles of instruction and the application strategies proposed for each principle. The research proposal was then submitted to the research ethics committee at the authors’ university and approval was duly granted.

The initial survey

Persons with the expertise and experience to judge the criticality and feasibility of the proposed selection criteria were identified by means of snowball sampling: inviting a pool of initial informants to contact other eligible reviewers through their social networks, who in turn made further connections (http://sociology.about.com/od/Types-of-Samples/a/Snowball-Sample.htm).

First, invitations to participate in the survey were sent out to well-known researchers, leaders of open education movements and authors of books and articles on OER. These persons were also invited to recommend further individuals and organizations involved in OER such as MERLOT and the OpenCourseWare Consortium. These contacts in turn were asked to post requests for further reviewers through their various networks. This process enabled the recruitment of 207 reviewers, whose characteristics are shown in Table 1.

Table 1 The reviewers

The survey instrument was then sent out to these reviewers. It comprised four sections:

  • Section 1 sought information on the reviewers’ locations, gender, principal roles and degree of experience in using OER in their own teaching and learning.

  • Section 2 invited the reviewers to rate the criticality and feasibility of 18 proposed OER selection criteria based upon MERLOT’s three dimensions and the specific review questions under each dimension, in terms of their own teaching and learning (using a scale from one to five, with 5 being the highest).

  • Section 3 asked reviewers to rate the feasibility and usefulness of 10 proposed OER selection criteria based upon Merrill’s first principles of instruction and the application strategies for each principle, again in terms of their own teaching and learning (using the same one-to-five scale).

  • Section 4 invited the reviewers to offer their own comments on the proposed criteria, how these might be improved, what further criteria would be useful and how else the fitness for purpose framework could be improved.

The initial survey findings

The authors analyzed the criticality and feasibility ratings and additional comments received from the 207 reviewers with the aim of determining which of the proposed criteria should be retained, revised or deleted. Selection criteria receiving ratings of >3.6 (the average of all 18 mean scores) on one or both measures were to receive further consideration by means of validation interviews.
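
To make this retention rule concrete, the following minimal Python sketch applies the mean-of-means threshold to a handful of invented ratings. It is offered as an illustration only: the study’s actual mean scores are those reported in Table 2.

```python
# A sketch of the retention rule described above, using invented mean
# scores; the actual means for the 18 criteria appear in Table 2.
criteria = {
    # criterion id: (mean criticality, mean feasibility) -- hypothetical
    1: (4.2, 3.9),
    3: (3.8, 3.4),
    7: (3.9, 3.8),
    10: (3.2, 3.1),
}

# Threshold taken as the average of all mean scores (3.6 in the study).
all_means = [m for pair in criteria.values() for m in pair]
threshold = sum(all_means) / len(all_means)

for cid, (crit, feas) in criteria.items():
    passed = [label for label, score in
              (("criticality", crit), ("feasibility", feas))
              if score > threshold]
    status = f"retained ({', '.join(passed)})" if passed else "dropped"
    print(f"Criterion {cid}: {status}")
```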

Table 2 shows the criticality and feasibility ratings for the 18 suggested criteria based upon MERLOT’s three dimensions. In Dimension 1 (Content), three criteria (1, 2 and 6) were rated >3.6 on the 5-point scale for both criticality and feasibility, and three criteria (3, 4 and 5) were rated >3.6 for criticality only. In Dimension 2 (Potential Teaching/Learning Effectiveness), only one criterion (7) was rated >3.6 on both counts, while two criteria (8 and 9) were rated >3.6 for criticality only. In Dimension 3 (Ease of Use), criteria 13, 14 and 16 were rated >3.6 for both criticality and feasibility, and criteria 15 and 18 for criticality only.

Table 2 Criticality and feasibility ratings for the proposed selection criteria

A set of ANOVA tests revealed no significant differences in these ratings by region, gender or experience in using OER. This suggested that these criteria would be appropriate across contexts. An analysis of the reviewers’ 89 sets of open comments confirmed that most of these criteria would be appropriate for most situations. It was, however, noted that many reviewers indicated that they used OER as supplementary rather than core teaching/learning resources, which might have influenced these opinions.
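
As an illustration of this kind of check (not the authors’ actual analysis script), the sketch below runs a one-way ANOVA over hypothetical rating groups using scipy.stats.f_oneway:

```python
# Illustrative one-way ANOVA on hypothetical ratings grouped by region.
# This is not the study's analysis code; the data below are invented.
from scipy.stats import f_oneway

asia = [4, 5, 3, 4, 4, 5, 3]
europe = [4, 4, 3, 5, 4, 3, 4]
africa = [5, 4, 4, 3, 4, 4, 3]

f_stat, p_value = f_oneway(asia, europe, africa)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# A p-value above 0.05 would be consistent with the paper's finding of
# no significant regional differences in the ratings.
```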

In terms of pedagogical appropriateness, as Table 3 shows, nine of the 10 items received a rating of >3.8 (the average of all 10 mean scores) on the 5-point scale (see grey shading). Item 7, “Providing practice/test/self-assessment items,” was rated only 3.49. However, after further consideration, the authors and the smaller panel of experts agreed that it was important to retain this criterion because such activities are essential for informing learners about what and how they should be learning and how well they are performing.

Table 3 Possible pedagogical applications of OER

Again, the ANOVA tests revealed no significant differences by region, gender or preparedness. Comments by the reviewers reinforced the point that the pedagogical methods implicit in OER should be appropriate for all learners regardless of age, gender, language, ethnicity, race, culture or religious conviction.

In the light of the survey findings, a second, shorter version of the survey instrument (see Table 4) was developed for further review by a smaller number of researchers and users of OER. In this version, because they were considered inseparable in practice, the OER selection criteria and pedagogical strategies were combined into three categories (Legal and Technical; Content; Pedagogy). To ensure that the inputs represented different regions and cultures, comments were sought by email and Skype from five OER experts in Malaysia, Australia, South Africa, Germany and the USA. Their feedback led to further restructuring and rewording of the criteria. A further round of comments on the draft framework involved the same five experts plus an additional member in Swaziland. By these means, a consensus on the selection criteria for the fitness for purpose framework was reached.

Table 4 A revised version of OER selection criteria

Designing the framework

The authors and three consultants who had participated in the initial survey and two subsequent rounds of validation then considered how the final set of criteria should be integrated, structured and worded to achieve a comprehensive but user-friendly fitness for purpose framework. It was agreed that this should contain 25 items in four dimensions: Purposes, Ease of Use, Content and Pedagogy (Table 5).

Table 5 A framework for selecting OER on the basis of fitness for purpose

It was envisaged that users would:

  1) Determine which, or which mix, of the possible purposes matched their needs.

  2) Tick the boxes to indicate which of the criteria under ease of use, content and pedagogy were being met; OR

  3) Rate these items on a one-to-five basis (5 being the highest); AND

  4) Add short comments for their own use or to advise others.

It is not intended that all 25 criteria should necessarily be assessed in any one selection procedure – only those items that are judged to be critical in assessing fitness for purpose in the context of a particular course or teaching and learning activity.
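
To illustrate how this selective, tick-or-rate use of the framework might look in practice, here is a minimal Python sketch. The dictionary structure, the abbreviated item labels and the assess helper are assumptions made for illustration; they are not part of the published instrument.

```python
# A hypothetical representation of the tick/rate/comment workflow;
# the dimension and item labels are abbreviations for illustration,
# not the published wording of the 25 framework items.
framework = {
    "Ease of Use": ["Open license and proper attribution",
                    "Easily reused, revised, remixed and shared"],
    "Content": ["Goals and purposes easily understood",
                "Accurate and up-to-date content"],
    "Pedagogy": ["Engages learners in real-world problems"],
}

def assess(selected_items, ratings, comments=None):
    """Summarize only the items judged critical in a given context.
    A rating of None means the item was merely ticked as met."""
    comments = comments or {}
    return [(item, ratings.get(item), comments.get(item, ""))
            for item in selected_items]

# Example: a user picks two criteria relevant to one course.
all_items = {item for items in framework.values() for item in items}
chosen = ["Open license and proper attribution",
          "Accurate and up-to-date content"]
assert set(chosen) <= all_items  # only assess items the framework defines

for item, score, note in assess(chosen, {"Accurate and up-to-date content": 4}):
    print(f"{item}: {'ticked' if score is None else score} {note}".rstrip())
```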

Discussion

Because the evidence suggests that one factor inhibiting wider use of OER is a lack of information on, and doubts about, their fitness for purpose, the authors sought to create a comprehensive but simple-to-use framework for various users by developing and testing selection criteria relating to benefits, ease of use, content and pedagogy. The fact that the ANOVA tests revealed no significant differences in the ratings by region, role, gender or degree of experience in using OER suggests that these 25 selection criteria can be applied to a range of OER in a variety of contexts by:

  1) Developers of OER, to help potential users comprehend the intended purposes and benefits, the nature of the content and the pedagogical or instructional design principles embedded in the resources.

  2) Course planners, faculty and students, who can use this framework as an aide-memoire in considering their needs and judging whether particular OER will meet these requirements.

  3) Users, after using particular OER, for judging their quality, fitness for purpose, educational outcomes and other benefits, and (most importantly) providing feedback to the developers.

The study revealed that the five most critical selection criteria are:

  1) The OER accords with open content licenses (e.g., a Creative Commons license) and is properly attributed.

  2) The OER can be easily reused, revised, remixed and shared with other materials to meet particular teaching/learning needs.

  3) The goals and purposes of the OER are easily understood.

  4) The content is accurate and up to date.

  5) The content covers educationally significant concepts and leads to deep understanding.

Some faculty and students may be unfamiliar with the issues of copyright and fair use. It would therefore be advisable for first-time users to refer to the Guide to Open Content Licenses (http://www.theartgalleryofknoxville.com/ocl_v1.2.pdf) and the Creative Commons website (http://creativecommons.org/licenses/). Having assured themselves that the legal and technical requirements have been met, their next step should be to use this framework to confirm that the OER meets their requirements. They may also need to consider whether they have the technical skills and means to reuse, edit and share the OER in question. For example, not everyone has the capacity to convert scanned pages into a text file, as opposed to editing Word documents or PowerPoint materials.

Conclusion

Ehlers and Conole (2010) observe that the use of OER has not yet reached a critical threshold in higher education. They attribute this to the current focus on building access to digital content rather than on how OER can support innovation and change in teaching and learning. It is hoped that this simple-to-use assessment framework will encourage and support more and better use of OER.

The authors now invite colleagues to pilot test, apply and critique this framework, compare it with other tools and report back on its usefulness.

It is important to note that this study did not take account of “granularity”. OER range from entire courses and massive open online courses to small-scale learning materials, games, simulations, quizzes and other digital resources. The conclusion that these items can be applied universally across all forms and sizes of OER, and that they cover all of the required criteria, therefore still needs to be tested with a range of teachers in a variety of on-campus and off-campus contexts.

It is also important to ascertain whether this framework will be helpful for students.


Acknowledgements

This research was funded by the 2013 Grant-in-Aid for Scientific Research (Kakenhi) from the Japan Society for the Promotion of Science.

Author information


Corresponding author

Correspondence to Insung Jung.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Jung, I., Sasaki, T. & Latchem, C. A framework for assessing fitness for purpose in open educational resources. Int J Educ Technol High Educ 13, 3 (2016). https://doi.org/10.1186/s41239-016-0002-5
