
Initial evidence to validate an instructional design-derived evaluation scale in higher education programs

Abstract

Instructional design has been a key strategy in ensuring the quality of higher education; therefore, it is essential to evaluate its effects on learning processes with tools that comply with defined technical standards. Hence, the purpose of this study is to validate a scale used to evaluate the degree to which students perceive the five components of instructional design: objectives, curricular content, learning activities, educational resources, and the evaluation strategy at the Costa Rica Institute of Technology (TEC). Through an analytical and descriptive study, using both qualitative and quantitative techniques, a Likert-type evaluation scale was designed. According to the results, the final version of the instrument consisted of 33 items and had a reliability coefficient of 0.923. With regard to the evidence associated with its validity, it was possible to verify the representativeness and relevance of the instructional design components and their factorial structure. The main conclusion of this study is the importance of measuring the instructional design components, from a systemic perspective, in face-to-face, blended and virtual courses with properly validated tools, in order to guarantee the reliability and valid interpretation of the results.

Introduction

In higher education, instructional design has been a basic planning strategy used to ensure that the pedagogic development of courses is organized and coherent. Indeed, one of the reasons for selecting this topic is that, in various studies that analyze and propose quality models, instructional design is considered a decisive factor in the educational experience of the students (Cabero, 2006a, 2006b; Correa & Paredes, 2009; Gil, 2004; Khlaisang, 2010). Instructional design is a tool that prevents improvisation during course delivery and, furthermore, serves as a guide for making decisions and taking action when the unforeseen events that tend to occur in educational processes arise (Cabero, 2006a, 2006b). Other authors agree that instructional design is a success factor for a virtual or blended learning course (Bates & Sangrà, 2011; Jung, 2011; Sun, Tsai, Finger, Chen, & Yeh, 2008; Williams, Schrum, Sangrà, & Guárdia, 2004).

Thus, in 2011, the Costa Rica Institute of Technology (TEC) developed – on its technological platform – an application used to execute instructional design that included the five components thereof: objectives, curricular content, learning activities, educational resources, and evaluation strategy. The overall aim of doing so was to improve the pedagogic planning of the courses (Francesa, Espinoza-Guzmán, & Chacón-Rivas, 2012). This application was designed for use with the different study methods available at the university: face-to-face, blended and virtual courses. It was precisely within this context that the need to validate a tool for the evaluation of the instructional design components from the students’ point of view was detected.

At the higher education level, there are many studies on learning and instructional design evaluation. Rovai, Wighting, Baker, and Grooms (2009), based on the meta-analysis of 232 comparative studies performed by Bernard et al. (2004), indicated that there is broad variability in the results achieved by students on distance (online) courses and classroom courses. They also established that several elements are associated with educational effectiveness: pedagogic techniques, student characteristics and instructional models, among other factors. These researchers developed the Perceived Learning Scale, a nine-item tool used to measure the cognitive, psychomotor and affective aspects of learning in face-to-face and virtual courses.

Other researchers developed a scale to compare demographic indicators and analyze the difference in the performance and satisfaction of the students on classroom courses as well as on virtual courses. The analyzed components corresponded to the students’ perceptions of their own learning processes, the experience obtained in the courses, and the instructor evaluation (Driscoll, Jicha, Hunt, Tichavsky, & Thompson, 2012).

Reyes (2012), on the other hand, presented a study that evaluates the quality of the online learning environment from the college students’ perspective; to achieve this, the Constructivist On-Line Learning Environment Survey (COLLES) was used, since it measures relevance, reflection, tutor support, peer support and interpretation of the learning process. The most relevant results are related to significant learning strategies and learning objects and how these factors ensure knowledge acquisition.

Santoveña (2010) describes the process of creating a scale to measure the quality of online courses through criteria such as the general quality of the environment, the didactic methodology, the technical quality of page navigation, and the technical quality and design of multimedia resources.

Considering that instructional design is a process, there is a series of steps that allow its key aspects to be defined. First, what must be learned, the materials to be used and the quality of instruction must be determined. Second, the learning requirements, goals, instructional materials and activities must be designed, and the follow-up strategies specified (Berger & Kam, 1996; Yakavetsky, 2003).

At TEC, the administrative office of “TEC Digital” has used the model proposed by Gil (2004) and Coronado (2010, Presentación del Proceso del Diseño Instruccional del Tecnológico de Costa Rica, unpublished) to guide this process, as shown in Fig. 1.

Fig. 1 Development process of TEC Digital’s instructional design, based on the proposals by Gil (2004) and Coronado (2010, Presentación del Proceso del Diseño Instruccional del Tecnológico de Costa Rica, unpublished)

TEC’s instructional design includes two components: a) the context or characterization of the course and the targeted student population, and b) the planning process for the instructional design. The context covers aspects such as the number of participants; requirements; specific characteristics such as age, gender, knowledge and skills, among others; the course mode (face-to-face, blended or virtual); the course type (theory, theory-practical, laboratory work or fieldwork); the level or degree at which the course must be taken; and whether it is part of a study plan or a course for continuing professional training. In the second component, the instructional design is structured around objectives, content, learning activities, educational resources and evaluation strategies, which, when merged, create course content that aims to generate satisfactory learning experiences (Arjona & Blando, 2007). Although the available literature on virtual environments highlights the importance of instructional design when planning a virtual or blended learning course, its development must not be overlooked in face-to-face education.

Background

In line with the current standards for educational and psychological testing established by the American Educational Research Association (AERA), the American Psychological Association (APA) and the National Council on Measurement in Education (NCME), Sireci and Padilla (2014) determined that validating a tool requires a global effort in which different types of evidence are integrated into a validity argument that supports the use of the test for a particular objective.

As pointed out by Pérez-Gil, Chacón, and Moreno (2000), from a unified validity perspective, the meaning of the scoring used is what provides the rational base to arrive at relevant and representative criteria of the test’s content and establish a predictive hypothesis that can contribute to understanding the nature of the tool constructed. Within this context, for Carretero and Pérez (2005), the phases for a study focusing on the construction or adaptation of a test should be:

  • Study justification

  • Conceptual delimitation of the construct to be evaluated

  • Development and qualitative evaluation of the items

  • Statistical analysis of the items

  • Dimensioning study for the test developed (internal structure)

  • Reliability coefficient

  • Gathering external evidence to obtain test validity

With regard to the conceptual delimitation of this study, the following operational components of instructional design were considered: objectives, curricular content, learning activities, educational resources and evaluation. According to Gil (2004), instructional design is the scheme that organizes the different processes involved in distance education: it details the required technology and infrastructure and the methods needed to provide instruction based on the educational requirements; it should also allow for content selection and organization, and for the design of learning and evaluation situations that satisfy these needs, always taking into account the learners’ characteristics and the expected learning outcomes. Additionally, Gustafson and Branch (2002) indicate that instructional design is a systemic, planned, synergic and structured process, which must be executed before providing a course or an educational activity.

The instructional design objectives consider the specific abilities and skills that the students will have developed by the time the educational program concludes: cognitive or intellectual, motor function, affective, social action and interaction. These are the start and end points of the educational process, because they constitute the foundations for developing evaluation processes (Gil, 2004).

Cabero (2006a) proposes that curricular content must be analyzed from a three-fold perspective – quality, quantity and structure – in order to ensure the content is pertinent, relevant and from a trustworthy source. In terms of quantity, there must be enough content for the target group, and it must be based on the established objectives. The structure corresponds to the way the content is displayed in the correct design and formats.

In learning activities, Herrera (2006) suggests that interventions should promote cognitive imbalance, high-level interaction and the development of thinking skills. This would encourage the students to go through the different learning stages of knowledge acquisition, skills development and critical thinking abilities.

Educational resources correspond to the set of resources used to support the teaching and learning process, as explained by Blázquez and Sáenz (1988), in such a way that resources and media become the tools of this process. Educational resources affect the efficiency of the educational process: their careful development and creative use increase the probability that students will learn better.

According to Salinas (2008), the design of evaluation strategies is closely related to the learning methodology used. Depending on how the evaluation is proposed and designed, it can be either a judgment tool or a learning opportunity. From the latter perspective, evaluation is an educational intervention action focusing on improving and reconstructing knowledge, in a way that is aimed at reaching the proposed learning objectives.

These concepts and empirical evidence from previous studies formed the basis for the development of the evaluation tool, which also considered the characteristics of the higher education students.

Methodology

This is an analytical-descriptive study, in which the data collection process was based on quantitative and qualitative techniques, as established in the following procedure:

First stage

The design of a tool based on a Likert-type scale with five values, according to the operational matrix of the instructional design components. As per Wang, Willett, and Eccles (2011), the items were developed and selected on the basis of theoretical concepts as well as expert review, in order to obtain higher variability in the responses.

Second stage

The application of the judging technique to determine the level of representativeness and relevance of the variables to be measured. As explained by Sireci and Faulkner-Bond (2014), this is the most common method and consists of evaluating the match between the items and the content they are intended to cover, in order to measure the degree to which the items correctly represent specific and significant content. As defined by Sireci and Faulkner-Bond (2014), the evidence obtained is one of the five sources recognized by the APA, AERA and NCME for establishing the degree to which the content of a test is congruent with its purposes. In order to ensure that the measured content represents the construct being measured, four aspects are considered: conceptuality, representativeness, relevance and appropriateness of the development process.

In this procedure, five experts from computer sciences, e-learning, education and psychology were selected through the snowball strategy. Each was trained individually and they were provided with an instruction guide, an evaluation sheet, and a matrix, similar to the one found in Table 1, so that they could judge the items based on the following categories: 1) coherent, keep without making any adjustments; 2) partially coherent, keep after making some adjustments and 3) incoherent, delete the item.

Table 1 Operational matrix for the measurement tool used for instructional design (2012)

Third stage

The use of a student focus group with the aim of exploring the meaning of different terms, ensuring the understanding of the instructions, detecting format errors and estimating the time required to apply the tool. As explained by Hamui-Sutton and Varela-Ruiz (2013), the technique promotes discussion and must therefore be developed within a study protocol consistent with the objective; an interview guide and the required logistics must be established in order to apply it successfully. The selected participants met the requirement of being enrolled in blended and face-to-face courses from different degree programs.

Fourth stage

The pilot implementation of the first version of the tool was carried out in October and November 2012 with a group of 58 students with characteristics similar to those of the target group. Later, a second test of the tool, in its enhanced version, was performed with another sample of 64 students in 2013. In both pilot tests, the groups were selected to ensure the representation of face-to-face and blended courses from different degree programs.

For the statistical analysis of the results, IBM SPSS Statistics 19 software was used. To determine the dimensionality of the tool, an inductive estimate based on exploratory factor analysis, as also used by Díaz and Fuentes (2007), was considered the most appropriate test. The main objective of this analysis was to summarize the observed variables in a smaller number of latent variables; therefore, it was necessary to reduce the dimensionality of the original variables while retaining most of the information provided (Cea, 2004). The author indicates that such analyses are meaningful only when the correlations between the variables are greater than or equal to 0.30.
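
As an illustration of that suitability check, the following Python sketch (the study itself used SPSS) computes the item correlation matrix and flags items whose strongest correlation with any other item falls below 0.30; the data file and column layout are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical data file: one row per student, one column per Likert item.
items = pd.read_csv("instructional_design_items.csv")

# Pearson correlation matrix between all items.
corr = items.corr()
np.fill_diagonal(corr.values, np.nan)  # ignore each item's correlation with itself

# Strongest correlation of each item with any other item; values below 0.30
# fall short of the criterion cited by Cea (2004) for factor analysis.
strongest = corr.abs().max(axis=1)
print("Items below the 0.30 criterion:", list(strongest[strongest < 0.30].index))
```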

The second analysis of the results corresponded to a reliability coefficient, that is, the Cronbach alpha coefficient (an intraclass correlation), which reflects the accuracy of a test or measuring tool when applied to the same subjects twice, that is, whether the results are consistent and coherent (Hernández, Fernández, & Baptista, 2010). This index is calculated from a variance-covariance matrix obtained from the item scores, where the diagonal of the matrix provides the variance of each item while the remaining entries represent the covariances between pairs of items (Cea, 1998; Oviedo & Campos-Arias, 2005). It is an index used to measure the internal consistency of a scale, in order to evaluate the degree to which the items are correlated (Oviedo & Campos-Arias, 2005).
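
A minimal sketch of that calculation over the same hypothetical item data: Cronbach's alpha is obtained from the item variances (the diagonal of the variance-covariance matrix) and the variance of the total score.

```python
import pandas as pd

items = pd.read_csv("instructional_design_items.csv")  # hypothetical data file

def cronbach_alpha(data: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = data.shape[1]                                # number of items
    item_variances = data.var(axis=0, ddof=1).sum()  # diagonal of the covariance matrix
    total_variance = data.sum(axis=1).var(ddof=1)    # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_variances / total_variance)

print(f"Cronbach's alpha: {cronbach_alpha(items):.3f}")
```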

Results

In the first stage of the validation, two qualitative techniques were used – a panel of judges and a focus group – and the information was merged with the conceptual framework of reference. As explained by Hamui-Sutton and Varela-Ruiz (2013), this technique allows for the integration of bibliographical resources and evidence from previous studies.

Considering the criteria provided by the judges, changes were made to the wording of the items; two items were also deleted and others were reorganized. For example, the item “Appropriate development based on the course program” was deleted from the evaluation tool after the following observation by judge 4: “Will this be evaluated by the end of the course? If not, it cannot be properly evaluated by the student unless the student is retaking the course.” In order to verify the level of agreement between the judges’ ratings, Kendall’s tau_b test, which measures rank correlation, was used. The results showed a correlation of 0.277, significant at the 0.05 level, between judges 2 and 4 only (there was no statistically significant agreement between the remaining pairs of experts).
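
To illustrate the calculation, scipy's kendalltau (which computes the tau-b variant when ties are present) can be applied to the category codes assigned by each pair of judges; the ratings below are hypothetical, not the study's data.

```python
from itertools import combinations

import pandas as pd
from scipy.stats import kendalltau

# Hypothetical ratings: one row per item, one column per judge,
# coded 1 = coherent, 2 = partially coherent, 3 = incoherent.
ratings = pd.DataFrame({
    "judge_1": [1, 2, 1, 3, 1, 2],
    "judge_2": [1, 2, 2, 3, 1, 1],
    "judge_3": [2, 2, 1, 3, 1, 2],
    "judge_4": [1, 3, 2, 3, 1, 1],
    "judge_5": [1, 2, 1, 2, 1, 2],
})

# Kendall's tau-b (scipy handles ties) for every pair of judges.
for a, b in combinations(ratings.columns, 2):
    tau, p_value = kendalltau(ratings[a], ratings[b])
    print(f"{a} vs {b}: tau_b = {tau:.3f}, p = {p_value:.3f}")
```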

Through the feedback provided by the focus groups, it was possible to substantially improve the accuracy of the items. For instance, terms like academic education, proficiencies, ethical principles, feedback, self-evaluation and self-learning, among others, were not well understood by the students. Hence, by discussing these concepts with the focus group, it was possible to arrive at more comprehensible terms, such as academic studies required for the degree, capabilities, values, analysis of errors and deficiencies, and valuing individual accomplishments and learning. In addition, it was determined that broadening the response range from a 5-level scale to a scale from 1 to 10 was necessary for the tool to be better accepted in the educational context and to maximize the response variability when using it.

In the second validation stage, a trial test was performed with 58 students, who were given the tool in October and November 2012. In the different analyses of the items and the tool’s measurement properties, reliability was estimated using the Cronbach alpha coefficient, which ranges between 0.0 and 1.0 (perfect reliability); the analysis was performed using IBM SPSS Statistics 19 software. The initial coefficient of the scale was 0.841 and, following these results, two items were removed: item 27, with a correlation of 0.04, and item 24, with a value of 0.08 (both items from the educational resources category). The final version of the tool therefore consisted of 33 items and presented a reliability coefficient of 0.923 in the second trial test. According to Morales (2012), a high coefficient is clearly desirable when the differences between the subjects tested are legitimate and expected. For research purposes, as emphasized by Hernández et al. (2010), this value is acceptable, since it corroborates that the items are highly correlated.
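
Item screening of this kind is typically based on corrected item-total correlations (each item's correlation with the sum of the remaining items); a sketch of that computation, under the same hypothetical data assumption, is shown below.

```python
import pandas as pd

items = pd.read_csv("instructional_design_items.csv")  # hypothetical data file

def corrected_item_total(data: pd.DataFrame) -> pd.Series:
    """Correlation of each item with the total score of the remaining items."""
    total = data.sum(axis=1)
    return pd.Series({col: data[col].corr(total - data[col]) for col in data.columns})

# Items with very low correlations (such as the 0.04 and 0.08 reported above)
# are candidates for removal before recomputing Cronbach's alpha.
print(corrected_item_total(items).sort_values().head())
```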

In order to gather additional validity evidence for the tool, factor extraction using the principal axis method was performed to test whether the five components of the design were represented in the proposed items, as shown in the scree plot, which graphs the eigenvalues against the factor number (Fig. 2).

Fig. 2 Scree plot for the instructional design scale

This analysis also explains the variance with a smaller number of factors than the number of variables initially tested (Nunnally & Bernstein, 1995). The rotation method used was orthogonal (Varimax) and, according to the results of the second trial run, the percentage of variance explained by the five factors evaluated was 65.21 %. When grouping the items according to their factor loadings (over 0.30), several multidimensional items were detected, that is, items associated with more than one factor; however, all items were kept in the scale due to the significance of their content. As detailed by Carretero and Pérez (2005), the decision to keep or remove an item must be based on a joint evaluation of all the statistical indices, while also considering the conceptual interests of the tool being designed.
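
An equivalent analysis can be sketched outside SPSS with the third-party factor_analyzer package (an assumption about tooling, not the software actually used): principal-factor extraction with Varimax rotation, the eigenvalues behind the scree plot, the explained variance, and the grouping of items by loadings above 0.30.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("instructional_design_items.csv")  # hypothetical data file

fa = FactorAnalyzer(n_factors=5, method="principal", rotation="varimax")
fa.fit(items)

# Eigenvalues of the correlation matrix, the basis of the scree plot (Fig. 2).
eigenvalues, _ = fa.get_eigenvalues()
print("First eigenvalues:", eigenvalues[:8].round(2))

# Cumulative proportion of variance explained by the five rotated factors.
_, _, cumulative_variance = fa.get_factor_variance()
print(f"Variance explained by 5 factors: {cumulative_variance[-1]:.2%}")

# Loadings above 0.30 define which factor(s) each item is grouped under;
# an item loading on more than one factor is flagged as multidimensional.
loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print((loadings.abs() > 0.30).sum(axis=1).rename("factors_per_item"))
```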

Finally, two statistical adequacy tests were used. First, the KMO (Kaiser-Meyer-Olkin) measure of sampling adequacy, which compares the magnitude of the observed correlation coefficients with the magnitude of the partial correlation coefficients (Pardo & Ruiz, 2002), yielded a value of 0.627; according to the authors, values under 0.6 are considered inadequate. Second, Bartlett’s test of sphericity allowed the null hypothesis to be rejected, establishing that the variables are correlated and that the model is appropriate for explaining the data obtained.
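
These two tests can likewise be sketched with factor_analyzer's helper functions (again an assumption about tooling, applied to the hypothetical item data, not the study's own analysis):

```python
import pandas as pd
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

items = pd.read_csv("instructional_design_items.csv")  # hypothetical data file

# Bartlett's test of sphericity: a small p-value rejects the null hypothesis
# that the correlation matrix is an identity matrix (i.e. unrelated variables).
chi_square, p_value = calculate_bartlett_sphericity(items)
print(f"Bartlett chi-square = {chi_square:.1f}, p = {p_value:.4f}")

# KMO measure of sampling adequacy: values below 0.6 are usually judged inadequate.
kmo_per_item, kmo_overall = calculate_kmo(items)
print(f"Overall KMO = {kmo_overall:.3f}")
```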

Discussion and conclusions

In the evaluation of higher education learning, under any teaching modality, it is essential to consider all the factors involved in instructional design, regardless of the methodology used to develop it. This is because, when the course ends, it is necessary to have indicators to evaluate course quality and to improve any deficiencies that may be found (Williams et al., 2004). As determined by earlier studies, there are scales, such as COLLES (Reyes, 2012) and the one developed by Santoveña (2010), that incorporate diverse instructional design elements in order to rate the general quality of the course. The scale proposed in this article, by contrast, focuses on measuring all the components or factors of instructional design, understood as a systematic process organized according to course objectives, curricular content, learning activities, didactic resources and evaluation strategies.

This particular property of the designed scale is due to the absence of specific technological terms and to its focus on the educational process itself, which allows the comparison of different course modalities, as outlined by Berridi, Martínez, and García (2015), and Motii and Sanders (2014). Additionally, as with any other tool, it can be adjusted to the context of any institution, without altering the systemic vision of day-to-day pedagogic activities, and the instructor can obtain relevant information, from the students’ perspective, about the effectiveness of the general design and its specific components.

As established by Bolívar (2008), despite the fact that the positive perception of blended learning courses has increased over the years, it is still not possible to discern a clear and definitive trend regarding the effectiveness of the instructional strategy. This justifies the need for continued research in this field in order to remove any doubts and enhance the knowledge available on this subject. With this scale, educators can gather relevant information from the students’ point of view about the effectiveness of the general design and its specific components. As indicated before, and like any other tool, it can be adapted to the context of each institution, along with a protocol that standardizes the requirements for its use, in order to obtain an appropriate interpretation of the results.

The experience of validating an evaluation scale for students’ perceptions of instructional design, following a rigorous procedure, was important in order to guarantee the reliability of the results and their valid interpretation. The relationships between the concepts of each component used in the design and the empirical data obtained from both trial runs were also demonstrated. The main limitation is the lack of external validity evidence, due to the sample size as well as its non-probabilistic selection. However, as with every validation process, it is possible to continue gathering evidence of the tool’s psychometric properties in order to strengthen its interpretation and, hence, its impact on the higher education environment. Finally, the researchers involved in this study consider it important to share this research outcome, which, together with its protocol for use, can be accessed via the following link: http://tecdigital.tec.ac.cr/servicios/investigacion/?q=protocolo_escala_percepcion_componentes_diseno_instruccional.

References

  • Arjona, M., & Blando, M. (2007). Diseño Instruccional, elemento clave en el desarrollo de cursos para Ambientes Virtuales de Aprendizaje. Retrieved from http://bibliotecadigital.conevyt.org.mx/colecciones/documentos/somece/51.pdf


  • Bates, T., & Sangrà, A. (2011). Managing technology in higher education: Strategies for transforming teaching and learning. San Francisco: John Wiley & Sons.


  • Berger, C., & Kam, R. (1996). Education 626: Educational Software Design and Authoring. Retrieved from http://www.umich.edu/~ed626/define.html


  • Bernard, R. M., Abrami, P. C., Lou, Y., Borokhovski, E., Wade, A., Wozney, L., Fiset, M., & Huang, B. (2004). How does distance education compare to classroom instruction? A meta-analysis of the empirical literature. Review of Educational Research, 74(3), 379–439. doi:10.3102/00346543074003379

  • Berridi, R., Martínez, J. I., & García, B. (2015). Validación de una escala de interacción en contextos virtuales de aprendizaje. Revista electrónica de investigación educativa, 17(1), 116–129.


  • Blázquez, F., & Sáenz, O. (1988). Didáctica General. España: Editorial Anaya.


  • Bolívar, C. (2008). El blended-learning: evaluación de una experiencia de aprendizaje en el nivel de postgrado. Investigación y Postgrado, 23(1), 11–36.


  • Cabero, J. (2006a). Bases pedagógicas del e-learning. Universities and Knowledge Society Journal, 3(1), 1–10. Retrieved from http://www.redalyc.org/articulo.oa?id=78030102

  • Cabero, J. (2006b). La calidad educativa en el e-Learning: sus bases pedagógicas. Educación Médica (p. 9). Retrieved from http://scielo.isciii.es/pdf/edu/v9s2/original1.pdf

  • Carretero, H., & Pérez, C. (2005). Normas para el desarrollo y revisión de estudios instrumentales. International Journal of Clinical and Health Psychology, 5(3), 521–551. Retrieved from http://www.aepc.es/ijchp/articulos_pdf/ijchp-158.pdf


  • Cea, M. (1998). Metodología cuantitativa: Estrategias y técnicas de investigación social. Madrid: Editorial Síntesis, S.A.


  • Cea, M. (2004). Análisis multivariable: teoría y práctica en la investigación social. Madrid: Editorial Síntesis, S.A.


  • Correa, J., & Paredes, J. (2009). Cambio Tecnológico, Uso de plataformas e-learning y Transformación de la Enseñanza en las Universidades Españolas: una perspectiva de los profesores. Revista de Psicodidáctica, 14(2), 261–277. Retrieved from http://www.redalyc.org/pdf/175/17512724007.pdf


  • Díaz, C., & Fuentes, I. (2007). Validación de un cuestionario de razonamiento probabilístico condicional. Revista Electrónica de Metodología Aplicada, 12(1), 1–15. Retrieved from http://www.unioviedo.es/reunido/index.php/Rema/article/viewFile/9774/9517


  • Driscoll, A., Jicha, K., Hunt, A. N., Tichavsky, L., & Thompson, G. (2012). Can online courses deliver in-class results? A comparison of student performance and satisfaction in an online versus a face-to-face introductory sociology course. Teaching Sociology, 40(4), 312–331. Retrieved from http://tso.sagepub.com/content/40/4/312.full.pdf+html


  • Francesa, A., Espinoza-Guzmán, J., & Chacón-Rivas, M. (2012). Hacia una herramienta para el diseño instruccional en educación superior. Paper presented at the VII Conferencia Ibérica de Sistemas y Tecnologías de Información, Madrid, Spain.


  • Gil, M. C. (2004). Modelo de Diseño Instruccional para programas Educativos a Distancia. Revista Perfiles Educativos, XXVI(104), 93–114. Retrieved from http://www.redalyc.org/pdf/132/13210406.pdf


  • Gustafson, K. L., & Branch, R. M. (2002). What is instructional design. Trends and issues in instructional design and technology (pp. 16–25).


  • Hamui-Sutton, A., & Varela-Ruiz, M. (2013). La técnica de grupos focales. Metodología de investigación en educación médica, 2(1), 55–60. Retrieved from http://apps.elsevier.es/watermark/ctl_servlet?_f=10&pident_articulo=90219695&pident_usuario=0&pcontactid=&pident_revista=343&ty=95&accion=L&origen=zonadelectura&web=www.elsevier.es&lan=es&fichero=343v02n05a90219695pdf001.pdf


  • Hernández, R., Fernández, C., & Baptista, M. P. (2010). Metodología de la investigación (5th ed.). Lima: McGraw Hill.


  • Herrera, M. (2006). Consideraciones para el diseño didáctico de ambientes virtuales de aprendizaje: una propuesta basada en las funciones cognitivas del aprendizaje. Revista Iberoamericana de Educación, 38(5), 2.


  • Jung, I. (2011). The dimensions of e-learning quality: from the learner’s perspective. Educational Technology Research and Development, 59(4), 445–464.


  • Khlaisang, J. (2010). Proposed Models of Appropriate Website and Courseware for E-Learning in Higher Education: Research Based Design Models. In J. Sanchez & K. Zhang (Eds.), Proceedings of World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education 2010. Association for the Advancement of Computing in Education (AACE), Chesapeake (pp. 1520–1529).


  • Morales, P. (2012). Análisis de ítems en las pruebas objetivas. Retrieved from http://educrea.cl/wp-content/uploads/2014/11/19-nov-analisis-de-items-en-las-pruebas-objetivas.pdf


  • Motii, B. B., & Sanders, T. J. (2014). An Empirical Analysis of Student Learning Outcomes in an Introductory Microeconomics Course: Online Versus Face-To-Face Delivery Methods. Global Education Journal, 3

  • Nunnally, J. C., & Bernstein, I. J. (1995). Teoría psicométrica. Madrid: McGraw-Hill.


  • Oviedo, H., & Campos-Arias, A. (2005). Aproximación al uso del coeficiente alfa de Cronbach. Revista Colombiana de Psiquiatría, XXXIV(4), 572–579. Retrieved from http://www.scielo.org.co/pdf/rcp/v34n4/v34n4a09.pdf


  • Pardo, A., & Ruiz, M. A. (2002). SPSS 11. Guía para el análisis de datos. Madrid, Spain: McGraw-Hill.


  • Pérez-Gil, J. A., Chacón, S., & Moreno, R. (2000). Validez de constructo, el uso de análisis factorial exploratorio-confirmatorio para obtener evidencias de validez. Psicothema, 12(2), 442–446. Retrieved from http://www.psicothema.com/psicothema.asp?id=601


  • Reyes, N. (2012). Los Entornos Virtuales: Valoración de la Calidad desde la Perspectiva del Estudiante. In I. Mogollón (Ed.), Educación a Distancia: Encuentros, Protagonistas y Experiencias (pp. 144–162). Sevilla: GITE Universidad de Sevilla.


  • Rovai, A. P., Wighting, M. J., Baker, J. D., & Grooms, L. D. (2009). Development of an instrument to measure perceived cognitive, affective, and psychomotor learning in traditional and virtual classroom higher education settings. The Internet and Higher Education, 12(1), 7–13.


  • Salinas, J. P. (2008). Metodologías centradas en el Alumno para el Aprendizaje en Red. Barcelona: Editorial Síntesis S.A.


  • Santoveña, S. (2010). Cuestionario de evaluación de la calidad de los cursos virtuales de la UNED. Revista de Educación a Distancia (p. 25). Retrieved from www.um.es/ead/red/25/


  • Sireci, S., & Faulkner-Bond, M. (2014). Evidence based on test content. Psicothema, 26(1), 100–107. doi:10.7334/psicothema2013.256


  • Sireci, S., & Padilla, J. L. (2014). Validating assessments: introduction to the special section. Psicothema, 26(1), 97–99. doi:10.7334/psicothema2013.255


  • Sun, P. C., Tsai, R. J., Finger, G., Chen, Y. Y., & Yeh, D. (2008). What drives a successful e-learning? An empirical investigation of the critical factors influencing learner satisfaction. Computers & Education, 50(4), 1183–1202.


  • Wang, M.-T., Willett, J. B., & Eccles, J. (2011). The assessment of school engagement: examining dimensionality and measurement invariance by gender and race/ethnicity. Journal of School Psychology, 49, 465–480. Retrieved from http://www.sciencedirect.com/science/article/pii/S0022440511000240


  • Williams, P., Schrum, L., Sangrà, A., & Guárdia, L. (2004). Fundamentos del diseño técnico-pedagógico. Modelos de diseño instruccional en e-learning. Barcelona: Universitat Oberta de Catalunya.


  • Yakavetsky, G. (2003). La elaboración de un módulo instruccional. Retrieved from http://www1.uprh.edu/gloria/publicaciones/comoelaborarunmoduloinstruccional.pdf



Author information



Corresponding author

Correspondence to Tania Moreira-Mora.

Additional information

About the authors

Tania Elena Moreira-Mora

Tania Elena Moreira-Mora has a PhD in Education. She is currently working in the Psychology and Orientation Department of Costa Rica Institute of Technology as researcher and evaluator in the Admission Examination Committee. Additionally, she is an instructor in the Postgraduate Program in Education at the State Distance University of Costa Rica (UNED).

Julia Espinoza-Guzmán

Julia Espinoza-Guzmán is a Computer Engineer from the Costa Rica Institute of Technology, a graduate in Education from UNED, Costa Rica, and holds a Master in Project Management from the Universidad Politécnica de Cataluña and a Master in Educational Technologies from the Instituto Tecnológico de Monterrey. At the Costa Rica Institute of Technology, she coordinates the e-learning training processes, and she is a researcher and professor in the Computational Engineering and Information Technologies Management degree programs.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Moreira-Mora, T., Espinoza-Guzmán, J. Initial evidence to validate an instructional design-derived evaluation scale in higher education programs. Int J Educ Technol High Educ 13, 11 (2016). https://doi.org/10.1186/s41239-016-0007-0

