- Research article
- Open Access
Development and validation of students’ digital competence scale (SDiCoS)
International Journal of Educational Technology in Higher Education volume 19, Article number: 30 (2022)
Abstract
Amid the transition to blended and remote education, evaluating students’ levels of digital competence and designing educational programs to advance them is of paramount importance. Existing validated digital competence scales usually ignore either important digital skills or new socio-technological innovations. This study proposes and validates a comprehensive digital competence scale for students in higher education. The suggested instrument includes skills of online learning and collaboration, social media, smart and mobile devices, safety, and data protection. The scale was evaluated on a sample of 156 undergraduate and postgraduate students just before and at the beginning of the COVID-19 crisis. The final scale is composed of 28 items and six digital competence components. The evaluation study revealed valid results in terms of model fit criteria, factor loadings, internal validity, and reliability. Individual factors like the students’ field of study, computer experience, and age revealed significant associations with the scale components, while gender revealed no significant differences. The suggested scale can be useful for the design of new actions and policies towards remote education and the development of adult learners’ digital skills.
Introduction
New digital trends and technologies are reshaping the way people work, communicate, and learn. According to the OECD report (OECD Skills Outlook, 2019; p. 11), “Countries’ preparedness to seize the benefits of digital transformation is largely dependent on the skills of their populations …” Today such skills are even more critical for teachers and students due to the COVID-19 crisis and the context of Emergency Remote Education (ERE). During the COVID-19 ERE transition, teachers and students shifted to fully online teaching and learning (OECD, 2019). The shift to ERE is heavily dependent on the individuals’ digital skills; hence evaluating their digital competence might be practically useful for educational institutions, pedagogy designers, and educational policy makers towards the design of efficient ERE strategies. Although recent studies have evaluated the usefulness of the educational technologies used in the context of ERE (Bond et al., 2021), the research on students’ digital skills or online readiness is still limited.
Digital competence (DC) traditionally reflects a person’s ability to use digital technologies in a critical, collaborative, and creative way; it also implies that the person has the knowledge, skills, and attitude to be perceived as competent in a domain (European Commission, 2019a; Marusic & Viskovic, 2018; Suwanroj et al., 2017, 2018). A student’s perceived digital competence reflects his/her Information and Communication Technologies (ICT)-based knowledge and skills that can be used to perform ICT-related tasks (Meng et al., 2019). Recent works confirm that students’ perceived ICT competence significantly affects their academic achievement (Park & Weng, 2020) and highlight the importance of understanding the global ICT trends in mobile, Internet, and social media use (We Are Social & Hootsuite, 2020). The European Commission (2020) also reports that such skills of social media and mobile use should be included in the Digital Competence and New Skills Agenda.
Research shows that there are several ‘barriers’ to supporting young adults’ digital skills development; such barriers include poor access to technology and limited support networks (Eynon & Geniets, 2016). The authors also explain that lack of experience and of digital skills decreases the perceived usefulness of the Internet in young people’s lives. Also, according to Cullinan et al. (2021), one in six higher education students is at risk of poor access to the Internet, posing a significant barrier to attending their courses during the pandemic. The European Commission (EC, 2018a) admits that there is an urgent need to speed up the exchange of good practices in the field of adult digital education.
Attempting to measure and quantify the students’, teachers’ or citizens’ digital skills, several studies have developed methodologies to identify the key components of digital competence (e.g., All Aboard!, 2015; European Commission, 2019a). The newest version of the European Digital Competence Framework (DigComp 2.0) describes which skills are required to use digital technologies “in a confident, critical, collaborative and creative way to achieve goals related to work, learning, leisure, inclusion and participation in our digital society” (European Commission, 2019a). Several other frameworks suggest different versions (e.g., ESCO, 2019; Fraillon et al., 2019; UNESCO, 2018) of a digital competence framework, while recent studies attempt to extend the previous DC scales by including contemporary skills of critical thinking, communication, etc. (Peart et al., 2020).
However, these studies mainly concern the general population and are not student oriented. Most importantly, recently emerged digital skills regarding mobile/e-learning, mobile/e-commerce, and social media activities are considered in only a limited number of studies (e.g., Perifanou & Economides, 2019b; Lee et al., 2015). Lastly, several research studies focus on measuring students’ digital skills across different contexts and regions using existing students’ DC frameworks, but only a few (e.g., Alarcón et al., 2020; Kuzminska et al., 2018) attempted to quantitatively evaluate or adjust the applied scales.
Motivated by the research gap described above, this study seeks to quantitatively adjust and evaluate a recent instrument on students’ DC, forming a validated students’ digital competence scale (SDiCoS) that can be applied in the context of remote education and university students. The suggested validated scale is based on a recently proposed framework and instrument (Perifanou & Economides, 2019a, 2019b) which aims at measuring individuals’ digital skills and knowledge regarding today’s computer and Internet use, as well as social media and mobile activities. Also, since previous studies reported effects of personal factors on students’ digital skill components (He & Zhu, 2017; Tømte & Hatlevik, 2011) and online learning (Yu, 2021), this study also seeks to explore potential differences in the DC components between different groups of students. Towards this goal, the main research questions (RQs) are formed as follows:
- RQ1: To develop and quantitatively validate a scale to measure students’ digital competencies considering the context of remote education.
- RQ2: To explore significant differences in students’ digital skills between different groups of students, including their gender, age, field of study, and experience in computer use.
Overall, the findings can contribute towards the design of a comprehensive DC scale that considers recent technological trends and covers competence items for both undergraduate and postgraduate students. Also, it might be practically useful for the design and implementation of actions or policies to detect DC gaps and reinforce adult learners’ digital competence in remote and blended learning.
Related works
Several previous studies examined the structure of digital competence models and instruments by applying statistical methods. Many of those studies (e.g., Oberländer et al., 2020; Tondeur et al., 2017; Touron et al., 2018) performed first- and/or second-order confirmatory factor analyses (CFA). Other studies performed exploratory factor analyses (EFA) to identify the main components that form a digital competence scale (e.g., Internet skills scale, Technology/ICT Literacy, etc.), either for students’ (e.g., Lau & Yuen, 2014; van Deursen et al., 2016) or teachers’ digital skills (e.g., Siddiq et al., 2016; Touron et al., 2018).
Furthermore, much of the research on students’ DC concerns the examination of structural relationships between the components (e.g., Aesaert et al., 2015; Hatlevik et al., 2015; Schmid & Petko, 2019) or has been implemented outside the educational context, mainly focusing on the employment sector (e.g., Oberländer et al., 2020).
Table 1 selectively presents the scale size, components, and validation methods of previous quantitative studies that designed DC scales (either for students, teachers, or other individuals), in the context of higher, secondary, or primary education across different regions.
As depicted in Table 1, only a few studies have been validated on populations of undergraduate students and/or in European countries. Second, none of the cited studies has employed a partial least squares structural equation modeling (PLS-SEM) approach to identify or confirm a digital competence measurement scale, although PLS-SEM has been shown to be more reliable for confirmatory factor analysis than covariance-based (CB-SEM) approaches (Asyraf & Afthanorhan, 2013).
Meanwhile, several studies (Marusic & Viskovic, 2018; Suwanroj et al., 2017, 2018) have examined the structure of digital competence instruments by applying qualitative approaches (e.g., expert views and/or combined/review-based approaches). Recently, Perifanou and Economides (2019a, 2019b) proposed a comprehensive framework and a 56-item instrument to measure students’ DC. The suggested instrument is informed by and extends previous popular DC frameworks (All Aboard!, 2015; European Commission, 2019a; Fraillon et al., 2019; UK, 2019; UNESCO, 2018). Compared to previous models, this instrument meets today’s DC requirements by including skills related to social media and mobile use.
Materials and methods
Instrument
The initial instrument of this study (Perifanou & Economides, 2019a, 2019b) was composed of 56 items and four dimensions, namely (i) Access, Search and Find; (ii) Use, Store, Manage, Evaluate and Delete; (iii) Communicate, Collaborate, and Share; and (iv) Create, Apply, Modify, Combine, Solve and Protect. The items in the four dimensions considered new digital innovations (e.g., social media and smart devices), as well as ethical and responsible behavior.
For the needs of this study, some items were adjusted through rephrasing or adding explanatory comments and examples. Five experts in the field of Technology Enhanced Learning (TEL) reviewed the instrument’s items regarding their wording and quality, to minimize misperceptions. Then, after an initial PLS-SEM evaluation of the responses to the adjusted questionnaire, several items were removed due to low internal consistency scores, resulting in a 28-item instrument with six components; that is, the initial four dimensions were restructured into six components and the initial 56 items were reduced to 28. The proposed components are (1) Search, Find, Access (SFA); (2) Develop, Apply, Modify (DAM); (3) Communicate, Collaborate, Share (CCS); (4) Store, Manage, Delete (SMD); (5) Evaluate (EV); and (6) Protect (PR). The final instrument is presented in the Appendix. All items are measured on a 5-point Likert scale (1: strongly disagree to 5: strongly agree). The terms used in the questionnaire were explained to the participants as follows: “Smart device = smartphone, tablet, laptop, pc, camera, navigator, game console, smart TV, etc.; Object = document, picture, movie, software, app, etc.”.
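Although the paper does not prescribe a scoring procedure, per-component scores can be obtained by averaging each component’s Likert items. The following is a minimal sketch in Python/pandas; the item codes (sfa1, dam1, …) and their allocation to components are hypothetical placeholders, not the actual coding of the 28 Appendix items.

```python
import pandas as pd

# Hypothetical item-to-component mapping; the real 28-item allocation is given in the Appendix (Table 7).
COMPONENTS = {
    "SFA": ["sfa1", "sfa2", "sfa3", "sfa4", "sfa5"],
    "DAM": ["dam1", "dam2", "dam3", "dam4", "dam5"],
    "CCS": ["ccs1", "ccs2", "ccs3", "ccs4", "ccs5"],
    "SMD": ["smd1", "smd2", "smd3", "smd4", "smd5"],
    "EV":  ["ev1", "ev2", "ev3", "ev4"],
    "PR":  ["pr1", "pr2", "pr3", "pr4"],
}

def component_scores(responses: pd.DataFrame) -> pd.DataFrame:
    """Average each component's 5-point Likert items per respondent."""
    return pd.DataFrame(
        {name: responses[items].mean(axis=1) for name, items in COMPONENTS.items()}
    )
```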
Sample characteristics and data collection
During the period between January and February 2020, the DC questionnaire was distributed in written form to students in two different undergraduate university courses (e-Commerce and e-Business, Information Systems in Management), and in April 2020 it was sent out online in three postgraduate programmes (Information Systems, e-Business & Digital Marketing, Law & Economics) in Greece. The second part of the survey (April 2020) was conducted during the COVID-19 crisis and the school closure in Greece; hence, all participants were already attending emergency remote courses. The remote courses were conducted through synchronous video lectures via the Zoom platform, and the courses’ materials were uploaded to the Open eClass online platform for asynchronous education. Open eClass is an open-source integrated e-course management system compatible with the international standard Sharable Content Object Reference Model (SCORM).
The questionnaire items were measured on a five-point Likert scale from “Strongly disagree” to “Strongly agree”. The questionnaire also asked for some social and academic information (gender, age, experience in mobile and computer use, average grade in last semester, etc.). The total population that was invited to participate in the survey voluntarily and anonymously was 300 students.
All participants were asked to consent to their voluntary and anonymous participation in the study. It was not possible to identify any respondent, and all ethics standards were met according to the university’s internal committee. Several students did not complete the questionnaire, and after eliminating the invalid answers the final working sample was 156 students: 80 undergraduates and 76 postgraduates. The respondents’ socio-demographic characteristics are presented in Table 2.
Data analysis
Structural Equation Modelling (SEM) is considered one of the most important statistical developments in the social sciences (Hair et al., 2011). SEM comprehensively and efficiently models the relationships among multiple independent and dependent constructs (the structural model) simultaneously (Gefen et al., 2000; Hair et al., 2010). Moreover, SEM not only assesses the structural model but also evaluates the measurement model (Gefen et al., 2000, 2011). Researchers applying SEM can choose between a covariance-based analysis (CB-SEM) and partial least squares (PLS-SEM) (Gefen et al., 2000; Hair et al., 2011). Recently, researchers introduced methods that provide consistent PLS-SEM estimations that can be used complementarily or as an alternative to CB-SEM (Bentler & Huang, 2014; Dijkstra, 2014; Dijkstra & Henseler, 2015).
Contrary to previous studies in the literature that used mainly CB-SEM approaches, this study applied a hybrid PLS-SEM and CB-SEM approach to evaluate the suggested scale in terms of internal consistency, composite reliability, convergent validity, and discriminant validity. PLS-SEM was applied for the following reasons:
- According to the suggestions of Bentler and Huang (2014), Dijkstra (2014), and Dijkstra and Henseler (2015), who proved that PLS-SEM can consistently mimic common CB-SEM approaches, PLS-SEM is an appropriate approach to study and validate the structure of a model. In this study, the primary scale validation is based on a PLS-SEM CFA, mainly because of the non-normality observed in the data (Shapiro & Wilk, 1965), the small sample size (Hair et al., 2014), and the adequateness of the method compared to CB-based approaches, as suggested in Asyraf and Afthanorhan (2013) and Rigdon (2012).
- Furthermore, as recommended by Hair et al. (2011, p. 144), a PLS-SEM approach should be implemented if “the goal is predicting key target constructs or identifying key ‘driver’ constructs” or if the research is exploratory or an extension of an existing structural theory. In contrast, a CB-SEM approach should be chosen if “the goal is theory testing, theory confirmation or comparison of alternative theories”. Although many researchers focus on comparing the differences in model estimations between CB-SEM and PLS-SEM, the two methods are complementary rather than competitive.
Based on the above, our methodological approach comprised the following steps (an illustrative model-specification sketch follows the list):
i. A PLS-SEM CFA was applied to primarily test the model structure validation, using the SmartPLS software;
ii. A CB-based CFA replication was applied, using the Amos software, to further examine the factor loadings and model fit values;
iii. A second-order CFA was conducted, using the Amos software, to further validate the results and examine whether a broad latent factor of students’ DC is composed of the six distinct DC factors.
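As a rough open-source illustration of how the six-factor measurement model in these steps could be specified (the study itself used SmartPLS and AMOS), the model can be written in lavaan-style syntax and estimated with a Python package such as semopy. This is a sketch under assumptions: the item names are placeholders for the 28 Appendix items and the data file name is hypothetical.

```python
import pandas as pd
import semopy  # open-source SEM package, used here only as an illustrative stand-in

# Lavaan-style measurement model: each latent DC component is measured by its items.
# Item names are hypothetical placeholders for the 28 Appendix items.
MODEL_DESC = """
SFA =~ sfa1 + sfa2 + sfa3 + sfa4 + sfa5
DAM =~ dam1 + dam2 + dam3 + dam4 + dam5
CCS =~ ccs1 + ccs2 + ccs3 + ccs4 + ccs5
SMD =~ smd1 + smd2 + smd3 + smd4 + smd5
EV  =~ ev1 + ev2 + ev3 + ev4
PR  =~ pr1 + pr2 + pr3 + pr4
"""

data = pd.read_csv("sdicos_responses.csv")  # hypothetical file of per-student Likert responses
model = semopy.Model(MODEL_DESC)
model.fit(data)                  # maximum-likelihood estimation by default
print(model.inspect())           # factor loadings and their significance
print(semopy.calc_stats(model))  # chi-square, CFI, TLI, RMSEA, and other fit indices
```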
Finally, to examine any significant differences among students across the DC components, we applied non-parametric statistical methods: a Mann–Whitney test to examine gender differences, and Kruskal–Wallis tests to examine differences based on the students’ field of study and experience in computer use.
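For reference, both tests are available in standard statistical libraries. The sketch below uses Python’s scipy.stats; the data file and column names (gender, programme, SMD) are hypothetical placeholders for the per-student component scores and grouping variables.

```python
import pandas as pd
from scipy.stats import mannwhitneyu, kruskal

# 'df' is assumed to hold one row per student with component scores (e.g., 'SMD')
# plus grouping variables; column names are hypothetical.
df = pd.read_csv("sdicos_scores.csv")

# Gender differences on a component: Mann-Whitney U test (two independent groups).
male = df.loc[df["gender"] == "male", "SMD"]
female = df.loc[df["gender"] == "female", "SMD"]
print(mannwhitneyu(male, female, alternative="two-sided"))

# Field-of-study differences: Kruskal-Wallis H test (three or more groups).
groups = [g["SMD"].values for _, g in df.groupby("programme")]
print(kruskal(*groups))
```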
Results
Confirmatory factor analysis
The results of the PLS-SEM analysis suggest a good fit of the model based on the values of NFI = 0.667 and Chi-Square = 843.442, according to the defined criteria of acceptance (Byrne, 2010; Hair et al., 2010; Kline, 2011). The Root Mean Square Error of Approximation (RMSEA = 0.088) was higher than 0.08 but below 0.10, which is usually accepted as an adequate fit, since values between 0.05 and 0.10 are considered acceptable (Bandalos, 2018; Browne & Cudeck, 1992).
Also, the factor loadings were all valid (> 0.5) (Awang et al., 2015), and all Cronbach’s alpha values demonstrated internal consistency (Dijkstra & Henseler, 2015).
The bootstrapping results indicated that all t-values (> 1.96) and p-values (< 0.01) were acceptable and statistically significant. Composite reliability (CR) values indicate internal consistency (Gefen et al., 2000) and average variance extracted (AVE) values indicate convergent validity (Bagozzi & Yi, 1988; Chin, 2010; Fornell & Larcker, 1981), as depicted in Table 3.
As depicted in Table 4, the suggested students’ DC measurement model supports the discriminant validity between the constructs (Fornell & Larcker, 1981).
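For readers wishing to reproduce the checks behind Tables 3 and 4, the standard formulas are straightforward: CR = (Σλ)² / [(Σλ)² + Σ(1 − λ²)] for standardized loadings λ, AVE is the mean squared loading, and the Fornell–Larcker criterion requires the square root of each construct’s AVE to exceed its correlations with the other constructs. Below is a minimal Python sketch with illustrative numbers, not the paper’s values.

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum(lam))**2 / ((sum(lam))**2 + sum(1 - lam**2)) for standardized loadings."""
    lam = np.asarray(loadings)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(loadings)
    return (lam ** 2).mean()

# Hypothetical standardized loadings for one construct (e.g., SMD):
lam_smd = [0.72, 0.81, 0.78, 0.69]
print(composite_reliability(lam_smd), average_variance_extracted(lam_smd))

# Fornell-Larcker criterion: sqrt(AVE) of each construct should exceed its
# correlation with every other construct (values here are illustrative).
ave = {"SMD": 0.56, "PR": 0.61}
corr_smd_pr = 0.48
assert np.sqrt(ave["SMD"]) > corr_smd_pr and np.sqrt(ave["PR"]) > corr_smd_pr
```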
A CB-SEM approach was also applied, using the AMOS software and maximum likelihood estimation, to reinforce and compare the findings. The CB-SEM analysis validated the factor loadings of all items, although it indicated lower values. The approach revealed good results in terms of model fit: χ2/df = 2.02, probability level = 0.000, RMSEA = 0.080. However, the comparative fit index (CFI = 0.84) and the Tucker–Lewis index (TLI = 0.80) revealed values slightly lower than the suggested thresholds or only marginally acceptable (Bandalos, 2018; Browne & Cudeck, 1992; Carmines & McIver, 1981; Hoyle, 1995; Muthén & Muthén, 2012).
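As a side note on how such fit values relate to each other, RMSEA can be recovered from the chi-square statistic, the model degrees of freedom, and the sample size. The sketch below is purely illustrative: the degrees of freedom are a hypothetical value consistent with a 28-item, six-factor CFA (they are not reported in this excerpt), and some software divides by N rather than N − 1.

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1))); some tools use n instead of n - 1."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Illustrative only: recovering a chi-square from the reported ratio chi2/df = 2.02.
df_model, n = 335, 156          # df_model is a hypothetical value; n is the study sample size
chi2 = 2.02 * df_model
print(round(chi2 / df_model, 2), round(rmsea(chi2, df_model, n), 3))
```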
Table 5 illustrates the unstandardized and standardized parameter estimates; as depicted, the critical ratios (C.R.) of the constructs are above 1.96 and all estimates are statistically significant (p < 0.001) (Hair et al., 2010).
A second-order CFA was finally conducted via the AMOS software. The results indicated a good fit of the SDiCoS model (Bandalos, 2018; Muthén & Muthén, 2012): RMSEA = 0.080, χ2/df = 2.04, and the p-value is significant (p < 0.001). However, the incremental fit indices (TFI = 0.84) show values below 0.9, and the Hoelter values are below 200, indicating an insufficient sample size, mainly for a CB-based approach. Although the CFI (= 0.83) is below the 0.9 threshold, it remains above 0.8 and the model is considered acceptable, since for the RMSEA “a value less than 0.10 or of 0.08 (in a more conservative version)” indicates a good fit (Hu & Bentler, 1999). Overall, we can conclude that both the first- and second-order CB-SEM models are generally as valid as the PLS-SEM model, which indicated strong validity and reliability scores.
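In lavaan-style syntax, such a second-order model simply adds a broad DC factor loading on the six components. The sketch below mirrors the illustrative first-order specification from the Methods section and is again an assumption-laden stand-in, not the authors’ AMOS setup; item names and the data file are hypothetical.

```python
import pandas as pd
import semopy  # illustrative stand-in for AMOS (see the first-order sketch in Methods)

# First-order measurement part (placeholder item names) plus a broad second-order DC factor.
DESC = """
SFA =~ sfa1 + sfa2 + sfa3 + sfa4 + sfa5
DAM =~ dam1 + dam2 + dam3 + dam4 + dam5
CCS =~ ccs1 + ccs2 + ccs3 + ccs4 + ccs5
SMD =~ smd1 + smd2 + smd3 + smd4 + smd5
EV  =~ ev1 + ev2 + ev3 + ev4
PR  =~ pr1 + pr2 + pr3 + pr4
DC  =~ SFA + DAM + CCS + SMD + EV + PR
"""

data = pd.read_csv("sdicos_responses.csv")  # hypothetical response file
model2 = semopy.Model(DESC)
model2.fit(data)
print(semopy.calc_stats(model2))  # compare fit indices with the first-order model
```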
Student differences across the SDiCoS components
This study also examined potential differences between groups of students according to (i) gender, (ii) age, (iii) field of study (programme), and (iv) experience in computer use, in terms of mean scores across the six DC constructs, as defined in RQ2.
Interestingly, gender showed no significant differences. This finding agrees with recent reports regarding the digital skills of young adult females and males across Europe (European Commission, 2019b), although there is contradictory evidence as well (e.g., He & Zhu, 2017). Moreover, since previous studies (e.g., Burnett et al., 2010; Terzis & Economides, 2012; Tzafilkou et al., 2016) revealed significant gender differences in perception and acceptance of computer-related tasks, this study’s results are encouraging for the worldwide endeavor to eliminate the persistent gender gap in computing (European Commission, 2018b). However, similar studies of secondary education students (Hinostroza et al., 2015) revealed no gender differences in computer-related learning skills.
As presented in Table 6, age revealed significant associations (p < 0.05) with the SMD and PR factors. Students between 25 and 35 showed the highest levels in both constructs, while the youngest group (18–24) expressed the lowest scores. This result implies that younger, undergraduate students face difficulties or lack skills in protection and file-management tasks. It deserves serious consideration since, according to Eurostat (2020), younger Europeans (20–24) tend to use the Internet, text, and multimedia much more frequently than older groups (25–64); however, they might lack some essential ‘out-of-Internet’ or ‘out-of-social-media’ skills like file management and file/data protection.
Although computer experience was significantly associated with only one component (SMD), the field of study showed several significant associations with the components DAM, CCS, SMD, and PR. The postgraduate students in Digital Marketing expressed the highest scores across all the DC components, while the undergraduate programme of e-Commerce and e-Business showed the lowest values. However, most of the postgraduate students participated in the survey during the COVID-19 crisis; hence, future research should examine whether this situation affected their responses and caused the differences observed between the groups.
Overall, the comparative results of this study can be generalized since the participants reflect a representative sample of higher education students in Greece in terms of gender and age. Moreover, portions of different programmes (undergraduate and postgraduate) are considered in the study. However, different programmes (e.g., in different fields) or different regions might exhibit significant differences in the students’ characteristics. Hence, more research should be conducted on different student populations to reinforce and validate the findings.
Discussion, implications, and limitations
The main objective of this study (RQ1) was to develop and validate SDiCoS, a new students’ digital competence scale encompassing several digital skills essential to the pre-, during-, and post-pandemic context of ERE. The suggested model was based on a comprehensive instrument and framework designed by Perifanou and Economides (2019a, 2019b), which was informed by and extended previous DC frameworks (DIGCOMP, UNESCO, ESDF, ESCO, ICILS, etc.). The resulting six-factor, 28-item scale was validated using a hybrid CFA approach combining PLS-SEM and CB-SEM, using the SmartPLS and AMOS software. Results indicate that the PLS-SEM CFA produced valid values of construct validity and reliability and accepted model fit criteria, while the CB-SEM approach revealed a similar model fit (RMSEA = 0.08) but lower factor loadings. These findings are in accordance with Asyraf and Afthanorhan (2013), who argued that PLS-SEM is more appropriate for CFA with non-normally distributed data.
Compared to previous quantitative studies (e.g., Alarcón et al., 2020; Kong et al., 2019; Peart et al., 2020; Suwanroj et al., 2019; Touron et al., 2018), the present study is the only one presenting a hybrid approach in which both PLS-SEM and CB-SEM are implemented for CFA-based scale validation.
Furthermore, contrary to previous studies that suggested very short (e.g., Lee et al., 2015) or quite long (e.g., Peart et al., 2020; Touron et al., 2018) instruments, SDiCoS proposes a comprehensive model of six components and 28 items, providing a practical and easy-to-use instrument for future research on students’ DC. SDiCoS includes all the essential components derived from previous popular frameworks, adjusted to present technological trends. SDiCoS is a validated scale of students’ digital competence that considers all six important skills components: (1) Search, Find, Access (SFA); (2) Develop, Apply, Modify (DAM); (3) Communicate, Collaborate, Share (CCS); (4) Store, Manage, Delete (SMD); (5) Evaluate (EV); and (6) Protect (PR).
Previous scales of students’ digital competence either ignore important components such as ‘Protect’ (e.g., Elstad & Christophersen, 2017; Koc & Barut, 2016; Lau & Yuen, 2014; Lee et al., 2015; Siddiq et al., 2016; Suwanroj et al., 2019), or take a completely different approach by considering components such as “parental ICT attitude” (Aesaert et al., 2015), “language integration” (Hatlevik et al., 2015), and “Internet political activism” (Choi et al., 2017).
Furthermore, only a few scales have been validated for undergraduate university students (Elstad & Christophersen, 2017; Koc & Barut, 2016; Kuzminska et al., 2018; Lee et al., 2015; Suwanroj et al., 2019) or postgraduate university students (Choi et al., 2017).
Finally, the current quantitative study is the only one (among the scale development and validation studies) carried out in a South European country (Greece) that focuses on higher education students’ digital skills. Thus, the current study seeks to contribute to students’ DC awareness across different regions and towards the design of homogeneous students’ DC scales worldwide.
The SDiCoS scale is useful for revealing skill polarities and gaps in students’ DC among the examined components. For example, as described in the results, younger students expressed lower perceived skills in protection and file-management tasks, although they are more actively engaged in Internet and social media activities compared to older age groups of students.
SDiCoS would be useful to the following stakeholders:
a. Policymakers and decision-makers at national and international levels who are responsible for strategic decisions on education, digital technologies, employment, the economy, etc.;
b. Directors of formal and continuing education institutes who work on setting goals, measuring, and providing training and certification regarding their students’ digital competence;
c. Educators at educational institutes who design curricula and syllabi for formal and informal training;
d. Teachers, in service and in training, who would improve their digital competence and integrate digital technologies into their teaching practice;
e. Teachers who would become aware of their students’ digital competence needs and take appropriate actions;
f. Researchers on the use of digital technologies and on individuals’ digital competence and digital skills;
g. Instructional designers and educational institutions that plan to trace their teaching and learning strategies in the context of blended and online learning.
For example, SDiCoS could help policymakers who aim to identify students’ digital competence level to:
- Design and organize educational adjustments and reforms, such as the emergent shift to remote education during the COVID-19 period or the adjustments needed for a soft transition to, and/or maintenance of, blended and online learning. To successfully design this shift, policymakers should know the level of students’ (and teachers’) digital competence, among other issues;
- Design and financially support massive and specialized training on digital technologies to fight discrimination, the digital divide, and the non-inclusion of citizens with low digital competence, and to boost innovation, employability, and participation in the digital market and digital society (e.g., e-commerce, e-banking, e-government). Although 82% of European individuals aged 16 to 24 have basic or above-basic overall digital skills, only 60% of European individuals aged 25 to 64 have such skills (Eurostat, 2020). The suggested SDiCoS can be used to design short-term sessions or extra ICT training when needed, to assist young and older students in acquiring the basic digital skills that they potentially lack.
Furthermore, the validated SDiCoS can serve as a practical and useful internal tool for evaluating students’ perceived digital competence in higher and continuing education institutions, including their knowledge and skills on recent technological trends like social media and mobile use.
One main limitation of this study is the small sample size for the CFA. Although the sample size is sufficient for the PLS-SEM approach, further research is encouraged on larger populations in the future. Also, the COVID-19 crisis emerged during data collection. This situation might have affected the responses of the students who responded remotely due to the school closure, and further research should be conducted to explore the role of COVID-19 in the students’ perceived DC items. Furthermore, this study examined DC differences only with respect to gender, age, field of study, and computer experience. Future research should investigate other factors that may affect DC or examine ERE-specific components, like skills in remote synchronous collaboration and text-based online learning. Finally, it would be interesting to conduct future research at a later stage of the pandemic, to examine how and whether the students’ DCs have improved.
Conclusions
This study develops and validates the SDiCoS scale to measure students’ digital competence. The proposed scale takes into consideration recent technological trends and previous studies on DC frameworks and provides the conceptual basis for understanding the main DC components in the context of remote education. The generated six-factor scale is composed of the following DC components: (1) Search, Find, Access; (2) Develop, Apply, Modify; (3) Communicate, Collaborate, Share; (4) Store, Manage, Delete; (5) Evaluate; and (6) Protect.
Regarding RQ1, the validity of SDiCoS was tested through both PLS-SEM and CB-SEM methods. The PLS-SEM based CFA approach demonstrated the SDiCoS validity, resulting in high internal consistency and reliability, and accepted model fit criteria. A CB-SEM replication of the CFA and a second-order CFA were also conducted to complement, compare, and reinforce the findings.
Regarding RQ2, the statistical analysis indicated significant differences across the SDiCoS constructs between different groups of students, including their age, field of study and computer experience.
The SDiCoS model is usable for both undergraduate and postgraduate students in higher education and can be used to measure students’ digital competence across the main DC components, covering recently emerged technological trends like remote/online education, social media, smart devices, mobile use, and safety skills.
Availability of data and materials
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
References
Aesaert, K., Van Braak, J., Van Nijlen, D., & Vanderlinde, R. (2015). Primary school pupils’ ICT competences: Extensive model and scale development. Computers and Education. https://doi.org/10.1016/j.compedu.2014.10.021
Al Khateeb, A. A. M. (2017). Measuring digital competence and ICT Literacy: An exploratory study of in-service English language teachers in the context of Saudi Arabia. International Education Studies. https://doi.org/10.5539/ies.v10n12p38
Alarcón, R., Pilar Jiménez, E., & Vicente-Yagüe, M. I. (2020). Development and validation of the DIGIGLO, a tool for assessing the digital competence of educators. British Journal of Educational Technology. https://doi.org/10.1111/bjet.12919
All Aboard!. (2015). Towards a national digital skills framework for Irish higher education. Retrieved May 30, 2020 from https://www.teachingandlearning.ie/wpcontent/uploads/NF-2016-Towards-a-National-Digital-Skills-Framework-for-Irish-Higher-Education.pdf.
Asyraf, W. M., & Afthanorhan, B. W. (2013). A comparison of partial least square structural equation modeling (PLS-SEM) and covariance based structural equation modeling (CB-SEM) for confirmatory factor analysis. International Journal of Engineering Science and Innovative Technology (IJESIT), 2, 198–205.
Awang, Z., Afthanorhan, A., Mohamad, M., & Asri, M. A. M. (2015). An evaluation of measurement model for medical tourism research: The confirmatory factor analysis approach. International Journal of Tourism Policy, 6(1), 29–45. https://doi.org/10.1504/IJTP.2015.075141.
Bagozzi, R., & Yi, Y. (1988). On the evaluation of structural equation models. Journal of the Academy of Marketing Sciences, 16, 74–94. https://doi.org/10.1007/BF02723327
Bandalos, D. L. (2018). Measurement theory and applications for the social sciences. Guilford Publications.
Bentler, P. M., & Bonett, D. G. (1980). Significance tests and goodness of fit in the analysis of covariance structures. Psychological Bulletin. https://doi.org/10.1037/0033-2909.88.3.588
Bentler, P. M., & Huang, W. (2014). On components, latent variables, PLS and simple methods: Reactions to Ridgon’s rethinking of PLS. Long Range Planning, 47(3), 138–145.
Blayone, T. J. B., Mykhailenko, O., Kavtaradze, M., Kokhan, M., vanOostveen, R., & Barber, W. (2018). Profiling the digital readiness of higher education students for transformative online learning in the post-soviet nations of Georgia and Ukraine. International Journal of Educational Technology in Higher Education, 15(1), 37. https://doi.org/10.1186/s41239-018-0119-9
Bond, M., Bedenlier, S., Marín, V. I., et al. (2021). Emergency remote teaching in higher education: Mapping the first global online semester. International Journal of Educational Technology in Higher Education, 18, 50. https://doi.org/10.1186/s41239-021-00282-x
Browne, M. W., & Cudeck, R. (1992). Alternative ways of assessing model fit. Sociological Methods & Research, 21(2), 230–258. https://doi.org/10.1177/0049124192021002005
Byrne, B. M. (2010). Structural equation modeling with AMOS. New Jersey, USA: Lawrence Erlbaum Associates.
Burnett, M., Fleming, S., & Iqbal, S. (2010). Gender differences and programming environments: across programming populations. In Proceedings of the 2010 ACM-IEEE international symposium on empirical software engineering and measurement.
Carmines, E. G., & McIver, J. P. (1981). Analyzing models with unobserved variables: Analysis of covariance structures. In G. W. Bohrnstedt & E. F. Borgatta (Eds.), Social measurement: Current issues (pp. 65–115). Sage Publications Inc.
Chin, W. W. (2010). How to write up and report PLS analyses. In V. Esposito Vinzi, W. W. Chin, J. Henseler, & H. Wang (Eds.), Handbook of partial least squares: Concepts, methods and applications (pp. 655–690). Springer.
Choi, M., Glassman, M., & Cristol, D. (2017). What it means to be a citizen in the internet age: Development of a reliable and valid digital citizenship scale. Computers and Education, 107, 100–112. https://doi.org/10.1016/j.compedu.2017.01.002
Cullinan, J., Flannery, D., Harold, J., Lyons, S., & Palcic, D. (2021). The disconnected: COVID-19 and disparities in access to quality broadband for higher education students. International Journal of Educational Technology in Higher Education, 18, 26. https://doi.org/10.1186/s41239-021-00262-1
Dijkstra, T. K. (2014). PLS’ Janus Face—Response to Professor Rigdon’s ‘rethinking partial least squares modeling: In praise of simple methods.’ Long Range Planning, 47(3), 146–153.
Dijkstra, T. K., & Henseler, J. (2015). Consistent and asymptotically normal PLS estimators for linear structural equations. Computational Statistics and Data Analysis, 81(1), 10–23. https://doi.org/10.1016/j.csda.2014.07.008
Elstad, E., & Christophersen, K.-A. (2017). Perceptions of digital competency among student teachers: Contributing to the development of student teachers’ instructional self-efficacy in technology-rich classrooms. Education Sciences, 7(1), 27. https://doi.org/10.3390/educsci7010027
ESCO. (2019). Digital competencies, European skills, competences, qualifications and occupations. Retrieved February 19, 2020 from http://data.europa.eu/esco/skill/aeecc330-0be9-419f-bddb-5218de926004.
European Commission. (2018a). Digital Education Action Plan (2018a–2020). Retrieved November 15, 2019 from https://ec.europa.eu/education/education-in-the-eu/digital-education-action-plan_en.
European Commission. (2018b). Increase in gender gap in the digital sector—Study on Women in the Digital Age. Retrieved February 1, 2021 from https://ec.europa.eu/digital-single-market/en/news/increase-gender-gap-digital-sector-study-women-digital-age.
European Commission. (2019a). The Digital Competence Framework 2.0. Retrieved November 15, 2020 from https://ec.europa.eu/jrc/en/digcomp/digital-competence-framework.
European Commission. (2019b). Women in Digital. Retrieved September 10, 2020 from https://ec.europa.eu/digital-single-market/en/women-ict.
European Commission. (2020). European skills agenda for sustainable competitiveness, social fairness and resilience. Retrieved May 30, 2020 from https://ec.europa.eu/commission/presscorner/detail/en/ip_20_1196.
Eurostat. (2020). Individuals' level of digital skills. Retrieved May 30, 2020 and January 10, 2021 from https://ec.europa.eu/eurostat/.
Eynon, R., & Geniets, A. (2016). The digital skills paradox: How do digitally excluded youth develop skills to use the internet? Learning, Media and Technology, 41(3), 463–479. https://doi.org/10.1080/17439884.2014.1002845
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39–50.
Fraillon, J., Ainley, J., Schulz, W., Duckworth, D., & Friedman, T. (2019). IEA international computer and information literacy study 2018 assessment framework. Retrieved November 15, 2019 from https://www.iea.nl/sites/default/files/2019-05/IEA%20ICILS%202018%20Assessment%20Framework-Final.pdf.
Gefen, D., Rigdon, E. E., & Straub, D. (2011). An update and extension to SEM guidelines for administrative and social science research. MIS Quarterly, 35(2), 3–14.
Gefen, D., Straub, D. W., & Boudreau, M.-C. (2000). Structural equation modeling and regression guidelines for research practice. Communications of the Association for Information Systems, 4(7), 2–77.
Hair, J., Black, W., Babin, B., & Anderson, R. (2010). Multivariate data analysis: A global perspective. In P. P. Hall (Ed.), Multivariate data analysis: A global perspective (7th ed., Vol. 7). Pearson.
Hair, J. F., Sarstedt, M., Hopkins, L., & Kuppelwieser, V. G. (2014). Partial least squares structural equation modeling (PLS-SEM): An emerging tool in business research. European Business Review. https://doi.org/10.1108/EBR-10-2013-0128
Hair, J. F., Sarstedt, M., Ringle, C. M., & Mena, J. A. (2011). An assessment of the use of partial least squares structural equation modeling in marketing research. Journal of the Academy of Marketing Science, 40(3), 414–433. https://doi.org/10.1007/s11747-011-0261-6
Hatlevik, O. E., Guomundsdóttir, G. B., & Loi, M. (2015). Digital diversity among upper secondary students: A multilevel analysis of the relationship between cultural capital, self-efficacy, strategic use of information and digital competence. Computers and Education, 81, 345–353. https://doi.org/10.1016/j.compedu.2014.10.019
Hinostroza, J. E., Matamala, C., Labbé, C., Claro, M., & Cabello, T. (2015). Factors (not) affecting what students do with computers and internet at home. Learning, Media and Technology, 40(1), 43–63. https://doi.org/10.1080/17439884.2014.883407
Hoyle, R. H. (1995). The structural equation modeling approach: Basic concepts and fundamental issues. In R. H. Hoyle (Ed.), Structural equation modeling: Concepts, issues, and applications. Sage.
Hu, L.-T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6(1), 1–55. https://doi.org/10.1080/10705519909540118.
Kim, M., & Choi, D. (2018). Development of youth digital citizenship scale and implication for educational setting. Journal of Educational Technology & Society, 21(1), 155–171.
Koc, M., & Barut, E. (2016). Development and validation of New Media Literacy Scale (NMLS) for university students. Computers in Human Behavior, 63, 834–843. https://doi.org/10.1016/j.chb.2016.06.035
Kong, S. C., Wang, Y. Q., & Lai, M. (2019). Development and validation of an instrument for measuring digital empowerment of primary school students. In CompEd 2019—Proceedings of the ACM conference on global computing education (pp. 172–177). Association for Computing Machinery, Inc. https://doi.org/10.1145/3300115.3309523.
Kuzminska, O., Mazorchuk, M., Morze, N., Pavlenko, V., & Prokhorov, A. (2018). Digital competency of the students and teachers in Ukraine: Measurement, analysis, development prospects. In CEUR workshop proceedings (Vol. 2104, pp. 366–379). CEUR-WS.
Lau, W. W. F., & Yuen, A. H. K. (2014). Developing and validating of a perceived ICT literacy scale for junior secondary school students: Pedagogical and educational contributions. Computers and Education, 78, 1–9. https://doi.org/10.1016/j.compedu.2014.04.016
Lee, L., Chen, D. T., Li, J. Y., & Lin, T. B. (2015). Understanding new media literacy: The development of a measuring instrument. Computers and Education, 85, 84–93. https://doi.org/10.1016/j.compedu.2015.02.006
Marusic, T., & Viskovic, I. (2018). ICT competencies of students. Journal ITRO, 115(56), 13.
Meng, L., Qiu, C., & Boyd-Wilson, B. (2019). Measurement invariance of the ICT engagement construct and its association with students’ performance in China and Germany: Evidence from PISA 2015 data. British Journal of Educational Technology, 50(6), 3233–3251. https://doi.org/10.1111/bjet.12729
Mengual-Andrés, S., Roig-Vila, R., & Mira, J. B. (2016). Delphi study for the design and validation of a questionnaire about digital competences in higher education. International Journal of Educational Technology in Higher Education, 13, 12. https://doi.org/10.1186/s41239-016-0009-y
Muthén, L. K., & Muthén, B. O. (2012). Mplus user’s guide (7th ed.). Los Angeles, CA: Muthén & Muthén.
Oberländer, M., Beinicke, A., & Bipp, T. (2020). Digital competencies: A review of the literature and applications in the workplace. Computers and Education, 146, 103752. https://doi.org/10.1016/j.compedu.2019.103752
OECD. (2019). OECD Skills Outlook 2019. Retrieved November 15, 2019 from https://www.oecd-ilibrary.org/sites/df80bc12-en/index.html?itemId=/content/publication/df80bc12-en.
Park, S., & Weng, W. (2020). The relationship between ICT-related factors and student academic achievement and the moderating effect of country economic indexes across 39 countries: Using multilevel structural equation modelling. Educational Technology & Society, 23(3), 1–15.
Peart, M. T., Gutiérrez-Esteban, P., & Cubo-Delgado, S. (2020). Development of the digital and socio-civic skills (DIGISOC) questionnaire. Educational Technology Research and Development, 68, 3327–3351. https://doi.org/10.1007/s11423-020-09824-y
Perifanou, M., & Economides, A. (2019a). An instrument for the digital competence actions framework. In ICERI2019 Proceedings, vol. 1, pp. 11139–11145. https://doi.org/10.21125/iceri.2019.2750.
Perifanou, M., & Economides, A. (2019b). The digital competence actions framework. In ICERI2019 Proceedings, vol. 1, pp. 11109–11116. https://doi.org/10.21125/iceri.2019.2743.
Rigdon, E. E. (2012). Rethinking partial least squares path modeling: In praise of simple methods. Long Range Planning, 45, 341–358. https://doi.org/10.1016/j.lrp.2012.09.010
Schmid, R., & Petko, D. (2019). Does the use of educational technology in personalized learning environments correlate with self-reported digital skills and beliefs of secondary-school students? Computers and Education, 136, 75–86. https://doi.org/10.1016/j.compedu.2019.03.006
Shapiro, S. S., & Wilk, M. B. (1965). An analysis of variance test for normality (Complete Samples). Biometrika, 52(3/4), 591. https://doi.org/10.2307/2333709
Siddiq, F., Scherer, R., & Tondeur, J. (2016). Teachers’ emphasis on developing students’ digital information and communication skills (TEDDICS): A new construct in 21st century education. Computers and Education, 92–93, 1–14. https://doi.org/10.1016/j.compedu.2015.10.006
Suwanroj, T., Leekitchwatana, P., Pimdee, P. (2017). Investigating digital competencies for undergraduate students at Nakhon Si Thammarat Rajabhat University. In DRLE 2017 The 15th international conference faculty of industrial education and technology King Mongkut’s Institute of Technology Ladkrabang (vol. 27, No. 2, pp. 11–19).
Suwanroj, T., Leekitchwatana, P., & Pimdee, P. (2019). Confirmatory factor analysis of the essential digital competencies for undergraduate students in Thai higher education institutions. Journal of Technology and Science Education, 9(3), 340–356. https://doi.org/10.3926/JOTSE.645
Suwanroj, T., Leekitchwatana, P., Pimdee, P., Thiyaporn, K., & Thanongsak, S. (2018). Development of digital competency domains for undergraduate students in Thailand. International Journal of the Computer, the Internet and Management, 27(2).
Terzis, V., & Economides, A. A. (2012). Computer based assessment: Gender differences in perceptions and acceptance. Computers in Human Behavior, 27(6), 2108–2122.
Tømte, C., & Hatlevik, O. E. (2011). Gender-differences in self-efficacy ICT related to various ICT-user profiles in Finland and Norway. How do self-efficacy, gender and ICT-user profiles relate to findings from PISA 2006. Computers and Education, 57(1), 1416–1424.
Tondeur, J., Aesaert, K., Pynoo, B., van Braak, J., Fraeyman, N., & Erstad, O. (2017). Developing a validated instrument to measure preservice teachers’ ICT competencies: Meeting the demands of the 21st century. British Journal of Educational Technology, 48(2), 462–472. https://doi.org/10.1111/bjet.12380
Touron, J., Martin, D., Navarro, E., Pradas, S., & Inigo, V. (2018). Construct validation of a questionnaire to measure teachers’ digital competence (TDC). Revista Española De Pedagogía, 76(269), 25–54. https://doi.org/10.22550/rep76-1-2018-10
Tzafilkou, K., Protogeros, N., Charagiannidis, C., & Koumpis, A. (2016). Gender-based behavioral analysis for end-user development and the ‘RULES’ attributes. Education and Information Technologies, 22, 1–42.
UK. (2019). National standards for essential digital skills. Retrieved November 15, 2019 from https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/796596/National_standards_for_essential_digital_skills.pdf.
UNESCO. (2018). A global framework of reference on digital literacy for indicator 4.4.2. Information Paper, 51(51), 1–146. Retrieved June, 2020 from http://uis.unesco.org/sites/default/files/documents/ip51-global-framework-reference-digital-literacy-skills-2018-en.pdf.
van Deursen, A. J. A. M., Helsper, E. J., & Eynon, R. (2016). Development and validation of the Internet Skills Scale (ISS). Information Communication and Society, 19(6), 804–823. https://doi.org/10.1080/1369118X.2015.1078834
We Are Social & Hootsuite. (2020). Digital 2020: Global digital overview. Global Digital Insights, 247. Retrieved May 30, 2020 from https://hootsuite.com/en-gb/resources/digital-2020.
Yu, Z. (2021). The effects of gender, educational level, and personality on online learning outcomes during the COVID-19 pandemic. International Journal of Educational Technology in Higher Education, 18, 14. https://doi.org/10.1186/s41239-021-00252-3
Acknowledgements
Not applicable.
Funding
No funding was received.
Author information
Contributions
MP and AE designed the proposed instrument. KT statistically validated and adjusted the suggested instrument. All authors reviewed the related literature. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix
See Table 7.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Tzafilkou, K., Perifanou, M. & Economides, A.A. Development and validation of students’ digital competence scale (SDiCoS). Int J Educ Technol High Educ 19, 30 (2022). https://doi.org/10.1186/s41239-022-00330-0
Keywords
- Adult learning
- Digital competence
- Digital skills
- Scale development
- Twenty-first century skills