
An associational study: preschool teachers’ acceptance and self-efficacy towards Educational Robotics in a pre-service teacher training program

Abstract

Purpose

This study explores pre-service preschool teachers’ acceptance and self-efficacy towards Educational Robotics (ER) during a university course, and also examines their perceptions of the course.

Methodology

This is a one-group intervention study with an associational research design that includes both quantitative and qualitative research methods: two pre-questionnaires and two post-questionnaires on pre-service teachers’ acceptance and self-efficacy towards ER, and participants’ training journals.

Findings

The results show that pre-service teachers’ acceptance and self-efficacy towards ER improved after they completed the ER teacher training course. There was a significant difference between the start and the end of the ER training in the pre-service teachers’ acceptance of ER in the areas of perceived ease of use, enjoyment and attitudes, and in their self-efficacy. The findings based on the training journals show that participants positively evaluated the course. The participants also provided suggestions for improving it, such as additional training sessions, resources and time for experimentation.

Value

Our study reveals the impact of an ER training program and showcases the importance of integrating ER in pre-service teachers’ education.

Introduction

Educational robotics (ER) is an educational tool (Frangou et al., 2008) that provides new and extended possibilities for learning (Shin & Kim, 2007). As previous literature has indicated, students can learn robotics, learn by robotics, and learn with robotics (Gaudiello & Zibetti, 2016). Learning robotics refers to students becoming familiarized with technology, engineering, and robotics. ER has many benefits in relation to engineering and programming skills (Barker & Ansorge, 2007; Nugent et al., 2009). Learning by robotics means that learners acquire knowledge of a certain subject through robotics, and thus acquire multidisciplinary benefits in mathematics (Barker & Ansorge, 2007; Hussain et al., 2006; Nugent et al., 2009), science (Barker & Ansorge, 2007) and other disciplines. Students learn with robotics when the learning and teaching process is supported by humanized robots that act as assistants. Integrating ER into the school curriculum should be promoted given that it benefits students’ learning across multiple disciplines, and facilitates the acquisition of twenty-first century skills, such as collaboration (Eguchi, 2013), computational thinking (Lee et al., 2011) and problem-solving (Highfield, 2010). Teachers influence the way ER is received by their pupils (Hussain et al., 2006) and therefore play an important role in its implementation in the classroom and integration in the curriculum. Providing teachers with specialized training programs in ER could contribute to ER technologies being introduced into the teaching and learning process. Moreover, student teachers’ responses to ER, such as their perceptions and self-efficacy, could be used to enrich current ER training initiatives.

Pre-service and in-service teachers’ perceptions of ER and the difficulties they encounter in ER classroom implementation are examined in several recent studies. For example, Karypi (2018) put in-service teachers’ perceptions in context by researching their views on ER integration and implementation in schools. Aksu and Durak (2019) also studied in-service teachers’ views on robotics but in the context of robotic tournaments, while Çiftçi et al. (2020) explored pre-service early-childhood teachers’ views on STEM education and their STEM teaching practices. Prior to these studies, Santos et al. (2016) researched in-service teachers’ beliefs, attitudes, and intention to use robotics in their future teaching, while Khanlari (2013) explored in-service teachers' perceptions of the effects of robotics on students’ personal skills and abilities. According to these studies, teachers hold positive views of ER and its impact on students’ learning. Teachers perceive robotics to have positive effects on students' lifelong learning skills (Khanlari, 2016), they consider that most students improve their skills, such as problem-solving, collaboration and creativity, through ER, and acquire engineering and programming skills (Schina et al., 2020; Theodoropoulos et al., 2017). Teachers also perceive that ER promotes students’ curiosity and engages their attention (Aksu & Durak, 2019). In addition, they consider that ER fosters positive attitudes towards STEM education, encourages independent and active learning, facilitates teaching, and provides opportunities for the development of students’ cognitive, social and communication skills (Karypi, 2018). However, there is only limited research on teachers’ self-efficacy towards ER, as current research mostly focuses on students’ self-efficacy (Durak et al., 2019; Jäggle et al., 2020; Latikka et al., 2019; Leonard et al., 2016; Tsai et al., 2021) rather than that of teachers. Interestingly, Tsai et al. (2021) and Jäggle et al. (2020) propose developing tools for evaluating self-efficacy, for assessing students’ self-efficacy for learning robotics and measuring students’ self-efficacy in educational robotics activities, respectively. Future research should move in this direction, reinforcing teachers’ self-efficacy and measuring it, particularly in ER teacher training programs, in order to improve the structure and content of the training activities.

The present study addresses the need for conducting further research into teachers’ perceptions and self-efficacy towards ER in the context of teacher training. To be more precise, this study examines whether pre-service preschool teachers’ acceptance and self-efficacy towards ER change after they participate in the training program. The study also explores the pre-service teachers’ perceptions of the training program. The research questions are formulated as follows:

  • RQ1: To what extent did the ER teacher training program have an effect on pre-service teachers’ acceptance of ER?

  • RQ2: To what extent did the ER teacher training program have an effect on pre-service teachers’ self-efficacy in ER?

  • RQ3: What are the participants’ perceptions of the ER training program?

By addressing these research questions, the present work contributes to the research and education community in the following three ways:

  • The study places pre-service preschool teachers at the center of ER teacher training research. As pointed out in our review (Schina, Esteve-González, et al., 2020), there are very few training programs held exclusively for preschool teachers (Bers et al., 2002, 2013; Caballero-González & Muñoz-Repiso, 2017). This training program is tailored to the specific needs of preschool teachers. Our study therefore addresses a gap in the present literature and could enrich the work of other researchers.

  • The study looks at pre-service teachers’ acceptance, self-efficacy, and perceptions throughout an ER training program. These variables are decisive when it comes to teachers’ classroom implementation of ER activities and ER curriculum integration. Our findings could be of use to policy makers who are considering implementing ER teacher training programs.

  • The present teacher training program could serve as an example of teacher education that could be replicated at other universities and teacher training institutions. Therefore, it could be of particular interest for institutions and instructors that intend to implement ER teacher training programs.

In the following section, the theoretical framework of our work will be presented in relation to teachers’ ER acceptance and self-efficacy. Then, the methodology of the study will be explained together with the context, population, training description, instruments and data analysis. The findings will be presented in the results section (Sect. 4). Finally, we compare our findings with the research results in the current literature in the discussion and conclusion section (Sect. 5).

Theoretical framework

In order to promote technology in education, it is recommended that specialized training be implemented. Teachers need to receive training to ensure that they can integrate technology into teaching in meaningful ways to support K-12 student learning (Casey et al., 2020). Effective training in technology integration focuses on content (including technology knowledge and pedagogy-related knowledge and skills), gives teachers opportunities for ‘‘hands-on’’ work, and addresses teachers’ needs (Hew & Brush, 2007). In the case of Digital Technologies (DT), such as robotic kits or robotic toys, apart from knowledge and experience, teachers should have a positive predisposition towards the new resources before teaching the classes in order to transmit positive impressions and enthusiasm to the learners. Hew and Brush (2007) recommend implementing professional development sessions to improve teachers’ perceptions of technological tools. Among teachers’ perceptions, this study will focus on teachers’ acceptance and self-efficacy towards ER as they are both crucial for teachers’ implementation of ER activities in their classroom teaching, and are not sufficiently researched in the current literature as yet.

Regarding teachers’ acceptance of robots in education, Chevalier et al. (2016) point out that teachers’ acceptance depends on the time they need to become acquainted with the robots and the robots’ appropriateness for the curriculum. Chevalier et al. (2016) also highlight that if teachers are provided with more training opportunities and pedagogical materials that can be used directly and are linked to the curriculum, their perceptions of usability of robots improve and therefore their acceptance of robotics in education also increases. Moreover, Conti et al. (2017) suggest that teachers would be more positive and accepting of robots in education if robots were cheaper. Similarly, according to the research of Park and Han (2016), teachers’ acceptance of robot-assisted learning environments mainly depends on the price of the robot. The teachers’ acceptance of robotics and other technological resources can be measured through different research instruments. The Technology Acceptance Model (TAM) (Davis, 1989) has been widely used in educational technology contexts, including ER; for example, it has been used to analyze teachers’ responses to open-ended questions to identify and determine their views regarding the perceived usefulness and perceived ease-of-use of floor-robots as a classroom technological tool (Casey et al., 2020). It has also been used to examine Computer Science teachers’ perceptions, beliefs, and attitudes on Computational Thinking (Fessakis & Prantsoudi, 2019). Subsequent to the Technology Acceptance Model (TAM) and its derived variations, the literature suggests the Unified Theory of Acceptance and Use of Technology (UTAUT) (Venkatesh et al., 2003) to measure teachers’ perceptions because this model integrates the previous TAM variations. Conti et al. (2017) applied the Unified theory of Acceptance and Use of Technology model to study the factors that may influence the teachers’ decision to use a robot as an instrument in their teaching practice. 
Zacharia et al. (2015) developed the Simulation Acceptance Model (SAM) to address the need for an instrument for researching teachers' beliefs, attitudes, and intentions to use simulations for educational purposes. Santos et al. (2016) used SAM to assess teachers’ beliefs, attitudes, and intentions to use the Lego Mindstorms software in their teaching. Later, Park and Han (2016) developed a variation of the Technology Acceptance Model, called the Robot Service Acceptance Model (RSAM), that is specialized in examining teachers’ views on robot-assisted learning environments with a cloud service platform.

As far as teachers’ self-efficacy is concerned, self-efficacy towards Digital Technologies (DT) and their classroom integration has long been an issue of interest in the field of education. Russell and Bradley (1997) expressed their concern regarding teachers’ lack of self-efficacy in DT, pointing out that “there is considerable evidence to suggest that schoolteachers in many countries are not confident in the use of computers”. To be more precise, Jones (2004) related this lack of teachers’ self-efficacy to their lack of competence in DT. To achieve high levels of self-efficacy in digital technologies, teachers’ competence should be improved, and this can be achieved through teacher training (Jones, 2004). A limited number of studies have been carried out on self-efficacy in ER over the last decade (Hamner et al., 2016; Hodges et al., 2016; Jaipal-Jamani & Angeli, 2017; Liu et al., 2010; Santos et al., 2016). Their findings suggest that teachers’ self-efficacy for teaching with robotics can be improved with an ER training program (Hamner et al., 2016; Jaipal-Jamani & Angeli, 2017; Liu et al., 2010). Interestingly, Hodges et al. (2016) found that teachers had high levels of self-efficacy towards the implementation of the new problem-based science curriculum throughout the entire professional development program. The results of the previous research are promising regarding the effect of training on teachers’ self-efficacy in ER. However, the studies’ limited sample sizes call into question the reliability and generalizability of their outcomes.

Methodology

A one-group intervention study (Creswell & Guetterman, 2019) was used as we aimed to examine the relationship between participants’ acceptance and self-efficacy towards ER and the change in their perceptions as a result of an ER teacher training program. The results of this quantitative study can provide insights into other, similar situations and cases and therefore assist in their interpretation (Cohen et al., 2007). The study was conducted using an associational design (Krause, 2018) to collect data. Associational research is appropriate for dealing with many variables and studying their relationships and differences. In our study, there were two quantitative instruments (pre-post tests on acceptance and self-efficacy) and one qualitative technique (training journals on perceptions). These were applied in parallel within a short time during one university term. More information on the data collection instruments is provided in Sect. 3.2. (Research Instruments). The quantitative and qualitative results were analyzed separately, and the findings answer different research questions that are interpreted in the conclusions section of this paper (see Sect. 5). Our study measures the impact of an intervention. The ER teacher training program is evaluated in terms of participants’ acceptance and self-efficacy towards ER and participants’ perceptions.

Context, population and training description

The research was conducted in the framework of the university course entitled “Teaching and Learning of the Experimental, Social and Mathematical Sciences III”, which is part of the degree in Preschool Education at the University of Rovira i Virgili. The course is aimed at 4th-year university students and carries a total of 6 ECTS credits. The present research study took place in February, March, and April 2020 in the framework of the research project “INTROBOT” and offered participants a 6-h training program in ER that was delivered both on-site and online.

The population of the research study consisted of 90 pre-service preschool teachers. The average age of the participants was 22.9 (SD = 1.985). All pre-service preschool teachers in our population had previously carried out teaching practice as part of their university studies. The demographic profile of the participants is shown in Table 1. The convenience sampling technique was used as it is a fast and economical way of sampling that allows easy access to available participants; however, it does not yield a representative sample of the target population (Cohen et al., 2007).

Table 1 Participants’ demographic profile

The ER training program consisted of three sessions: the first two took place on the University premises during the last week of February and the first week of March 2020, while the third session took place asynchronously online in the first week of April 2020 due to the COVID-19 pandemic (Table 2). The training was designed based on the constructivist learning approach and project-based learning. In the first session the pre-service teachers were introduced to Educational Robotics. They were presented with the most widespread educational robotics resources, especially the ones used at preschool level. In addition, the pre-service teachers were introduced to programming and to concepts such as algorithms, sequencing and debugging, and the instructor presented the definition of Computational Thinking (CT) as stated by Wing (2011). After this brief theoretical introduction, the Blue-bot robotic toy and its functions were presented to the pre-service teachers. They then had the chance to experiment with this resource in groups, carrying out several scaffolded programming tasks and debugging challenges set by the instructor. After that, the pre-service teachers experimented with six different Blue-bot classroom projects and materials by carrying out, in groups, interdisciplinary activities that addressed socio-economic issues and the protection of the natural habitat. Through these projects, they became familiarized with the interdisciplinary application of the Blue-bot robotic toy in preschool education and with the instructional materials required for implementing it. The second session of the training provided the pre-service teachers with guidelines on Blue-bot robotic toy classroom implementation activities and on the creation of instructional materials.
After receiving the guidelines, the pre-service teachers formed groups and were asked to brainstorm on a Blue-bot project for preschool pupils on the following topic: “Vegetation and/or Wildlife in the region of Catalonia in Spain”. For the third training session, the pre-service teachers had to create a project on the above-mentioned topic, including a lesson plan and the teaching materials required for its implementation. In addition, they had to prepare a video presentation of their project in which they presented the learning objectives of their lesson plan, the teaching procedure, a description of the activities and the instructional materials elaborated for the purpose of the given lesson plan. The research team set a month’s interval between the second and the third session so that the pre-service teachers had enough time to work on the Blue-bot project and presentation. The third and last session took place online due to the COVID-19 pandemic. In this session, the pre-service teachers watched the other groups’ presentations asynchronously and evaluated them through an online questionnaire that the research team had elaborated based on evaluation criteria associated with the learning objectives, lesson plan description, teaching materials and a general evaluation. Apart from evaluating the other groups, the students had to complete a self-evaluation of their own work. The final grade was based on a 360° evaluation, that is, the average of the self-evaluation, peer evaluation and teacher evaluation.

Table 2 Training content and research instruments

Research instruments

For the purpose of this research study, the pre-service preschool teachers who participated in the training sessions completed the following questionnaires (see Table 2 above):

  • A prequestionnaire (Q1_pre) and postquestionnaire (Q1_post) on the acceptance of ER, quantitative data.

  • A prequestionnaire (Q2_pre) and postquestionnaire (Q2_post) on self-efficacy for teaching robotics, quantitative data.

  • A journal on their perceptions of the training, qualitative data.

The first questionnaire (Q1_pre and Q1_post) was adapted from the TAM Diagnostic instrument (Davis, 1989) and, more precisely, from the Spanish version “Instrumento de diagnóstico del TAM” (Cabero & Perez, 2018). It is structured around five sections and uses a 7-point Likert scale ranging from Totally Disagree to Totally Agree. There are 15 items in the questionnaire, which are organized in five dimensions as follows (each dimension is the average of its items, see Table 5 in Appendix): four items on ER usefulness (U1–U4), three items collecting information on ER ease of use (F1–F3), three items on ER enjoyment (D1–D3), three items on attitudes towards ER use (A1–A3), and two items on intention to use it in the future (I1–I2). The questionnaire items are provided in the Appendix in the original language (Spanish). The main sections of the prequestionnaire (Q1_pre) and postquestionnaire (Q1_post) are exactly the same; however, the prequestionnaire (Q1_pre) includes some additional demographic questions that collect supplementary information on the research sample. In the second questionnaire (Q2_pre and Q2_post) there are six items that collect information on the self-efficacy of pre-service teachers in relation to their ability to make efficient use of ER in the classroom as a teaching resource (Q1–Q6). This questionnaire was adapted from the Self-efficacy for Teaching Robotics Questionnaire in the research study of Jaipal-Jamani and Angeli (2017) and applies a 5-point Likert scale ranging from Totally Disagree to Totally Agree. The self-efficacy value is the sum of the 6 items (see Table 6 in Appendix). The questionnaire items are provided in the Appendix in the original language (Spanish).
Finally, to gain an insight into pre-service teachers’ perceptions of the training sessions delivered and their perceptions of ER as a teaching resource in preschool education, the participants were asked to complete a journal after each of the three training sessions following the instructors’ guidelines. The Q1_pre and Q2_pre questionnaires were completed at the beginning of the first training session in week 1. The Q1_post and Q2_post questionnaires were completed at the end of the third training session in week 6 to study whether the training program had had an effect on pre-service teachers’ acceptance and self-efficacy towards ER. The participants were asked to complete their training journals after each session in week 1, week 2 and week 6 because the objective was to collect feedback from the participants on each session held.
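The scoring scheme described above (dimension scores as item averages for the acceptance questionnaire, a summed score for the self-efficacy questionnaire) can be sketched in a few lines. This is an illustrative sketch only: the function names and the dictionary-based data layout are our assumptions, not part of the authors' actual analysis pipeline.

```python
# Hypothetical scoring helpers mirroring the scheme described above:
# each TAM dimension is the average of its items (7-point scale),
# and the self-efficacy score is the sum of the six items (5-point scale).

TAM_DIMENSIONS = {
    "usefulness": ["U1", "U2", "U3", "U4"],
    "ease_of_use": ["F1", "F2", "F3"],
    "enjoyment": ["D1", "D2", "D3"],
    "attitudes": ["A1", "A2", "A3"],
    "intention": ["I1", "I2"],
}

def tam_dimension_scores(responses):
    """responses: dict mapping item codes (e.g. 'U1') to 1-7 Likert values."""
    return {dim: sum(responses[item] for item in items) / len(items)
            for dim, items in TAM_DIMENSIONS.items()}

def self_efficacy_score(responses):
    """responses: dict mapping 'Q1'..'Q6' to 1-5 Likert values."""
    return sum(responses[f"Q{k}"] for k in range(1, 7))
```

Keeping the item-to-dimension mapping in one place makes it easy to verify that each dimension averages exactly the items listed in the Appendix tables.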

Data analysis

All data from the questionnaires were transferred to SPSS version 26.0 and analyzed using descriptive statistics. As explained above, these instruments had been shown in previous research to be valid tools for measuring the desired constructs. Nevertheless, Cronbach's alpha was calculated for each dimension of the two questionnaires, for both the pre- and post-questionnaire data (Cabero & Ruiz, 2018). Although some researchers hold that arithmetic operations cannot be performed on Likert-scale items, other experts (Jamieson, 2004; Sousa & Rojjanasrirat, 2010) affirm that if there is an adequate sample size (at least 5–10 observations per group) and if the data are normally or nearly normally distributed, parametric statistics can be used with Likert-scale ordinal data. Furthermore, Norman (2010) provided evidence that parametric tests can be used with data from Likert scales, and generally give more robust results than nonparametric tests (Sullivan & Artino, 2013). Thus, the mean and standard deviation (SD) were used as descriptive statistics, a paired-samples t-test was used for comparing pre- and post-test results, and the effect size was calculated (see Table 7 in Appendix), as the sample size was close to the minimum (Bujang et al., 2018). We used an enumeration process to carry out a content analysis of the qualitative data collected from pre-service teachers’ journals. The enumeration process counts categories and the frequencies of codes, analysis units, terms, words or ideas (Cohen et al., 2007). The content was analyzed on two levels: descriptive and inferential. Relationships among qualitative data were explored by tabling the frequencies and percentages of occurrences of categories (tabulation) and examining their connections (cross-tabulation). The content analysis was carried out by two coders. The coders decided together on the codes to be used in the analysis and constructed the analysis categories. Then, the content was coded, and data were categorized in sequential order. Inconsistencies between the two coders were discussed until a consensus was reached for any differences in categorizing, and 100% agreement was achieved.
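For readers who want to reproduce the reliability analysis outside SPSS, Cronbach's alpha can be computed directly from item-level data with the standard formula: alpha = k/(k-1) × (1 − Σ item variances / variance of the summed scale). The following is a minimal NumPy sketch under that textbook formula, not the authors' SPSS procedure:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a 2-D array: rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
```

As a sanity check, perfectly consistent items (every respondent answers all items identically) yield alpha = 1, while items that disagree systematically push alpha towards or below zero.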

Results

The results are presented in relation to the three research questions.

  • RQ1: To what extent did the ER teacher training program have an effect on pre-service teachers’ acceptance of ER?

First, the Cronbach's alpha test was run in order to measure the internal consistency or reliability of the questionnaire on the acceptance of ER (Q1) (Cohen et al., 2007). The test was run twice, once for the prequestionnaire and once for the postquestionnaire. The Cronbach's alpha for each dimension in the pre-test was: α (ER usefulness) = 0.885; α (ER ease of use) = 0.867; α (ER enjoyment) = 0.859; α (attitudes towards ER) = 0.687; and α (intention to use) = 0.889. The Cronbach's alpha for all items of the prequestionnaire was 0.890. For the postquestionnaire: α (ER usefulness) = 0.890; α (ER ease of use) = 0.798; α (ER enjoyment) = 0.905; α (attitudes towards ER) = 0.631; and α (intention to use) = 0.962, with a total of α = 0.911, which indicates a high level of internal consistency, except for the attitudes scale. These figures are in line with the results of Cabero and Perez’s (2018) research with a sample of 274 students.

Descriptive statistics conducted on the data from the pre-service teachers’ questionnaires showed an improvement in their acceptance of ER after taking part in the ER teacher training, based on the data from the prequestionnaire (M = 89.54, SD = 10.28) and postquestionnaire (M = 93.76, SD = 10.07). The questionnaire items use a 7-point Likert scale, and among them there is one negatively worded item (A2—I feel bored when I use the Blue-Bot). In the data analysis, the scoring scale was reversed for this specific item. Pre-service teachers’ acceptance improved in all questionnaire items without exception (Fig. 1). The two-tailed paired-sample t-test showed that there is a statistically significant difference between the prequestionnaire and the postquestionnaire in 7 out of the 15 items (see Table 7 in Appendix). First, the differences between the prequestionnaire and postquestionnaire are statistically significant (95% confidence level) in the ease-of-use section (F1—It is easy to use the Blue-bot, F2—Learning how to use the Blue-bot wasn’t a problem for me, and F3—Learning how to use the Blue-bot was clear and easy to understand); the effect size for this analysis (d = 0.461) was found to be near Cohen’s convention for a moderate effect (d = 0.50), suggesting that the training had a moderately positive impact on how easy pre-service teachers perceive the ER resource to be to use. There are also statistically significant (95% confidence level) differences in the sections of enjoyment (D1—Using the Blue-bot is fun, and D2—I enjoyed using the Blue-bot), with an effect size of d = 0.412, and attitudes (A1—Using the Blue-bot makes learning more interesting, and A2—I feel bored when I use the Blue-Bot), also with a moderate effect size (d = 0.342).
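The pre/post comparison above combines three operations: reverse-scoring the negatively worded item, a two-tailed paired-samples t-test, and a paired Cohen's d. The sketch below illustrates these steps with SciPy; the function names are our own, and the paired Cohen's d is computed here as the mean difference divided by the SD of the differences, which is one common convention and may differ from the exact formula the authors used.

```python
import numpy as np
from scipy import stats

def reverse_code(value, scale_max=7):
    """Reverse-score a negatively worded item (e.g. A2) on a 1..scale_max scale."""
    return scale_max + 1 - value

def paired_comparison(pre, post):
    """Two-tailed paired-samples t-test plus a paired Cohen's d,
    computed as the mean difference divided by the SD of the differences."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    t_stat, p_value = stats.ttest_rel(post, pre)   # two-tailed by default
    diff = post - pre
    d = diff.mean() / diff.std(ddof=1)
    return t_stat, p_value, d
```

With per-participant pre- and post-test scores in matching order, `paired_comparison` returns the t statistic, the two-tailed p value and the effect size in one call.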

Fig. 1
figure1

Teachers’ ER acceptance pre/post-questionnaire

  • Research Question 2: To what extent did the ER teacher training program have an effect on pre-service teachers’ self-efficacy in ER?

First, the Cronbach's alpha test was run to measure the internal consistency or reliability of the questionnaires. The test was run twice, once for the prequestionnaire and once for the postquestionnaire. The Cronbach Alpha for the total number of items of the prequestionnaire was 0.855 and for the postquestionnaire it was 0.873, which indicates in both cases a high level of internal consistency for the self-efficacy scale.

The results showed an improvement in pre-service teachers’ self-efficacy after taking part in the ER teacher training, based on the prequestionnaire (M = 22.06, SD = 4.412) and postquestionnaire (M = 25.28, SD = 3.013). This questionnaire uses a 5-point Likert scale. The improvement in pre-service teachers’ self-efficacy was evident in all questionnaire items, as the means of all items increased in the postquestionnaire (Fig. 2). A two-tailed paired-sample t-test at a 95% confidence level showed that there is a statistically significant difference (t(89) = 7.016; p < 0.05) between initial self-efficacy (M = 22.06, SD = 4.412) and final self-efficacy (M = 25.28, SD = 3.013), both measured as the sum of the items of the pre- and the post-test (see Table 8 in Appendix). The effect size (d = 0.740) was found to be close to Cohen’s convention for a high effect (d = 0.80). In particular, statistical differences are observed in the following items: IT1—I feel confident that I have the skills necessary to use robotics for classroom instruction, IT3—I feel confident that I can help my students when they have difficulties with robotics, IT4—I feel confident about teaching students science using educational robots, IT5—I have sufficient knowledge about robotics to integrate it in the learning and teaching process and IT6—I have sufficient knowledge of computational thinking for the development of classroom robotics activities.

Fig. 2
figure2

Teachers’ ER self-efficacy pre/post-questionnaire

  • RQ3: What are the participants’ perceptions of the ER training program?

The data collected from pre-service teachers’ journals provide information on their perceptions of each training session (Table 3) and of the entire training program (Table 4) as an overall evaluation of the course.

Table 3 Pre-service teachers’ perceptions of training sessions 1, 2 and 3 (F = frequency)
Table 4 Pre-service teachers’ overall perceptions of the training (F = frequency)

The following themes emerged from the qualitative data for sessions 1 and 2 based on the pre-service teachers’ perceptions of each training session: the session was considered interesting, useful, entertaining, practical and helpful for participants’ collaborative work on the project (codes A–E, see Table 3). Examining session 1 more closely, the most frequent code is A “The session was interesting”, which was counted 59 times in total in 90 journals, while code B “The session was useful” is also very frequent, counted 37 times. A pre-service teacher explained that “during the bachelor’s degree we did not experiment enough with digital technologies in education”. Another pre-service teacher reported that “in the first training session we saw first-hand how to integrate the ER resource into educational contexts”. In session 2, the most frequent code is E “The session supported participants’ collaborative work on the project”, found in 53 journals, more than half of the total number of journals. A pre-service teacher explained that “the session was very effective as we discussed our thoughts on the project with the rest of the group members and received guidelines and feedback from the trainers”. Code B “The session was useful” is also mentioned very frequently in the journals of session 2 (45 times). In both sessions, code D “The session was practical” is encountered 35 times in the journals. As session 3 was delivered online due to the COVID-19 pandemic, different codes were selected for the journals’ content analysis: “the presentation and evaluation of the projects were interesting and useful”; “the projects’ online evaluation was practical”; “the online self and peer evaluation helped us recognize the strong and weak aspects of our project and our peers’ projects”; “the self, peer and teacher evaluations were fair”; and “I’d prefer to try out the projects in class” (codes F–K, see Table 3).
Code I was the most frequent, found in 29 journals. A participant explained that “the peer online evaluation enabled our group to observe the aspects that we did not take into account in the creation process of our project”. Interestingly, code H “The projects’ online evaluation was practical” was counted only 9 times, whereas in sessions 1 and 2 the practicality of the sessions was mentioned 35 times each. Examining this more closely, two participants reported that, apart from completing the online evaluation rubric, they would like to provide direct feedback to their peers. The fairness of evaluation was also discussed in the journals: 19 participants stated that the self, peer and teacher evaluations were fair for evaluating the projects, while seven expressed concerns about the objectivity of the self and peer evaluations.

Examining the results of the qualitative analysis as a whole, considerably fewer codes were counted in journal 3: the count of the selected codes was 145 in the first session and 186 in the second, but only 105 in the third. In line with this, the researchers who performed the content analysis reported that, particularly in the third session, the participants completed their journals quickly, often without providing thorough answers. The training participants were not as meticulous in the third journal as the research team expected. Although handing in the journals for each session was compulsory to complete the training, six pre-service teachers in our sample did not hand in any of the journals.

The pre-service teachers’ perceptions and overall evaluation of the training program are summarized in Table 4. Codes L–O refer to positive aspects of the training, while codes P–S refer to the deficiencies observed by the participants. Regarding the positive aspects, the participants characterized the training as useful (counted 48 times) and interesting (counted 46 times). In their more detailed feedback, many participants reported that the training was particularly useful for their professional future as teachers and should be part of their teacher education at the university. The deficiencies reported concern the integration of additional ER resources into the training, additional time for experimentation with the resources, additional training sessions, and a preference for completing the training on-site. The most frequent was the need for additional time for experimentation with the resources, counted in 31 journals, while the preference for completing the training on-site was also expressed quite often (by 22 pre-service teachers).

Discussion and conclusions

The main aim of the study was to examine whether pre-service preschool teachers’ acceptance of and self-efficacy towards ER change after they participate in the teacher training program, together with their perceptions of the training. The results demonstrated an improvement in pre-service teachers’ acceptance of ER after taking part in the ER teacher training, regarding the ease of use of the Blue-bot resource, enjoyment, and their attitudes towards the Blue-bot. In parallel, the pre-service teachers’ perceptions of the training, collected from the training journals, were very positive: they viewed the training as innovative, useful, entertaining and interesting, which is consistent with the quantitative data on their acceptance of ER. Based on our results, after engaging in this training, pre-service teachers would be eager to adopt the Blue-bot in their future teaching in preschool education institutions. Our study goes one step further than previous research in the field by merging substantiated quantitative results with qualitative data on pre-service teachers’ perceptions. The study does not aim at representativeness or generalization; rather, its goal is to examine the given case. Our results complement the qualitative findings of Casey et al. (2020), who inferred that the 32 pre-service teachers enrolled in an undergraduate education course were positive about the perceived ease of use and usefulness of floor-robots as an educational tool. Our findings confirm their qualitative results regarding the improvement in the perceived ease of use of floor-robots after the course, with significant differences in the quantitative data. In relation to the improved perception of the usefulness of ER reported in Casey et al. (2020), our study also infers an enhanced perception of usefulness after the training; however, we could not confirm this with significant differences in the quantitative data. Nevertheless, the qualitative data collected from the pre-service teachers’ journals depict ER and the ER training as useful for pre-service teachers’ careers and professional development, and highlight the need for additional ER training sessions and familiarization with more ER resources.

The study results on self-efficacy showed a significant improvement in pre-service teachers’ self-efficacy after taking part in the ER teacher training. These results are in line with those of Jaipal-Jamani and Angeli (2017), whose findings showed that engaging with robotics in a university course can improve pre-service teachers’ self-efficacy for teaching with robotics as well as their computational thinking skills. In addition, our results are consistent with findings that teachers improved their self-efficacy for teaching with robotics as a result of the training received (Hamner et al., 2016; Liu et al., 2010). Although our results cannot be generalized, they enrich the current literature on self-efficacy towards ER with quantitative data from a broader sample than previous research in this discipline. The improvement observed in pre-service teachers’ self-efficacy towards ER could be related to their positive perceptions of ER and of the content and structure of the training itself.

Nevertheless, although our quantitative data suggest that the participants accepted the Blue-bot resource and improved their self-efficacy towards ER throughout the course, we cannot confirm that the pre-service teachers will introduce the Blue-bot or other ER resources in their future teaching contexts. Despite the difficulty of retrieving information on pre-service teachers’ actual classroom ER integration, we consider the results of this research very promising, as pre-service teachers’ acceptance and self-efficacy seem to improve considerably as the training progresses. We expect that in future ER training programs consisting of additional sessions, more resources and more time for experimentation, the improvement in acceptance and self-efficacy will be even more noticeable. Finally, the major contribution of this study lies in the profile of the participants: the study population consists exclusively of pre-service preschool teachers, who are rarely included in research on teacher training in educational robotics.

The main limitation of the study, as mentioned above, is that the pre-service teachers participating in the training did not have the chance to implement the projects they designed in a classroom setting due to the COVID-19 pandemic. The participants could not apply the knowledge acquired through the training in the preschool educational context because schools in the region remained closed for the rest of the school year after the outbreak of the pandemic in March 2020. The generally positive feedback received on the content and structure of the ER teacher training program encourages our research team to employ the constructivist learning approach and project-based learning in future training programs. However, in future editions and replications of this training program, we recommend incorporating the following suggestions, some of which were provided by the training participants in their training journals:

  • The training program could be extended by adding supplementary training sessions. The extension of the training could have a positive effect on participants’ acceptance, self-efficacy, and perceptions.

  • The training program could include familiarization and activities with additional ER and programming resources. For example, it could include sessions with other resources suitable for preschool education, such as Scratch Jr (Papadakis et al., 2016), KIBO (Bers et al., 2019), RoboTito (Gerosa et al., 2019), and Bee-bot (Di Lieto et al., 2017).

  • Participants should be given more time for experimentation with the ER resources and ER teaching materials. This would enable the participants to feel more comfortable and confident with the resources and enjoy the learning process without feeling that they need to hurry, which would possibly have a positive effect on their acceptance, self-efficacy, and perceptions.

  • In terms of the research design, the pre-service teachers’ journals should be improved to provide richer qualitative results. The drop in the frequency of codes counted in the third session suggests that the students completed the third journal in a rush, without spending time on the task. In future implementations of this study, we therefore recommend dedicating classroom time to this task or, in the case of an online modality, setting the deadline for handing in the journals shortly after the session.

  • In future implementations of this study, a larger sample should be included to increase the reliability of all the instruments and scales; a sample greater than 300 could allow us to confirm and generalize the positive results and to conduct a structural equation modeling study.

  • Future ER training programs could be conducted entirely or partly online during health emergencies like the COVID-19 pandemic. Online and blended versions of the training could incorporate additional features, such as immediate feedback to the students on robotics tasks and/or teaching material creation, and online robotics simulations. The pre-service teacher education course presented in Moorhouse (2020) provides insights into the adaptations needed for an online course: making the VCS sessions obligatory, using small group discussions (breakout rooms), reinforcing the structure of the sessions, adding a preparation task with the session materials prior to class, providing time for group discussion and feedback, recording the sessions, and combining synchronous and asynchronous teaching. As Sun et al. (2020) suggested, universities should view the COVID-19 pandemic as an opportunity to reform the online education they offer by improving course content, the digital technology employed, and management. Therefore, in our context, the pandemic gives our university a chance to rethink and reform the content of online teacher education and the digital technology taught.

Apart from the recommendations provided above, the university teaching committee should reflect on the importance of ER in preschool teachers’ education and apply necessary reforms to integrate ER training into the teacher education curriculum.

Availability of data and materials

The datasets used and/or analyzed during the current study are available in the Zenodo repository https://doi.org/10.5281/zenodo.4553489.

Abbreviations

ER: Educational robotics

DT: Digital technologies

TAM: Technology acceptance model

UTAUT: Unified theory of acceptance and use of technology

SAM: Simulation acceptance model

RSAM: Robot service acceptance model

CT: Computational thinking

References

  1. Aksu, F. N., & Durak, G. (2019). Robotics in education: Examining information technology teachers’ views. Journal of Education and E-Learning Research, 6(4), 162–168. https://doi.org/10.20448/journal.509.2019.64.162.168

  2. Barker, B. S., & Ansorge, J. (2007). Robotics as means to increase achievement scores in an informal learning environment. Journal of Research on Technology in Education, 39(3), 229–243. https://doi.org/10.1080/15391523.2007.10782481

  3. Bers, M. U., González-González, C., & Armas-Torres, M. B. (2019). Coding as a playground: Promoting positive learning experiences in childhood classrooms. Computers and Education, 138, 130–145. https://doi.org/10.1016/j.compedu.2019.04.013

  4. Bers, M. U., Ponte, I., Juelich, C., Viera, A., & Schenker, J. (2002). Teachers as designers: Integrating robotics in early childhood education. Information Technology in Childhood Education Annual, 2002(1), 123–145

  5. Bers, M. U., Seddighin, S., & Sullivan, A. (2013). Ready for robotics: Bringing together the T and E of STEM in early childhood teacher education. Journal of Technology and Teacher Education, 21(3), 355–377

  6. Blue-bot Homepage. Retrieved November 11, 2020, from https://www.terrapinlogo.com/blue-bot-family.html.

  7. Caballero-González, Y. A., & Muñoz-Repiso, A.G.V. (2017). Development of computational thinking and collaborative learning in kindergarten using programmable educational robots: A teacher training experience. Proceedings of the 5th international conference on technological ecosystems for enhancing multiculturality (TEEM 2017). New York: Association for Computing Machinery. https://doi.org/10.1145/3144826.3145353.

  8. Cabero, J., & Perez, J. L. (2018). Validación del modelo TAM de adopción de la Realidad Aumentada mediante ecuaciones estructurales. Estudios sobre Educación, 34, 129–153. https://doi.org/10.15581/004.34.129-153

  9. Casey, J. E., Pennington, L. K., & Mireles, S. V. (2020). Technology acceptance model: Assessing preservice teachers’ acceptance of floor-robots as a useful pedagogical tool. Technology, Knowledge and Learning. https://doi.org/10.1007/s10758-020-09452-8

  10. Chen, F., Curran, P. J., Bollen, K. A., Kirby, J., & Paxton, P. (2008). An empirical evaluation of the use of fixed cutoff points in RMSEA test statistic in structural equation models. Sociological Methods & Research, 36(4), 462–494. https://doi.org/10.1177/0049124108314720

  11. Chevalier, M., Riedo, F., & Mondada, F. (2016). Pedagogical uses of thymio II: How do teachers perceive educational robots in formal education? IEEE Robotics and Automation Magazine, 23(2), 16–23. https://doi.org/10.1109/MRA.2016.2535080

  12. Çiftçi, A., Topçu, M. S., & Foulk, J. A. (2020). Pre-service early childhood teachers’ views on STEM education and their STEM teaching practices. Research in Science & Technological Education. https://doi.org/10.1080/02635143.2020.1784125

  13. Cohen, L., Manion, L., & Morrison, K. R. (2007). Research methods in education. (6th ed.). Routledge.

  14. Conti, D., Di Nuovo, S., Buono, S., & Di Nuovo, A. (2017). Robots in education and care of children with developmental disabilities: A study on acceptance by experienced and future professionals. International Journal of Social Robotics, 9(1), 51–62. https://doi.org/10.1007/s12369-016-0359-6

  15. Creswell, J. W., & Guetterman, T. C. (2019). Educational research. Planning, conducting and evaluating quantitative and qualitative research. (6th ed.). Pearson.

  16. Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly: Management Information Systems, 13(3), 319–339. https://doi.org/10.2307/249008

  17. Di Lieto, M. C., Inguaggiato, E., Castro, E., Cecchi, F., Cioni, G., Dell’Omo, M., Laschi, C., Pecini, C., Santerini, G., Sgandurra, G., & Dario, P. (2017). Educational robotics intervention on executive functions in preschool children: A pilot study. Computers in Human Behavior, 71, 16–23. https://doi.org/10.1016/j.chb.2017.01.018

  18. Durak, H. Y., Yilmaz, F. G. K., & Bartin, R. Y. (2019). Computational thinking, programming self-efficacy, problem solving and experiences in the programming process conducted with robotic activities. Contemporary Educational Technology, 10(2), 173–197. https://doi.org/10.30935/cet.554493

  19. Eguchi, A. (2013). Educational robotics for promoting 21st century skills. Journal of Automation, Mobile Robotics & Intelligent Systems, 8(1), 5–11. https://doi.org/10.14313/JAMRIS_1-2014/1

  20. Fessakis, G., & Prantsoudi, S. (2019). Computer science teachers’ perceptions, beliefs and attitudes on computational thinking in Greece. Informatics in Education, 18(2), 227–258. https://doi.org/10.15388/infedu.2019.11

  21. Frangou, S., Papanikolaou, K., Aravecchia, L., Montel, L., Ionita, S., Arlegui, J., Pina, A., Menegatti, E., Moro, M., Fava, N., & Monfalcon, S. (2008). Representative examples of implementing educational robotics in school based on the constructivist approach. Proceeding of the 2008 conference on simulation, modeling and programming for autonomous robots, (pp. 54–65).

  22. Gaudiello, I., & Zibetti, E. (2016). Learning robotics, with robotics, by robotics: Educational robotics. Wiley.

  23. Gerosa, A., Koleszar, V., Gómez-Sena, L., Tejera, G., & Carboni, A. (2019). Educational robotics and computational thinking development in preschool. Proceedings - 14th Latin American conference on learning technologies (LACLO 2019), (pp. 226–230). https://doi.org/10.1109/LACLO49268.2019.00046.

  24. Hamner, E., Cross, J., & Zito, L. (2016). Training teachers to integrate engineering into non-technical middle school curriculum. IEEE Frontiers in Education Conference (FIE), 2016, 1–9. https://doi.org/10.1109/FIE.2016.7757528

  25. Hew, K. F., & Brush, T. (2007). Integrating technology into K-12 teaching and learning: Current knowledge gaps and recommendations for future research. Educational Technology Research and Development, 55(3), 223–252. https://doi.org/10.1007/s11423-006-9022-5

  26. Highfield, K. (2010). Robotic toys as a catalyst for mathematical problem solving. Australian Primary Mathematics Classroom, 15(2), 22–28

  27. Hodges, C. B., Gale, J., & Meng, A. (2016). Teacher self-efficacy during the implementation of a problem-based science curriculum. Contemporary Issues in Technology and Teacher Education, 16(4), 434–451

  28. Hussain, S., Lindh, J., & Shukur, G. (2006). The effect of LEGO training on pupils’ school performance in mathematics, problem solving ability and attitude: Swedish data. Educational Technology and Society, 9(3), 182–194

  29. Jaipal-Jamani, K., & Angeli, C. (2017). Effect of robotics on elementary preservice teachers’ self-efficacy, science learning, and computational thinking. Journal of Science Education and Technology, 26(2), 175–192. https://doi.org/10.1007/s10956-016-9663-z

  30. Jamieson, S. (2004). Likert scales: How to (ab)use them. Medical Education, 38(12), 1217–1218. https://doi.org/10.1111/j.1365-2929.2004.02012.x

  31. Jones, A. (2004). A review of the research literature on barriers to the uptake of ICT by teachers. British Educational Communications and Technology Agency, (pp. 1–29).

  32. Karypi, S. (2018). Educational robotics application in primary and secondary education: A Challenge for the Greek Teachers Society. Journal of Contemporary Education, Theory & Research, 2(1), 9–14. https://doi.org/10.5281/zenodo.3598423

  33. Khanlari, A. (2013). Effects of robotics on 21st century skills. European Scientific Journal, 9(27), 26–36

  34. Khanlari, A. (2016). Teachers’ perceptions of the benefits and the challenges of integrating educational robots into primary/elementary curricula. European Journal of Engineering Education, 41(3), 320–330. https://doi.org/10.1080/03043797.2015.1056106

  35. Krause, M. S. (2018). Associational versus correlational research study design and data analysis. Quality & Quantity, 52, 2691–2707. https://doi.org/10.1007/s11135-018-0687-8

  36. Latikka, R., Turja, T., & Oksanen, A. (2019). Self-efficacy and acceptance of robots. Computers in Human Behavior, 93, 157–163. https://doi.org/10.1016/j.chb.2018.12.017

  37. Lee, I., Martin, F., Denner, J., Coulter, B., Allan, W., Malyn-Smith, J., & Werner, L. (2011). Computational thinking for youth in practice. ACM Inroads, 2(1), 32–37. https://doi.org/10.1145/1929887.1929902

  38. Leonard, J., Buss, A., Gamboa, R., Mitchell, M., Fashola, O. S., Hubert, T., & Almughyirah, S. (2016). Using robotics and game design to enhance children’s self-efficacy, STEM attitudes, and computational thinking skills. Journal of Science Education and Technology, 25(6), 860–876. https://doi.org/10.1007/s10956-016-9628-2

  39. Liu, E. Z. F., Lin, C. H., & Chang, C. S. (2010). Student satisfaction and self-efficacy in a cooperative robotics course. Social Behavior and Personality, 38, 1135–1146

  40. Moorhouse, B. L. (2020). Adaptations to a face-to-face initial teacher education course ‘forced’ online due to the COVID-19 pandemic. Journal of Education for Teaching, 46(4), 1–3. https://doi.org/10.1080/02607476.2020.1755205

  41. Norman, G. (2010). Likert scales, levels of measurement and the “laws” of statistics. Advances in Health Sciences Education, 15(5), 625–632

  42. Nugent, G., Barker, B., Grandgenett, N., & Adamchuk, V. (2009). The use of digital manipulatives in K-12: Robotics, GPS/GIS and programming. Proceedings - Frontiers in Education Conference, FIE. https://doi.org/10.1109/FIE.2009.5350828

  43. Papadakis, S., Kalogiannakis, M., & Zaranis, N. (2016). Developing fundamental programming concepts and computational thinking with ScratchJr in preschool education: A case study. International Journal of Mobile Learning and Organisation, 10(3), 187–202

  44. Park, I. W., & Han, J. (2016). Teachers’ views on the use of robots and cloud services in education for sustainable development. Cluster Computing, 19(2), 987–999. https://doi.org/10.1007/s10586-016-0558-9

  45. Russell, G., & Bradley, G. (1997). Teachers’ computer anxiety: Implications for professional development. Education and Information Technologies, 2(1), 17–30. https://doi.org/10.1023/A:1018680322904

  46. Santos, I. M., Ali, N., Khine, M. S., Hill, A., Abdelghani, U., & Qahtani, K. A. (2016). Teacher perceptions of training and intention to use robotics. IEEE Global Engineering Education Conference. https://doi.org/10.1109/EDUCON.2016.7474644

  47. Schina, D., Esteve-González, V., & Usart, M. (2020). An overview of teacher training programs in educational robotics: Characteristics, best practices and recommendations. Education and Information Technologies. https://doi.org/10.1007/s10639-020-10377-z

  48. Schina, D., Usart, M., Esteve-González, V., & Gisbert, M. (2020a). Teacher views on educational robotics and its introduction to the compulsory curricula. Proceedings of the 12th international conference on computer supported education, (vol. 1, pp. 147–154). https://doi.org/10.5220/0009316301470154

  49. Shin, N., & Kim, S. (2007). Learning about, from, and with robots: Students’ perspectives. Proceedings - IEEE International Workshop on Robot and Human Interactive Communication. https://doi.org/10.1109/ROMAN.2007.4415235

  50. Sousa, V. D., & Rojjanasrirat, W. (2010). Translation, adaptation and validation of instruments or scales for use in cross-cultural health care research: a clear and user-friendly guideline. Journal of Evaluation in Clinical Practice, 17(2), 268–274. https://doi.org/10.1111/j.1365-2753.2010.01434.x

  51. Sullivan, G. M., & Artino, A. R. (2013). Analyzing and interpreting data from likert-type scales. Journal of Graduate Medical Education, 5, 541–542. https://doi.org/10.4300/jgme-5-4-18

  52. Sun, L., Tang, Y., & Zuo, W. (2020). Coronavirus pushes education online. Nature Materials, 19, 687. https://doi.org/10.1038/s41563-020-0678-8

  53. Theodoropoulos, A., Antoniou, A., & Lepouras, G. (2017). Teacher and student views on educational robotics: The Pan-Hellenic competition case. Application and Theory of Computer Technology, 2(4), 1. https://doi.org/10.22496/atct.v2i4.94

  54. Tsai, M. J., Wang, C. Y., Wu, A. H., & Hsiao, C. Y. (2021). The development and validation of the robotics learning self-efficacy scale (RLSES). Journal of Educational Computing Research. https://doi.org/10.1177/0735633121992594

  55. Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478. https://doi.org/10.2307/30036540

  56. Wing, J. (2011). Research notebook: Computational thinking—What and why? The Link Magazine. Carnegie Mellon University.

  57. Zacharia, Z. C., Rotsaka, I., & Hovardas, T. (2015). Development and test of an instrument that investigates teachers’ beliefs, attitudes and intentions concerning the educational use of simulations. Attitude measurements in science education: Classic and contemporary approaches. (pp. 83–117). Information Age Publishing.

Acknowledgements

Not applicable.

Funding

This research was carried out within the framework of the INTROBOT project: Introduction to educational robotics in the training of early childhood education teachers (Grant No. 07GI1920) and received funding from ICE (Institute of Education Sciences) of the University Rovira i Virgili. This project also received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie Grant Agreement No. 713679 and from the Universitat Rovira i Virgili (URV).

Author information

Affiliations

Authors

Contributions

All authors contributed extensively to the work presented in this paper and approved the final manuscript. V.E.G., M.U., and D.S. contributed to the study design and methodology. D.S. and M.U. analyzed and interpreted quantitative data. A.B. and C.V. analyzed and interpreted qualitative data. M.U. and D.S. contributed to the design of the data collection instruments. D.S. wrote the manuscript as part of her PhD thesis, with significant input from the rest of the authors. V.E.G., M.U., A.B., C.V.B. reviewed, edited, and approved the final manuscript. C.V.B. was in charge of the supervision, project administration, and funding acquisition. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Despoina Schina.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

See Tables 5, 6, 7, 8 and 9.

Table 5 Items of questionnaire 1
Table 6 Items of questionnaire 2
Table 7 Results of questionnaire 1
Table 8 Results of questionnaire 2
Table 9 Examples of coded journals from session 3 (week 6)

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Schina, D., Valls-Bautista, C., Borrull-Riera, A. et al. An associational study: preschool teachers’ acceptance and self-efficacy towards Educational Robotics in a pre-service teacher training program. Int J Educ Technol High Educ 18, 28 (2021). https://doi.org/10.1186/s41239-021-00264-z

Keywords

  • Educational robotics
  • Preschool education
  • Pre-service teachers
  • Self-efficacy
  • Teacher acceptance
  • Teacher training
  • Teachers’ perceptions