
Towards modeling of human skilling for electrical circuitry using augmented reality applications

Abstract

Augmented reality (AR) is a unique, hands-on tool to deliver information. However, its educational value has so far been demonstrated mainly empirically. In this paper, we present a modeling approach to provide users with mastery of a skill, using AR learning content to implement an educational curriculum. We illustrate the potential of this approach by applying it to an important but pervasively misunderstood area of STEM learning, electrical circuitry. Unlike previous cognitive assessment models, we break down the area into microskills—the smallest segmentation of this knowledge—and concrete learning outcomes for each. This model empowers the user to perform a variety of tasks that are conducive to the acquisition of the skill. We also provide a classification of microskills and how to design them in an AR environment. Our results demonstrate that aligning the AR technology to specific learning objectives paves the way for high quality assessment, teaching, and learning.

Introduction

New technologies, such as augmented reality (AR), which superimposes virtual information onto the physical world, provide unique hands-on capabilities to deliver educational content. In the last decade, there has been a surge of interest in acquiring knowledge through a minds-on and hands-on approach (Council, 2012; Williams et al., 2019). An AR display (e.g., a headset, a tablet, or a mobile phone) provides the user with an interface to the virtual world, which enables interactions with physical objects. This virtual information is overlaid on objects and can demonstrate to users what cannot be perceived by their own senses and thus could not otherwise be learned (Iftene & Trandabăt, 2018) (e.g., pressure, temperature, voltage).

Great efforts have been made to develop and test empirical AR applications in education with positive results (Cai et al., 2014; Dunleavy et al., 2009; Hsiao et al., 2012; Lin et al., 2012; Rasimah et al., 2011), along with insightful design principles of AR implementation in the context of the classroom (Cuendet et al., 2013). However, we are aware of no efforts towards systematically structuring the AR learning content itself, breaking it down into bite-sized pieces, deciding what the learning content should look like, and how much of it is even necessary for the present context of a student.

As an educational tool, AR is not exempt from the design rules for multimedia learning (Mayer, 2019; Mayer & Moreno, 2003), thus it is important not to overwhelm the user with too much information, but only that which is essential and pertinent to the context. Further, for AR to be taken seriously as a learning tool, it will require design principles for creation of the learning content and for the use of the AR information being delivered with respect to students’ mastery of the content. In particular, these principles guide the understanding of how the AR medium can be leveraged for teaching and learning, by aligning the technology to learning outcomes.

This work is influenced by embodied cognition, which examines the ways in which our interactions with the physical world shape our cognitive experiences from a body-centric point of view (Wilson, 2002; Wilson & Golonka, 2013). These types of interactions, which in our case are promoted by AR technologies, can shape, clarify, and reinforce our cognitive processes in STEM areas, such as electrical circuitry (Fugate et al., 2018). In our work, in order to systematize the process of breaking down, aligning, and improving the AR content, we propose the use of Q-matrix theory, a cognitive assessment approach which evaluates the associations between questions/steps in a task and the microskills—smallest segmentation of knowledge—required to complete it (Tatsuoka, 1995, 2009). Q-matrix theory has been utilized previously for cognitive assessment in multimedia learning, but not in the context of developing AR curricula (Casalino et al., 2017; Desmarais et al., 2014; Wang & Jiang, 2018).

This paper investigates the use of Q-matrix theory as a design framework to develop an AR-based curriculum (Fig. 1). We focused on a young-adult population (18–34 years) without any prior knowledge on the subject area of electrical circuitry. The tasks chosen for the user studies involve different procedures; however, the microskills required to complete them are similar throughout the range of exercises. We will further expand on our reasoning for choosing this area in the task design section. Our contributions in this paper are as follows:

  1. (1)

    A modeling approach to systematically break down the knowledge conducive to the mastery of basic electrical circuitry using Q-matrix theory by aligning AR technology to learning outcomes.

  2. (2)

    Defining and selecting the microskills required to perform a variety of electrical circuitry tasks.

  3. (3)

    Design principles by microskill and findings of AR learning content implemented on an educational curriculum.

Fig. 1

Overview of our model to provide a user with basic mastery of electrical circuitry. (Left) Electrical circuitry as a wide body of knowledge with multiple concepts and electrical components, even at the basic level. (Right) We break down electrical circuitry into fundamentals, or microskills (the smallest segmentation of this knowledge), which are delivered by phone-based AR and allow the user to perform a variety of tasks that are conducive to the acquisition of the skill

Definitions

The following definitions will be helpful to understanding recurring terminology in our educational context:

Skill. An ability which has been automated and operates largely subconsciously (Williams & Moran, 1989). It may be broken down into smaller, more manageable components or microskills.

Microskill. The specific ability, knowledge, aptitude or information required to perform a task. In cognitive assessment, the equivalent of a microskill is typically referred to as an "attribute" or piece of knowledge that a student may have acquired (Heller, 2019).

Knowledge space. The set of microskills or attributes proven to be acquired by a student upon a successful completion of a task (Köhn & Chiu, 2018; Stefanutti & Chiusole, 2017).

Task. The process or series of steps that are conducive to learning a skill, which can be decomposed into the interactions between users and equipment (Jonassen et al., 1998).

Item/step. The smallest segmentation of an action performed by a learner towards successful completion of a task (Heller et al., 2017).

Q-matrix. An assessment matrix defined by step-microskill associations required to perform a task (Cai et al., 2018). Ideally, an instructor as well as a learning sciences expert must be consulted to develop and elaborate a valuable matrix.

Related work

AR as a tool for education

AR has received much attention as a useful medium for educational content (Bower et al., 2014; Radu & Schneider, 2019; Walker et al., 2017). Much of this attention is due to the development of AR technology, which has positioned it to become widely available by deploying it on tablets and mobile phones (Sungkur et al., 2016). In terms of education settings, studies have shown that AR improves students’ learning achievement, learning motivation, and attitudes towards the materials (Akçayır & Akçayır, 2017; Chiang et al., 2014; Lu & Liu, 2015).

Additionally, AR can help students understand new material through multi-sensory learning, which can often facilitate a positive and playful attitude as students learn through playing with the materials (Kamarainen et al., 2013; Lu & Liu, 2015). Although educational AR has been mostly used in the context of informal learning, there is some evidence that it can increase high level critical thinking (Saltan & Arslan, 2017) and enhance spatial abilities (Lin et al., 2015). In the case of laboratory settings, AR allows students to try out the technology prior to handling lab equipment, perform some experiments within the virtual world, and can lower laboratory costs (Ferrer-Torregrosa et al., 2015).

As AR transitions from an informal learning tool to a formal learning tool, it is essential that AR content generation follows some of the traditional principles for multimedia learning content (Mayer & Moreno, 2003), to avoid some of the typical drawbacks that come with presenting too much information. In an AR environment, students may experience cognitive overload due to the amount of material and the complexity of tasks (Cheng & Tsai, 2013). Thus, the next step to improve the quality of AR content is to provide well-integrated, organized, and pertinent information (e.g., images, annotations, video tutorials) to improve students’ learning performance (Chiang et al., 2014). AR can benefit from properly organizing all learning components, such as overlaid objects and videos, which can help students with improved processing of the learning content (Yoon et al., 2012). While classroom orchestration principles have been explored and tested to design an AR learning environment (i.e., integration, awareness, empowerment, flexibility, and minimalism) (Cuendet et al., 2013), there has been insufficient exploration of how the learning content itself can be structured and filtered to comply with those principles. Our work essentially decomposes complex tasks into microskills and then translates those pieces into how they align with AR technology. Further, because we are focused on emphasizing pertinent information, we investigate whether giving users partial AR content and gradually decreasing it can improve or retain performance.

Cognitive assessment using Q-matrix theory

Cognitive assessment has surfaced as a new model of educational measurement that combines psychometric standards with the objectives of formative assessment (Haberman et al., 2008; Leighton & Gierl, 2007; Roussos et al., 2007; Templin & Henson, 2010). The focus of cognitive assessment is on specific microskills, knowledge, and other characteristics that are necessary to perform tasks which are typically selected to assess a student’s abilities. Cognitive assessment tests are customized to evaluate students’ mastery of the learning content and provide immediate feedback on their strengths and weaknesses, thus determining which microskills were learned or are in need of studying (Köhn & Chiu, 2018). Each set of acquired microskills per student determines the proficiency class of the evaluated student.

The entire set of associations between items/steps and microskills is represented in the Q-matrix of a selected task (Tatsuoka, 2009). The Q-matrix must be accurate and complete, which means it must provide all possible proficiency classes of the students (Chen et al., 2015; Chiu et al., 2009; Xu, 2017; Xu & Zhang, 2016). The goal of this method is to obtain a linear system, which allows the application of standard linear Boolean algebra techniques (Desmarais et al., 2012) and the inference of an unobservable knowledge space (what is going on in students’ minds) based on observable information in students’ responses. Typically, the Q-matrix has been used to assess students’ mastery based on multiple choice questions (e.g., mathematical, reading comprehension tests) (Buck & Tatsuoka, 1998; Tatsuoka, 1983, 1985, 1990, 1995). However, it has never been utilized in the context of AR, which brings an entirely new dimension to cognitive assessment (e.g., digital data vs. the real world). While multiple-choice tests require students to engage in cognitive tasks, an educational AR technology combines critical thinking and navigation within the virtual and physical world—a hands-on and “minds-on” approach. In our case, students perform psychomotor tasks in an AR environment (e.g., select, manipulate, assemble, and interact with the environment), thus our landscape spans a brain-body-environment.

In this paper, we carefully curated the decomposition of the knowledge necessary to perform a variety of electrical circuitry tasks by aligning the microskills to the AR technology. Further, we initiate key directions for how the knowledge space generated can be used to formulate high-quality AR curricula.

Modeling

Preparing an incidence Q-matrix

A Q-matrix maps the underlying processing skills necessary to complete a task, where the columns of the matrix represent the items or steps to complete a task and the rows represent the microskills required, or vice versa. The entries in each column are given a Boolean value (true = 1 or false = 0) depending on whether that microskill is required for the solution of that step. Thus, as a Boolean matrix, the Q-matrix is subject to the assumptions and theorems of Boolean algebra, which we will expand on as we develop our user studies. The user studies will help us exemplify the content and procedure of its formulation, and dispel some doubts due to abstraction. In this section, we present a simple example on a basic LED circuit (Fig. 2), represented by a 3 by 3 Q-matrix.

  • Microskill 1 (MS1): Ability to understand current flow.

  • Microskill 2 (MS2): Ability to understand polarity.

  • Microskill 3 (MS3): Ability to understand circuit connections.

  • Step 1 (S1): Connect two resistors in series.

  • Step 2 (S2): Connect LED to resistors.

  • Step 3 (S3): Connect LED(−) to battery(−) and resistors to battery(+).

Fig. 2

Basic LED circuit. Components: LED, two 10 Ohm resistors in series, and a 3 V battery (made with Fritzing)

The explanations of the microskills are the following (Osborne, 1983; Peppler et al., 2019):

MS1: Current flow is a closed loop around a circuit with a power source (e.g., a 9 V battery) and a load (something to use up the energy, e.g., an LED).

MS2: Polarity is the correct direction in which connections between components are made (e.g., connect battery(−) to LED(−)) so that current can flow.

MS3: Connections are defined as the joining of electrical components to form a working circuit (e.g., a bulb, battery, and wires).

$$Q = \begin{array}{c|ccc} & S_{1} & S_{2} & S_{3} \\ \hline {\text{MS}}_{1} & 1 & 0 & 1 \\ {\text{MS}}_{2} & 0 & 0 & 1 \\ {\text{MS}}_{3} & 1 & 1 & 1 \\ \end{array}$$

The Q-matrix showing the associations between the steps and microskills can be explained column by column:

Column 1. Understanding of current flow (MS1) (i.e., connecting one resistor after another so that the same current flows through each) and connections (MS3) (i.e., understanding how to add the end of one resistor to another) are necessary to connect two resistors in series (S1), but knowledge of polarity (MS2) is not necessary for this step because resistors are not polarized (i.e., the orientation of the resistors is not relevant because current flows in both directions). Thus, the microskills required for S1 are 1 and 3, which translate to the column entries (i11, i21, i31): 1, 0, 1.

Column 2. Understanding of connections (i.e., understanding how to connect an end of the resistors to an end of the LED) is all that is necessary to connect the LED to the resistors, because a resistor is not polarized and current flows in both directions from the ends of the resistors in series. Thus, the only microskill required for S2 is 3, which determines the column entries (i12, i22, i32) as: 0, 0, 1.

Column 3. Understanding of current flow (i.e., effectively closing the current path of the circuit with the battery, resistors, and LED), polarity (i.e., connecting LED(−) to battery(−) and the available end of the resistors to battery(+)), and connections (i.e., understanding how to connect one end of the battery cap to the available end of the resistors and the other end of the battery cap to the available leg of the LED) are all necessary to connect LED(−) to battery(−) and the resistors to battery(+). Thus, S3 requires microskills 1, 2, and 3, which are represented by the column entries (i13, i23, i33): 1, 1, 1.
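The column-by-column reading above can be encoded directly. The following minimal sketch (our illustration, not the authors' software) stores the incidence Q-matrix as rows of microskills and looks up which microskills a given step requires:

```python
# Incidence Q-matrix for the LED-circuit example.
# Rows: MS1 (current flow), MS2 (polarity), MS3 (connections).
# Columns: S1 (resistors in series), S2 (LED to resistors), S3 (battery hookup).
Q = [
    [1, 0, 1],  # MS1
    [0, 0, 1],  # MS2
    [1, 1, 1],  # MS3
]

def microskills_for_step(Q, step):
    """Return the 1-based indices of microskills required for a 0-based step."""
    return [ms + 1 for ms, row in enumerate(Q) if row[step] == 1]

print(microskills_for_step(Q, 0))  # S1 -> [1, 3]
print(microskills_for_step(Q, 1))  # S2 -> [3]
print(microskills_for_step(Q, 2))  # S3 -> [1, 2, 3]
```

Reading out a column this way reproduces the entry lists (i11, i21, i31), (i12, i22, i32), and (i13, i23, i33) derived in the text.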

Validating the Q-matrix

When we prepared our Q-matrix, we evaluated the microskill-step associations. However, by looking at the Q-matrix, we realize that some microskills have more value than others. For example, in our previous Q-matrix, MS3 (ability to understand circuit connections) is necessary for all three steps, while MS1 (ability to understand current flow) is needed for two of the steps and MS2 (ability to understand polarity) is needed for only one step. These microskills thus form a hierarchy. This hierarchy is easy to visualize because we have a small 3 × 3 matrix; however, a typical matrix will be much larger and have multiple associations. The reachability matrix (R-matrix) is a K × K matrix that represents the associations among the K microskills. Each row of the R-matrix represents a microskill and encodes the hierarchical relations it satisfies with respect to the other microskills. We obtain the following R-matrix from our Q-matrix:

$${\text{R}} = \begin{array}{c|ccc} & {\text{MS}}_{1} & {\text{MS}}_{2} & {\text{MS}}_{3} \\ \hline {\text{MS}}_{1} & 1 & 1 & 0 \\ {\text{MS}}_{2} & 0 & 1 & 0 \\ {\text{MS}}_{3} & 1 & 1 & 1 \\ \end{array}$$

We will elaborate on how to fill the entries of the first row of our R-matrix by using Boolean algebra, which compares two distinct rows (microskills) of our Q-matrix. Entry i11 = 1: Parent (higher hierarchy): (1,0,1) against child (lower hierarchy): (1,0,1); this is true because the row is compared against itself. i12 = 1: Parent: (1,0,1) against child: (0,0,1); this is true because the two rows do not contradict each other. i13 = 0: Parent: (1,0,1) against child: (1,1,1); this is false because a parent entry contains a 0 while the same child entry is a 1, and it does not make sense that a child would possess a microskill that the parent does not.
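The row-by-row Boolean comparison just described amounts to a containment check: a parent's row must have a 1 wherever the child's row does. A small sketch of this derivation (our code, for illustration):

```python
# Q-matrix rows for MS1, MS2, MS3 from the LED-circuit example.
Q = [
    [1, 0, 1],  # MS1
    [0, 0, 1],  # MS2
    [1, 1, 1],  # MS3
]

def reachability(Q):
    """R[parent][child] = 1 iff the rows do not contradict each other,
    i.e. every step requiring the child microskill also requires the parent."""
    K = len(Q)
    return [[int(all(c <= p for p, c in zip(Q[parent], Q[child])))
             for child in range(K)]
            for parent in range(K)]

print(reachability(Q))
# [[1, 1, 0], [0, 1, 0], [1, 1, 1]]
```

The computed rows match the R-matrix entries worked out by hand: (1,1,0) for MS1, (0,1,0) for MS2, and (1,1,1) for MS3.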

The R-matrix is the algebraic representation of the hierarchies between microskills, and it allows us to derive a tree diagram as a graphical representation of these hierarchies. Fig. 3 (left) is an exact representation of the R-matrix: MS3 contains itself and is also the parent of MS1 and MS2. Similarly, MS1 contains itself and is a parent of MS2. In Fig. 3 (right), we see the final hierarchy in the order MS3, MS1, MS2. We erase the self-containment symbols and also disregard the direct path from MS3 to MS2 because MS2 is a child of MS1. This final tree diagram is important to check whether each microskill fits in the hierarchy of valuable concepts to teach the students. This validation loop of creating the Q- and R-matrices and the tree diagram (e.g., an instructor could start with the tree diagram, then the R-matrix, and finally produce the Q-matrix) enables us to compare the original and new Q-matrices, and make any modifications to the Q-matrices if necessary.
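The pruning just described (dropping self-containment, then dropping direct paths that are already implied through an intermediate microskill) is a transitive reduction of the R-matrix. A minimal sketch, using our own function name:

```python
def tree_edges(R):
    """Derive the hierarchy tree from an R-matrix: drop self-loops, then drop
    any direct edge that is implied through an intermediate microskill."""
    K = len(R)
    edges = {(p, c) for p in range(K) for c in range(K) if R[p][c] and p != c}
    reduced = set(edges)
    for p, c in edges:
        if any((p, m) in edges and (m, c) in edges for m in range(K)):
            reduced.discard((p, c))  # e.g. MS3 -> MS2 is implied via MS1
    return sorted(reduced)

R = [[1, 1, 0], [0, 1, 0], [1, 1, 1]]
# 0-based indices: (2, 0) is MS3 -> MS1, (0, 1) is MS1 -> MS2.
print(tree_edges(R))  # [(0, 1), (2, 0)]
```

The surviving edges reproduce the final tree diagram of Fig. 3 (right): MS3 above MS1 above MS2, with the direct MS3-to-MS2 path removed.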

Fig. 3

Left: Initial tree diagram of microskills. Right: Final tree diagram of skills

Students’ knowledge space generated from the Q-matrix

After validating the Q-matrix, we need to collect the scores of an examination of the students’ knowledge of the material. In our case, because we are dealing with novices, we must present them with all the microskills prior to the test. Once they familiarize themselves with the learning contents and complete the test, we can collect their answers. To exemplify how to calculate the knowledge space, suppose that Student A scores as follows: S1 = 1 (correct), S2 = 0 (incorrect), S3 = 1 (correct). Since the student failed S2, column 2 of the original Q-matrix changes from 0-0-1 to 0-0-0. We refer back to our Q-matrix to calculate the value of each microskill by summing every entry of 1 in each row (MS). The mastery of each microskill is the ratio of its value in the modified Q-matrix to its value in the original Q-matrix: MS1 = 2/2 = 1, MS2 = 1/1 = 1, MS3 = 2/3 = 0.66. It is then up to educators to decide on a cutoff value for expertise. In our case, our suggested cutoff value is 70% (0.70) (Tatsuoka, 2009), which means that any calculated mastery that falls below this value marks a microskill that needs further studying. These results mean that the knowledge space of Student A is MS1 and MS2, and that the student is lacking MS3. We will provide the code we used to calculate the knowledge space of students. We have provided a general overview of Q-matrix theory; for more information refer to Tatsuoka (2009).
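The calculation walked through for Student A can be re-implemented in a few lines. The sketch below is our own minimal version of such code (not necessarily the authors' original script):

```python
# Q-matrix for the LED-circuit example.
Q = [
    [1, 0, 1],  # MS1: current flow
    [0, 0, 1],  # MS2: polarity
    [1, 1, 1],  # MS3: connections
]

def knowledge_space(Q, step_scores, cutoff=0.70):
    """Zero out Q-matrix columns for failed steps, then report each
    microskill's mastery ratio and the microskills at or above the cutoff."""
    ratios = []
    for row in Q:
        earned = sum(r * s for r, s in zip(row, step_scores))
        ratios.append(earned / sum(row))
    mastered = [ms + 1 for ms, r in enumerate(ratios) if r >= cutoff]
    return ratios, mastered

# Student A: S1 correct, S2 incorrect, S3 correct.
ratios, mastered = knowledge_space(Q, [1, 0, 1])
print([round(r, 2) for r in ratios])  # [1.0, 1.0, 0.67]
print(mastered)                       # [1, 2] -> MS3 needs further study
```

With the 0.70 cutoff, the function reports Student A's knowledge space as {MS1, MS2}, matching the worked example.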

Design microskills in an AR environment

As we explained in our related work section, there is little reference on how to design the AR content for an educational curriculum, as most classroom implementations were done using an empirical approach and typically focused on how to integrate AR into the classroom, rather than how to customize the AR content itself. Thus, we decided to approach the design with an emergent coding approach (Blair, 2015), in which we clustered the types of microskills we could recognize in AR: (1) Perceptual, which refers to the time-specific knowledge designed to attract the attention of the user and deliver visual information (Hoffmann et al., 2008; Kishishita et al., 2014; Lee et al., 2019; Rusch et al., 2013; Schwerdtfeger & Klinker, 2008; Steinberger et al., 2011; Volmer et al., 2018; Waldner et al., 2014); (2) Cognitive, which refers to the time-specific knowledge to generate and collect information from the user’s working memory (Beheshti et al., 2017; Cai et al., 2014; Chan et al., 2013; Kapp et al., 2019; Knierim et al., 2018; Prilla, 2019; Strzys et al., 2017); (3) Motor, which refers to the time-specific knowledge to properly perform an operation or process (Bhattacharya & Winer, 2019; Eckhoff et al., 2018; Gavish et al., 2015; Mohr et al., 2017; Wang et al., 2016; Webel et al., 2013; Westerfield et al., 2015). In Table 1, we go into further detail on the educational purposes for each type of microskill and give guidance on how to translate it into AR in terms of content design. We also provide some practical examples of AR in electronics in which the design techniques can be deployed. We anticipate that, as learning content, any microskill must be accompanied by voice or text narration of the context.

Table 1 Microskills aligned as AR content. AR content design principles based on the type of identified microskill

Let us look at the microskills in our Q-matrix:

  1. (1)

    Current flow, can be conveyed through the animation of invisible phenomena. For example, long-format animation with electricity effects to show current, can represent this cognitive microskill.

  2. (2)

    Polarity, requires both understanding the direction of current and recognizing the shape of an object to indicate positive and negative terminals. For example, an AR animation to demonstrate current flowing through + and − terminals and overlaid information in the form of plus and minus signs to each terminal of the LED can represent this perceptual-cognitive microskill.

  3. (3)

    Understanding of circuit connections, requires manipulating components based on circuitry logic. For example, an AR interactive example can represent this motor microskill.

After following the design principles for what the microskills will look like, we have to determine how these will be presented to the users. In scaffolding methodology for multimedia, the technology fades away as the student completes the tasks and slowly becomes more independent. This process is a key aspect in aiding learning success (Chen et al., 2003; Hill & Hannafin, 2001; Huisinga, 2017; Marchand-Martella et al., 2013). If we select several tasks that are conducive to learning the skill (i.e., electrical circuitry), it follows that after each task, based on the student’s performance (whether each step was completed correctly or not), we can re-calculate the student’s knowledge space. This knowledge space determines which microskill the student is lacking, for example, MS3 (circuit connections). Thus, we will test two AR conditions: (1) PartialAR, which only presents the student with MS3, emphasizing this knowledge gap; (2) FullAR, which presents all microskills MS1, MS2, and MS3, letting the student explore which AR content they want to review. Based on the design principles, we will prepare the microskills for several tasks, and based on our two AR conditions, we will evaluate our user studies.
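The content-selection logic for the two conditions reduces to a simple rule over the re-calculated knowledge space. A sketch, with a function name of our own choosing:

```python
def ar_modules_to_show(mastered, all_microskills, condition):
    """PartialAR surfaces only the microskills the learner is lacking;
    FullAR always surfaces every microskill for free exploration."""
    if condition == "PartialAR":
        return [ms for ms in all_microskills if ms not in mastered]
    return list(all_microskills)

# Student A's knowledge space is {MS1, MS2}, so PartialAR shows only MS3.
print(ar_modules_to_show({1, 2}, [1, 2, 3], "PartialAR"))  # [3]
print(ar_modules_to_show({1, 2}, [1, 2, 3], "FullAR"))     # [1, 2, 3]
```

Running this after every task, as the knowledge space is re-calculated, is what lets the PartialAR scaffolding shrink as the student masters more microskills.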

The tasks

Electrical circuitry

Electrical circuitry was placed in an area of broader investigation as part of the surge of interest in acquiring knowledge through a hands-on, minds-on approach (Council, 2012). Thus, we will use AR to encourage students to grasp the concepts "at hand" and visualize "hidden factors" (e.g., current, polarity, etc.) while making an operating circuit, and go beyond following a series of steps to build a working circuit. This is particularly pertinent because misconceptions of how circuits work have been found even in undergraduates from physics and engineering courses (Fredette & Lochhead, 1980). Two of the researchers have had previous experience shadowing an IoT development course for undergraduates without previous background in electronics. While some of the microskills were extracted from existing literature on elementary knowledge of electric circuits (e.g., current flow, polarity, connections, series, parallel) (Osborne, 1983, 1985; Osborne et al., 1991; Peppler et al., 2019; Shepardson & Moje, 1994), the remaining microskills were derived from class observation and scrapbooking (note-taking and pictures), mainly during the first five weeks of classes (3 h weekly). The two researchers, along with a learning sciences expert, outlined the learning outcomes for each one of the microskills.

In Table 2, we give an in-depth explanation of the learning outcomes that we set as goals for each microskill. These microskills were selected from existing literature on basic concepts of electrical circuitry that are pervasively misunderstood (Osborne, 1983, 1985; Osborne et al., 1991; Peppler & Glosson, 2013; Peppler et al., 2016, 2019; Shepardson & Moje, 1994; Webb, 1992). The following meta-steps were not explicitly given to students, but these were meant for the researchers to prepare the general Q-matrix (Table 3) and to keep score of correct and incorrect steps by students. In such a way, after every task, we can re-calculate the knowledge space for each student.

Table 2 Microskills and learning outcomes
Table 3 Prepared Q-matrix for all selected tasks

Meta-steps

  1. A.

    Locate < insert list of components > for circuit assembly.

  2. B.

    Place and fit < insert microcontroller type and miscellaneous component(s) > into the breadboard.

  3. C.

    Find and interpret specs sheet of < insert microcontroller type and miscellaneous component(s) > to identify the digital and analog connectivity pins.

  4. D.

    Connect < LED cathode > to ground < rail of breadboard or pin of microcontroller > .

  5. E.

    Connect < LED anode > to < resistor > .

  6. F.

    Connect < resistor > to < digital pin of microcontroller > .

  7. G.

    Connect < digital pin(s) or analog pin(s) or power pin or ground pin of miscellaneous component > to < digital pin(s) or analog pin(s) or power pin or ground pin of microcontroller > . Note: The complexity of this step depends on the amount of required connections between the microcontroller and the miscellaneous components.

  8. H.

    Connect < power, ground of miscellaneous components > to < power, ground of microcontroller > .

  9. I.

    Connect ground and power < pin(s) of microcontroller > to ground and power < rails of breadboard > .

  10. J.

    Connect ground and power < battery terminals > to ground and power < rails of breadboard > .

The information inside <  > can be modified depending on the task at hand; however, the general structure of the steps is similar across tasks.

Study

We recruited 20 undergraduates (55% male, 45% female), ages ranging from 18 to 34 (Mean = 23.1, SD = 2.69), to participate in our studies. All participants reported no significant background in electrical circuitry or physics (Mean = 1.25, SD = 0.43) on a scale from 1 (novice) to 5 (expert). Participants were split into two conditions (FullAR vs. PartialAR), and each student participated in an individual session 1 (2 h) and session 2 (2 h) of the user study. We scheduled each student’s sessions 1 and 2 exactly one week apart.

The microskills were delivered using AR technology as the fundamental knowledge that was required to complete the tasks, and they could be accessed at any time by the participants during the sessions. Apart from the two interactive examples (AR tutorials), which provided a series of procedural AR animations on how to assemble a working circuit, no additional instructions were provided to students. Prior to each task, the researcher provided the students with a description of the desired outcome (Fig. 4). For example: "You need to set up a circuit in which you use your ESP32 (microcontroller) so that every time you press down on your pushbutton, you turn on the LED". Every microcontroller had the code for the task already uploaded, and the specs sheet provided for each component contained the pin numbers of the microcontroller that had to be used.

Fig. 4

(1) Phone-AR setup, (2) procedural AR (example), (3) task 2: color sensor (green) turns on LED, (4) task 4: potentiometer controls LED

The first session for the PartialAR condition was as follows: students started by exploring all 10 microskills plus interactive example 1, followed by a test (Task 0); then, once each student’s knowledge space was calculated, we exposed the participants to the microskills found lacking as they performed Task 1. For the FullAR group, similarly, participants started by exploring all 10 microskills plus interactive example 1, followed by Task 0; then we again gave them access to all the microskills as they performed Task 1. For a detailed flowchart, see Fig. 5.

Fig. 5

Flowchart of the experimental timeline

The second session for the PartialAR condition was as follows: prior to every task, we re-calculated the new knowledge space of each participant and provided participants with customized AR based on the microskills that were found to be lacking. For the FullAR group, participants performed Tasks 2–8 with all 10 microskills made available at each task.

The tasks and interactive examples presented to students were in the following order:

Session 1: Interactive example 1: Turn on LED when pushbutton is pressed down. Task 0: Turn on LED when ultrasonic distance sensor detects an obstacle (e.g., a hand) at a certain proximity. Task 1: Turn on LED when distance calculated by obstacle detector sensor and an obstacle (e.g., a hand) falls below a threshold.

Session 2: Interactive example 2: procedural instructions for Task 0. Task 2: Turn on LED when color detector sensor detects a green object in its path. Task 3: Turn on LED when the temperature detected by the humidity and temperature sensor reaches a threshold. Task 4: Control the intensity of the LED using the potentiometer. Task 5: Turn on LED and display a message on the LCD screen (e.g., “Hello World”). Task 6: Turn on LED when joystick is pressed down. Task 7: Connect three 100 Ohms resistors in series to sum up to a resistance of 300 Ohms, then use the pushbutton to turn on LED when pressed. Task 8: Connect four 1 kOhms resistors in parallel to lower the resistance to 250 Ohms, then use the pushbutton to turn on LED when pressed. Note that the microskills available for studying were available throughout the tasks depending on the AR condition (Fig. 6).

Fig. 6

Overview of microskills: (1) polarity, (2) current flow, (3) components. (4) Interactive example: procedural instructions of ultrasonic sensor control of LED

Results

Keeping score during the tasks

In order to re-calculate the knowledge space, the researchers recorded whether each student completed each step correctly; each correct step added one point to a participant’s total. We performed a one-way ANOVA to compare the means between the two conditions (FullAR vs. PartialAR) across the two sessions (1st vs. 2nd). There was a statistically significant difference between groups as determined by one-way ANOVA (F(3, 36) = 49.61, p < 0.001). A Tukey post hoc test compared the average scores (out of 10 points, one per microskill) across the groups. There was a statistically significant difference between the FullAR-1st session (3.7 ± 1.55) and PartialAR-1st session (5.7 ± 1.27) groups (p = 0.002), with the PartialAR condition enabling students to outperform the FullAR condition. We also found a statistically significant difference between the FullAR-2nd session (7.98 ± 0.51) and PartialAR-2nd session (9.34 ± 0.47) groups (p = 0.04). Overall, the PartialAR condition, in which content was tailored to each student’s re-calculated knowledge space, enabled statistically significantly higher performance than FullAR in both sessions.
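The between-group comparison can be reproduced in outline with a plain one-way ANOVA F statistic; the function below is a generic pure-Python sketch, and the sample scores are invented, not the study’s data.

```python
from statistics import mean

def one_way_anova_f(groups):
    """Return the F statistic for a one-way ANOVA over a list of samples."""
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - mean(g)) ** 2 for x in g) for g in groups)
    df_between = len(groups) - 1
    df_within = n_total - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Two toy groups with clearly separated means; F = 13.5 for this example.
f = one_way_anova_f([[1, 2, 3], [4, 5, 6]])
```

The F value is then compared against the F distribution with (df_between, df_within) degrees of freedom to obtain a p value; a library such as SciPy would normally handle that step.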

Assessment by microskill

We wanted a per-microskill breakdown of the participants’ performance, to determine which microskills they found the most difficult and whether these matched our observations. The average score per microskill (MS) was as follows: MS1 = 9.94 ± 0.17, MS2 = 9.44 ± 1.16, MS3 = 9.89 ± 0.33, MS4 = 7.63 ± 2.16, MS5 = 8.44 ± 1.70, MS6 = 6.75 ± 2.96, MS7 = 8.06 ± 2.03, MS8 = 8.31 ± 1.83, MS9 = 8 ± 2.49, MS10 = 9.25 ± 1.26. We compared the two lowest-scoring microskills, MS4 and MS6, with a paired t-test. MS6 was statistically significantly more difficult than the population’s normal performance score across all tasks, t(15) = 2.21, p = 0.02. We found no statistically significant difference between the next bottom pair, MS6 and MS7 (p = 0.22). This is consistent with our observation that breadboard logic was the most difficult microskill: the most common mistake occurred when students hesitated or became confused about how to ‘close’ the power and ground of the circuit in the power rails.
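The microskill comparison relies on a paired t-test; a minimal sketch of the statistic is below, with invented per-participant scores (the study’s raw data are not reproduced here).

```python
from statistics import mean, stdev

def paired_t(x, y):
    """Paired t statistic for two equal-length, matched samples."""
    d = [a - b for a, b in zip(x, y)]
    return mean(d) / (stdev(d) / len(d) ** 0.5)

# Illustrative matched scores for two microskills across five participants.
ms_a = [8, 7, 9, 6, 8]
ms_b = [7, 6, 8, 5, 6]
t = paired_t(ms_a, ms_b)
```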

Think-aloud understanding of circuitry

Following the skilling part of our first sessions, in which we presented the students with all relevant information, we deployed the think-aloud method (Olson et al., 2018) to evaluate the thought process and logic used to complete different circuit tasks. We repeated similar questions at the end of the first session and at the end of the second session. One of the researchers conducted the majority of the transcription, while another researcher coded 40% of it to establish inter-rater reliability. There was moderately strong agreement between the two researchers’ judgements, κ = 0.773 (95% CI), p < 0.0005.
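Inter-rater agreement of this kind can be computed directly from the two coders’ label sequences with Cohen’s kappa. The labels below are illustrative placeholders, not the study’s actual codes.

```python
def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two raters' categorical labels over the same items."""
    n = len(coder_a)
    labels = set(coder_a) | set(coder_b)
    p_observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    p_expected = sum(
        (coder_a.count(l) / n) * (coder_b.count(l) / n) for l in labels
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Toy coding of six transcript segments by two raters.
a = ["vague", "fluent", "fluent", "vague", "fluent", "fluent"]
b = ["vague", "fluent", "vague", "vague", "fluent", "fluent"]
kappa = cohens_kappa(a, b)
```

On this toy data the coders agree on 5 of 6 labels, giving κ ≈ 0.67; kappa discounts the agreement expected by chance, which is why it is preferred over raw percent agreement.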

The following are snippets of a conversation carried out between Researcher 1 (R1) and Participant 9 (P9) during the assessment part (task 2) of session 1:

  • R1: Do you have a sense of what your closed circuit will look like? What will happen?

  • P9: At the end of the circuit? I’m going to connect it to the < ultrasonic distance > sensor, and if all is well then the light is going to turn on (inserts resistor into the breadboard).

  • R1: Yes, that’s the idea. Do you have a sense of why you are using a resistor in your circuit?

  • P9: To try to lessen the electricity that goes to the light, to avoid breaking the light (points to LED in breadboard).

  • R1: Do you have a sense of what the ends of the LED are telling you?

  • P9: Smaller part is the cathode and the longer part is the positive part.

  • R1: Do you have an idea of how to connect your sensor to the board?

  • P9: Not too much, but I have to try to have the electricity go through all the board. I’m just not sure how to connect it, I guess (participant proceeds to multiple trials to connect the circuit).

  • R1: Is your circuit closed? Why?

  • P9: I don’t know…This is it? (participant hands it over to researcher).

Most participants still had many questions about the content after the first session, but the second session concluded with participants being capable of providing proper, coherent responses about their circuits. We will expand on these observations in the Findings section. For example, the following are snippets of a conversation carried out between Researcher 1 (R1) and Participant 15 (P15) towards the end of session 2 (task 8):

  • R1: So do you have a sense of how a closed circuit looks like?

  • P15: I have to go from positive charge to a negative charge. So I always know that I have to go from the power to the ground and everything has to be connected so that it will light up the LED…(explains her circuit in much greater detail).

  • R1: Do you know why we were using the resistor in the circuits? (points to the resistor in the board).

  • P15: So I know that < the resistor > controls the current, so, like, it would make sense to regulate how much current gets through and not break the LED. (points to the current going from ESP32 to resistor to LED).

  • R1: Do you know what the LED having a long side and a short side mean?

  • P15: Yeah, this shows the polarized sides, so that the long side shows the positive side and the negative side goes to ground–negative side goes to ground, positive side goes to power or load (holds ends of the LED and spreads them with hand).

  • R1: How do you check that your circuit is done? (no load is applied yet).

  • P15: I would make sure that if I have a sensor—like the color < sensor > —, have to make sure it goes to the pins, and so like, these pins < from sensor > go to these pins of the ESP32, then the ground or the power from that sensor is on the board for the positive and negative charge, and then I make sure that the LED starts from the resistor. So I make sure that everything has current. I think of it as a circle that I need to close. (points to all components in breadboard one by one).

Post-tasks multiple-choice examination

At the end of the second session, we tested the students with a multiple-choice questionnaire in order to evaluate their understanding of each microskill. The test included 10 questions, each meant to target one or more microskills. We compared the average score between the two conditions with a two-sample t-test assuming unequal variances and found no statistically significant difference between the groups, t(13) = − 0.412, p = 0.34. Some of the questions included: choose the schematic of a working circuit; choose the functionality of a resistor; read the value of the following resistor; which of the following is true about this series circuit? Participants’ scores in both conditions were higher than expected (70% cutoff), averaging 8.8 ± 1.49 for the FullAR condition and 9.03 ± 0.69 for the PartialAR condition out of the 10 points. This suggests that as students gained more access to the AR content, the difference between conditions disappeared.
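A two-sample t-test assuming unequal variances is Welch’s test; a minimal pure-Python version of the statistic is sketched below with invented questionnaire scores.

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    return (mean(a) - mean(b)) / (variance(a) / len(a) + variance(b) / len(b)) ** 0.5

# Illustrative questionnaire scores out of 10 for each condition.
full_ar = [9, 8, 10, 7, 9]
partial_ar = [9, 9, 10, 8, 9]
t = welch_t(full_ar, partial_ar)
```

The matching degrees of freedom come from the Welch–Satterthwaite approximation, which is why the reported df (13) is not simply n1 + n2 − 2.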

Qualitative and quantitative findings

Adoption of a new vocabulary as evidence of learning

Building a circuit does not necessarily translate to understanding concepts, which is why we used the think-aloud method to follow and gain insight into students’ learning process. Researchers noticed that as students became more exposed to the concepts of electrical circuitry, they became more articulate and began adopting words for which they previously had no familiarity or use. After the first session, students had somewhat vague ideas about the recently introduced concepts, which was also reflected in how they answered the questions. Participants used vague words such as ‘thing’, ‘energy’, or ‘light’, pointed to objects when talking about the components or concepts they wanted to explain, or often said that they were unsure of what was going on. For example, they phrased their statements as ‘I think that the resistor is used for…’ (P3) or responded to a question with another question, such as ‘Yes, I kind of remember…what is the name of this?’ (P16, referring to the ESP32). Since this was an assessment session, we were not expecting them to fully understand or internalize the microskills, but it was useful to compare their answers to the knowledge space we calculated for each student. Students were found to lack several microskills (especially cognitive ones), which meant they needed to study more of the AR content.

Then, towards the end of the second session, once participants had had a chance to become familiar with their circuits and the AR environment, we asked questions and had participants walk us through their logic. The researchers observed that as participants successfully completed all the tasks, they also gained fluency with the concepts and the objects they were manipulating. For example, they adopted words like ‘voltage’, ‘closed loop’, ‘charge’, ‘current’, ‘anode’, and ‘cathode’. Another important development was that as the participants became fluent with the new vocabulary and concepts, they became capable of faster troubleshooting of their circuits. For example, the most common mistakes throughout the tasks were related to breadboard logic (MS6), as participants would often forget to close the loop by connecting power or ground terminals to obtain a working circuit. Upon trying and failing to power their circuits, most participants’ first instinct was to check these types of connections.

Improvement of the AR content

Following the first assessment session, we were able to pick up on participants’ first impressions of the AR technology. Participants found AR useful and wanted to explore it further to understand electrical circuitry. We asked them what they liked and did not like about the AR content:

Occlusion and misalignment control

Some participants raised concerns about the breadboard shader (i.e., the virtual breadboard superimposed on the physical breadboard) because it occluded the physical breadboard and their hands, which made it confusing to follow along with the interactive examples (connections). Also, electrical components are quite small, which means that object tracking in AR is not as accurate as it would be with larger objects. While tracking using QR codes presented only minor misalignments, these, combined with the shader occlusion, led to more mistakes during the examples because participants could not follow and match the pins. Thus, prior to the second session, we applied some techniques to bypass these issues. For example, we created an invisible shader to keep a virtual model into which the components would fit (without occluding the hands), and we also added the symbology of the power rails and the numbers and letters typical of a breadboard (Fig. 7). We then used AR to project the pin numbers of a component (e.g., resistor to pin D23 of the ESP32) in large letters. Another way to avoid confusion was to encourage users to explore the zoom functionality of the AR technology, with which they could simply read the letters and pins of components and match them during assembly; however, the importance of matching specific pins between components was not obvious until we emphasized the name or number of each pin. These new design schemes helped participants during session 2 and eliminated errors during the interactive examples, which were important for understanding breadboard logic and connections.

Fig. 7

(Bottom) New transparent shader (virtual) overlaid on the physical breadboard; the power and ground rails and the breadboard’s numbering and letters are the non-transparent features. (Top) Previous solid shader, which had to be discarded due to occlusion issues

Interactive AR is better than embedded video

We had decided to provide one interactive example per session to enable participants to explore connections between components and how to manipulate them. These examples were considered complementary, meant to further internalize the ability to understand connections, which was already explained in an embedded AR video. However, based on participants’ responses, we learned that these examples were essential rather than complementary, because participants understood connections only after following along, building a working circuit, and manipulating components with their own hands. Thus, we needed to make sure that the interactive examples were easy to follow and that participants could explore the electrical phenomena and the components. For example, we adjusted the transparency of the components so that multiple wires would not occlude each other, and we kept only the terminal ends of the wires to avoid confusing the participants. This type of 3D object exploration is particular to AR technology, and participants preferred being able to explore the components in this way.

Voice narration was key and analogies worked best for complicated concepts

According to participants’ responses, voice narration, which accompanied every microskill, was useful for providing long-format context and explanation of the different concepts, and participants requested that it be included in the interactive examples for the second session. Analogies were also described as extremely useful for understanding new concepts. For example, several participants brought up how helpful the water-flow analogy was for relating to current flow (see Table 1), and how analogies helped them think of electrical circuitry in their own terms (e.g., a circuit as a circle).

Full AR vs. partial AR

As observed in the Results section, partial AR enabled participants to achieve a superior overall score. Participants with full AR had access to all the microskills, even the ones they had already mastered, which typically meant that although they could freely explore all the microskills, they were lost as to what specific knowledge they were missing, since it was not emphasized among all the information. The PartialAR group showed overall superior performance through their access to targeted microskills: after every task, they were directed to exactly the knowledge they had missed. However, as both groups continued exploring the learning content and completing more tasks, the difference between their scores (i.e., the gaps in their knowledge spaces) became insignificant, as most students successfully finished the last task with almost no errors; this was also observed in their written (post-task) examinations. Thus, we can conclude that PartialAR (scaffolded AR based on the gaps in participants’ knowledge spaces) can be particularly beneficial at the beginning of the learning process, when participants struggle to acquire new knowledge.

Aligning the microskills to AR design principles

We leveraged the expertise of the researchers, who had previously been involved in electrical circuitry classes and workshops, to carefully select the microskills necessary to fulfill the variety of circuitry tasks. This part of the process is fundamental to creating a Q-matrix, mapping the associations among the microskills and steps, and validating the hierarchies among the microskills. The results indicated that the microskills were accurate and sufficient, as participants obtained a high average score (M = 8.96). To decide how to best represent the microskills in AR, we referred to Table 1 to determine whether each microskill was perceptual, cognitive, motor, or, often, a combination of these. For example, recognizing components was best exemplified by highlighting each component on the breadboard (perceptual); current flow, an invisible electrical phenomenon, was best represented in long format by an animation with electrical effects (cognitive); and connections required manipulating components based on circuitry logic (motor), for which we concluded that an AR embedded video followed by an interactive example worked best. Our AR content design principles listed in Table 1 are not meant to be binding, but to be used as a guide for delivering learning content to the students. We suggest considering those techniques that are coded specifically to deliver small segments of information (microskills) to the students.
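The category-to-representation mapping described above can be captured as a small lookup table; the dictionary below is our own sketch following the text, with assumed wording for each technique (Table 1 holds the authoritative mapping).

```python
# Illustrative mapping from microskill category to AR content type.
AR_REPRESENTATION = {
    "perceptual": "highlight/label overlay on the physical component",
    "cognitive": "narrated long-format animation (e.g., current-flow effects)",
    "motor": "embedded AR video followed by an interactive example",
}

def representations(categories):
    """A microskill may combine categories; return one technique per category."""
    return [AR_REPRESENTATION[c] for c in categories]

# 'Connections' is both cognitive and motor in this sketch.
connection_designs = representations(["cognitive", "motor"])
```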

Achieving learning outcomes in AR

Each microskill we selected was accompanied by at least two learning outcomes. Selecting learning outcomes for each microskill is not part of Q-matrix theory, which generally determines a student’s knowledge space based on whether each step of a task is performed correctly. However, setting achievable goals for each microskill gives us a concrete metric for testing each student’s knowledge. In the case of the multiple-choice examination, we designed each question to address the learning outcomes we expected the students to have acquired. For example, one of the questions asked the students to name the components in a list of six. Every participant answered this question correctly, and the overall average score for all participants was quite high.

Limitations in our experiments

Our experiments used a relatively small sample from an undergraduate population at a US university. We would need a much larger and more diverse population to obtain a conclusive list of microskills sufficient to enable novices to acquire the basic skill of electrical circuitry. However, the results were quite promising across tasks and examinations, and the authors would like to encourage similar experiments in order to use our list as part of an electrical circuitry curriculum. Our model could also potentially be applied successfully to other multimedia (e.g., video tutorials, 2D animations), which do not necessarily need to be embedded in an AR environment. However, AR is a particularly useful tool, capable of improving performance and spatial skills for highly spatial tasks (e.g., assemblies, connections, repairs), and we also observed in our own experiments that embedded video was not as effective as AR for explaining some concepts, which participants only learned by building their circuits. When deciding whether AR is the right tool for a classroom, it is important to carefully analyze whether the selected educational tasks would benefit from its use.

Future work and potential of AR technology

We plan to implement an AR-based curriculum for the next iteration of the electric circuits and IoT development course for undergraduates, which two of the researchers will be instructing. Our workflow will be used to bring novices to an elementary knowledge of electrical circuitry. This was one of the reasons we chose phone-based AR: it makes the learning material scalable and accessible to students even prior to class, or to long-distance students. The first few iterations of this curriculum will be considered experimental, but they will help us continually refine the curriculum and the tools we use to teach novices. Similarly, it would be interesting to see our workflow implemented in other subject areas (e.g., physics, chemistry, biology).

Conclusion

In this paper, we presented the use of Q-matrix theory as a design framework for developing an AR-based curriculum. This workflow systematically implements AR learning content, informed by cognitive assessment, into an educational curriculum, in our case for mastery of basic electrical circuitry. We provided a list of suggested design principles to be used as a guide for delivering AR educational content. We evaluated the associations between microskills (the smallest segments of knowledge) and the steps required to complete diverse and complex tasks. In our evaluation, we demonstrated that scaffolded AR worked better when students had recently been introduced to the novel concepts. To assess the learning of electrical circuitry in our participants, we used three types of evaluation: quantitative scores from each completed task, the think-aloud method to follow their acquisition of new vocabulary and their learning process, and a written examination (after the second session) to verify their understanding of circuitry concepts. We showed that our workflow effectively leads novices to acquire basic knowledge of electrical circuitry. Finally, we demonstrated that aligning AR technology to specific learning objectives paves the way for high-quality assessment, teaching, and learning.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

AR: Augmented reality

MS: Microskill

S: Step

References

  1. Akçayır, M., & Akçayır, G. (2017). Advantages and challenges associated with augmented reality for education: A systematic review of the literature. Educational Research Review, 20, 1–11.

  2. Beheshti, E., Kim, D., Ecanow, G., & Horn, M.S. (2017). Looking inside the wires: Understanding museum visitor learning with an augmented circuit exhibit. In Proceedings of the 2017 Chi Conference on Human Factors in Computing Systems, pp. 1583–1594. ACM.

  3. Bhattacharya, B., & Winer, E. H. (2019). Augmented reality via expert demonstration authoring (areda). Computers in Industry, 105, 61–79.

  4. Blair, E. (2015). A reflexive exploration of two qualitative data coding techniques. Journal of Methods and Measurement in the Social Sciences, 6(1), 14–29.

  5. Bower, M., Howe, C., McCredie, N., Robinson, A., & Grover, D. (2014). Augmented reality in education—cases, places and potentials. Educational Media International, 51(1), 1–15.

  6. Buck, G., & Tatsuoka, K. (1998). Application of the rule-space procedure to language testing: Examining attributes of a free response listening test. Language Testing, 15(2), 119–157.

  7. Cai, S., Wang, X., & Chiang, F.-K. (2014). A case study of augmented reality simulation system application in a chemistry course. Computers in Human Behavior, 37, 31–40.

  8. Cai, Y., Tu, D., & Ding, S. (2018). Theorems and methods of a complete Q-matrix with attribute hierarchies under restricted Q-matrix design. Frontiers in Psychology, 9, 1413.

  9. Casalino, G., Castiello, C., Del Buono, N., Esposito, F., & Mencar, C. (2017). Q-matrix extraction from real response data using nonnegative matrix factorizations. In International conference on computational science and its applications (pp. 203–216). Springer.

  10. Chan, J., Pondicherry, T., & Blikstein, P. (2013). Lightup: an augmented, learning platform for electronics. In Proceedings of the 12th International Conference on Interaction Design and Children, pp. 491–494. ACM.

  11. Chen, Y.-S., Kao, T.-C., & Sheu, J.-P. (2003). A mobile learning system for scaffolding bird watching learning. Journal of Computer Assisted Learning, 19(3), 347–359.

  12. Chen, Y., Liu, J., Xu, G., & Ying, Z. (2015). Statistical analysis of Q-matrix based diagnostic classification models. Journal of the American Statistical Association, 110(510), 850–866.

  13. Cheng, K.-H., & Tsai, C.-C. (2013). Affordances of augmented reality in science learning: Suggestions for future research. Journal of Science Education and Technology, 22(4), 449–462.

  14. Chiang, T.H.-C., Yang, S. J., & Hwang, G.-J. (2014). An augmented reality-based mobile learning system to improve students’ learning achievements and motivations in natural science inquiry activities. Educational Technology & Society, 17(4), 352–365.

  15. Chiu, C.-Y., Douglas, J. A., & Li, X. (2009). Cluster analysis for cognitive diagnosis: Theory and applications. Psychometrika, 74(4), 633.

  16. Cuendet, S., Bonnard, Q., Do-Lenh, S., & Dillenbourg, P. (2013). Designing augmented reality for the classroom. Computers & Education, 68, 557–569.

  17. Desmarais, M.C., Beheshti, B., & Naceur, R. (2012) Item to skills mapping: Deriving a conjunctive Q-matrix from data. In International Conference on Intelligent Tutoring Systems, pp. 454–463, Springer.

  18. Desmarais, M., Beheshti, B., & Xu, P. (2014) The refinement of a Q-matrix: Assessing methods to validate tasks to skills mapping. In Educational Data Mining.

  19. Dunleavy, M., Dede, C., & Mitchell, R. (2009). Affordances and limitations of immersive participatory augmented reality simulations for teaching and learning. Journal of Science Education and Technology, 18(1), 7–22.

  20. Eckhoff, D., Sandor, C., Kalkoten, D., Eck, U., Lins, C., & Hein, A. (2018). Tutar: Semi-automatic generation of augmented reality tutorials for medical education. In 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), pp. 430–431. IEEE.

  21. Ferrer-Torregrosa, J., Torralba, J., Jimenez, M., García, S., & Barcia, J. (2015). Arbook: Development and assessment of a tool based on augmented reality for anatomy. Journal of Science Education and Technology, 24(1), 119–124.

  22. Fredette, N., & Lochhead, J. (1980). Student conceptions of simple circuits. The Physics Teacher, 18(3), 194–198.

  23. Fugate, J. M., Macrine, S. L., & Cipriano, C. (2018). The role of embodied cognition for transforming learning. International Journal of School & Educational Psychology, 1–15, 274.

  24. Gavish, N., Gutiérrez, T., Webel, S., Rodríguez, J., Peveri, M., Bockholt, U., & Tecchia, F. (2015). Evaluating virtual reality and augmented reality training for industrial maintenance and assembly tasks. Interactive Learning Environments, 23(6), 778–798.

  25. Haberman, S. J., von Davier, M., & Lee, Y.-H. (2008). Comparison of multidimensional item response models: Multivariate normal ability distributions versus multivariate polytomous ability distributions. ETS Research Report Series, 2008(2), 25.

  26. Heller, J. (2019). Complete Q-matrices in general attribute structure models. PsyArXiv. https://doi.org/10.31234/osf.io/k82a5.

  27. Heller, J., Anselmi, P., Stefanutti, L., & Robusto, E. (2017). A necessary and sufficient condition for unique skill assessment. Journal of Mathematical Psychology, 79, 23–28.

  28. Hill, J. R., & Hannafin, M. J. (2001). Teaching and learning in digital environments: The resurgence of resource-based learning. Educational Technology Research and Development, 49(3), 37–52.

  29. Hoffmann, R., Baudisch, P., & Weld, D.S. (2008) Evaluating visual cues for window switching on large screens. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 929–938.

  30. Hsiao, K.-F., Chen, N.-S., & Huang, S.-Y. (2012). Learning while exercising for science education in augmented reality among adolescents. Interactive Learning Environments, 20(4), 331–349.

  31. Huisinga, L. A. (2017). Augmented reality reading support in higher education: Exploring effects on perceived motivation and confidence in comprehension for struggling readers in higher education. Graduate theses and dissertations. https://doi.org/10.31274/etd-180810-5151.

  32. Iftene, A., & Trandabăt, D. (2018). Enhancing the attractiveness of learning through augmented reality. Procedia Computer Science, 126, 166–175.

  33. Jonassen, D. H., Tessmer, M., & Hannum, W. H. (1998). Task analysis methods for instructional design. Routledge.

  34. Kamarainen, A. M., Metcalf, S., Grotzer, T., Browne, A., Mazzuca, D., Tutwiler, M. S., & Dede, C. (2013). Ecomobile: Integrating augmented reality and probeware with environmental education field trips. Computers & Education, 68, 545–556.

  35. Kapp, S., Thees, M., Strzys, M. P., Beil, F., Kuhn, J., Amiraslanov, O., Javaheri, H., Lukowicz, P., Lauer, F., Rheinländer, C., et al. (2019). Augmenting kirchhoff’s laws: Using augmented reality and smartglasses to enhance conceptual electrical experiments for high school students. The Physics Teacher, 57(1), 52–53.

  36. Kishishita, N., Kiyokawa, K., Orlosky, J., Mashita, T., Takemura, H., & Kruijff, E. (2014). Analysing the effects of a wide field of view augmented reality display on search performance in divided attention tasks. In 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 177–186. IEEE.

  37. Knierim, P., Kiss, F., & Schmidt, A. (2018). Look inside: Understanding thermal flux through augmented reality. In 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), pp. 170–171. IEEE.

  38. Köhn, H.-F., & Chiu, C.-Y. (2018). How to build a complete q-matrix for a cognitively diagnostic test. Journal of Classification, 35(2), 273–299.

  39. Lee, H., Kim, H., Monteiro, D.V., Goh, Y., Han, D., Liang, H.-N., Yang, H.S., & Jung, J. (2019). Annotation vs. virtual tutor: Comparative analysis on the effectiveness of visual instructions in immersive virtual reality. In 2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 318–327. IEEE.

  40. Leighton, J., & Gierl, M. (2007). Cognitive diagnostic assessment for education: Theory and applications. Cambridge University Press.

  41. Lin, H.-C.K., Hsieh, M.-C., Liu, E.Z.-F., & Chuang, T.-Y. (2012). Interacting with visual poems through AR-based digital artwork. Turkish Online Journal of Educational Technology TOJET, 11(1), 123–137.

  42. Lin, H.-C.K., Chen, M.-C., & Chang, C.-K. (2015). Assessing the effectiveness of learning solid geometry by using an augmented reality-assisted learning system. Interactive Learning Environments, 23(6), 799–810.

  43. Lu, S.-J., & Liu, Y.-C. (2015). Integrating augmented reality technology to enhance children’s learning in marine education. Environmental Education Research, 21(4), 525–541.

  44. Marchand-Martella, N. E., Martella, R. C., Modderman, S. L., Petersen, H. M., & Pan, S. (2013). Key areas of effective adolescent literacy programs. Education and Treatment of Children, 36(1), 161–184.

  45. Mayer, R. E. (2019). How multimedia can improve learning and instruction. In The Cambridge handbook on cognition and education. Cambridge University Press.

  46. Mayer, R. E., & Moreno, R. (2003). Nine ways to reduce cognitive load in multimedia learning. Educational Psychologist, 38(1), 43–52.

  47. Mohr, P., Mandl, D., Tatzgern, M., Veas, E., Schmalstieg, D., & Kalkofen, D. (2017). Retargeting video tutorials showing tools with surface contact to augmented reality. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 6547–6558.

  48. National Research Council. (2012). A framework for K-12 science education: Practices, crosscutting concepts, and core ideas. National Academies Press.

  49. Olson, G. M., Duffy, S. A., & Mack, R. L. (2018). Thinking-out-loud as a method for studying real-time comprehension processes. In New methods in reading comprehension research, p. 253.

  50. Osborne, R. (1983). Towards modifying children’s ideas about electric current. Research in Science & Technological Education, 1(1), 73–82.

  51. Osborne, R. (1981). Children’s ideas about electric current. New Zealand Science Teacher, 29. This work is also discussed in Osborne, R., & Freyberg, P. (Eds.). (1985). Learning in science: The implications of children’s science, pp. 21–26. London: Heinemann.

  52. Osborne, J., Black, P., Smith, M., & Meadows, J. (1991). Primary SPACE project research report: electricity. Liverpool University Press.

  53. Peppler, K., & Glosson, D. (2013). Stitching circuits: Learning about circuitry through e-textile materials. Journal of Science Education and Technology, 22(5), 751–763.

  54. Peppler, K., Keune, A., & Wohlwend, K.E. (2016). Design playshop: Preschoolers making, playing, and learning with squishy circuits. In Makeology, pp. 97–110. Routledge.

  55. Peppler, K., Wohlwend, K., Thompson, N., Tan, V., & Thomas, A. (2019). Squishing circuits: Circuitry learning with electronics and playdough in early childhood. Journal of Science Education and Technology, 28(2), 118–132.

  56. Prilla, M. (2019). “I simply watched where she was looking at”: Coordination in short-term synchronous cooperative mixed reality. Proceedings of the ACM on Human-Computer Interaction, 3(GROUP), 1–21.

  57. Radu, I., & Schneider, B. (2019). What can we learn from augmented reality (AR)? In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19), Paper 544, pp. 1–12. ACM. https://doi.org/10.1145/3290605.3300774.

  58. Rasimah, C. M. Y., Ahmad, A., & Zaman, H. B. (2011). Evaluation of user acceptance of mixed reality technology. Australasian Journal of Educational Technology. https://doi.org/10.14742/ajet.899

  59. Roussos, L. A., DiBello, L. V., Stout, W., Hartz, S. M., Henson, R. A., & Templin, J. L. (2007). The fusion model skills diagnosis system. In Cognitive diagnostic assessment for education: Theory and applications, pp. 275–318.

  60. Rusch, M. L., Schall, M. C., Jr., Gavin, P., Lee, J. D., Dawson, J. D., Vecera, S., & Rizzo, M. (2013). Directing driver attention with augmented reality cues. Transportation Research Part F: Traffic Psychology and Behaviour, 16, 127–137.

  61. Saltan, F., & Arslan, Ö. (2017). The use of augmented reality in formal education: A scoping review. Eurasia Journal of Mathematics, Science & Technology Education, 13(2), 503–520.

  62. Schwerdtfeger, B., & Klinker, G. (2008). Supporting order picking with augmented reality. In 2008 7th IEEE/ACM International Symposium on Mixed and Augmented Reality, pp. 91–94. IEEE.

  63. Shepardson, D. P., & Moje, E. B. (1994). The nature of fourth graders’ understandings of electric circuits. Science Education, 78(5), 489–514.

  64. Stefanutti, L., & de Chiusole, D. (2017). On the assessment of learning in competence based knowledge space theory. Journal of Mathematical Psychology, 80, 22–32.

  65. Steinberger, M., Waldner, M., Streit, M., Lex, A., & Schmalstieg, D. (2011). Context-preserving visual links. IEEE Transactions on Visualization and Computer Graphics, 17(12), 2249–2258.

  66. Strzys, M., Kapp, S., Thees, M., Kuhn, J., Lukowicz, P., Knierim, P., & Schmidt, A. (2017). Augmenting the thermal flux experiment: A mixed reality approach with the hololens. The Physics Teacher, 55(6), 376–377.

  67. Sungkur, R. K., Panchoo, A., & Bhoyroo, N. K. (2016). Augmented reality, the future of contextual mobile learning. Interactive Technology and Smart Education, 13(2), 123–146.

  68. Tatsuoka, K. K. (1983). Rule space: An approach for dealing with misconceptions based on item response theory. Journal of Educational Measurement, 20(4), 345–354.

  69. Tatsuoka, K. K. (1985). A probabilistic model for diagnosing misconceptions by the pattern classification approach. Journal of Educational Statistics, 10(1), 55–73.

  70. Tatsuoka, K. K. (1990). Toward an integration of item-response theory and cognitive error diagnosis. In Diagnostic monitoring of skill and knowledge acquisition, pp. 453–488.

  71. Tatsuoka, K. K. (1995). Architecture of knowledge structures and cognitive diagnosis: A statistical pattern recognition and classification approach. In Cognitively diagnostic assessment, pp. 327–359.

  72. Tatsuoka, K. K. (2009). Cognitive assessment: An introduction to the rule space method. Routledge.

  73. Templin, J., Henson, R. A., et al. (2010). Diagnostic measurement: Theory, methods, and applications. Guilford Press.

  74. Volmer, B., Baumeister, J., Von Itzstein, S., Bornkessel-Schlesewsky, I., Schlesewsky, M., Billinghurst, M., & Thomas, B. H. (2018). A comparison of predictive spatial augmented reality cues for procedural tasks. IEEE Transactions on Visualization and Computer Graphics, 24(11), 2846–2856.

  75. Waldner, M., Le Muzic, M., Bernhard, M., Purgathofer, W., & Viola, I. (2014). Attractive flicker—Guiding attention in dynamic narrative visualizations. IEEE Transactions on Visualization and Computer Graphics, 20(12), 2456–2465.

  76. Walker, Z., McMahon, D. D., Rosenblatt, K., & Arner, T. (2017). Beyond pokémon: Augmented reality is a universal design for learning tool. SAGE Open, 7(4), 2158244017737815.

  77. Wang, Y., & Jiang, W. (2018). An automatic classification and clustering algorithm for online learning goals based on cognitive thinking. International Journal of Emerging Technologies in Learning (iJET), 13(11), 54–66.

  78. Wang, X., Ong, S., & Nee, A.Y.-C. (2016). Multi-modal augmented-reality assembly guidance based on bare-hand interface. Advanced Engineering Informatics, 30(3), 406–421.

  79. Webb, P. (1992). Primary science teachers’ understandings of electric current. International Journal of Science Education, 14(4), 423–429.

  80. Webel, S., Bockholt, U., Engelke, T., Gavish, N., Olbrich, M., & Preusche, C. (2013). An augmented reality training platform for assembly and maintenance skills. Robotics and Autonomous Systems, 61(4), 398–403.

  81. Westerfield, G., Mitrovic, A., & Billinghurst, M. (2015). Intelligent augmented reality training for motherboard assembly. International Journal of Artificial Intelligence in Education, 25(1), 157–172.

  82. Williams, E., & Moran, C. (1989). Reading in a foreign language at intermediate and advanced levels with particular reference to English. Language Teaching, 22(4), 217–228.

  83. Williams, T., Krikorian, J., Singer, J., Rakes, C., & Ross, J. (2019). A high quality educative curriculum in engineering fosters pedagogical growth. International Journal of Research in Education and Science, 5(2), 657–680.

  84. Wilson, M. (2002). Six views of embodied cognition. Psychonomic Bulletin & Review, 9(4), 625–636.

  85. Wilson, A. D., & Golonka, S. (2013). Embodied cognition is not what you think it is. Frontiers in Psychology, 4, 58.

  86. Xu, G., & Zhang, S. (2016). Identifiability of diagnostic classification models. Psychometrika, 81(3), 625–649.

  87. Xu, G., et al. (2017). Identifiability of restricted latent class models with binary responses. The Annals of Statistics, 45(2), 675–707.

  88. Yoon, S. A., Elinich, K., Wang, J., Steinmeier, C., & Tucker, S. (2012). Using augmented reality and knowledge-building scaffolds to improve learning in a science museum. International Journal of Computer-Supported Collaborative Learning, 7(4), 519–541.

Acknowledgements

We wish to give special thanks to Yeliana Torres and Kaiwen Li for their help with figures and testing.

Funding

This work is partially supported by NSF under the grants FW-HTF 1839971, OIA 1937036, and CRI 1729486. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the funding agency.

Author information

Contributions

AV conceptualized and designed the work and participated in the acquisition, analysis, and interpretation of data. ZL worked on content design and conceptualization. YK and ZZ worked on the acquisition, analysis, and interpretation of data. KP drafted the work, conceptualized it, and substantively revised it. TR and KR drafted the work and substantively revised it. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Ana Villanueva.

Ethics declarations

Ethics approval and consent to participate

Purdue University Institutional Review Board has approved the study “Learning and Training Applications Using Augmented Reality” IRB #1906022313.

Consent for publication

The authors certify that the text and any pictures or videos published in this article will be freely available on the internet and may be seen by the general public. The pictures, videos, and text may also appear on other websites or in print, may be translated into other languages, or may be used for commercial purposes.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1. Supplementary video explaining our paper.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Villanueva, A., Liu, Z., Kitaguchi, Y. et al. Towards modeling of human skilling for electrical circuitry using augmented reality applications. Int J Educ Technol High Educ 18, 39 (2021). https://doi.org/10.1186/s41239-021-00268-9

Keywords

  • Modeling
  • Skills
  • Microskills
  • Q-matrix
  • Electrical circuitry
  • Circuits
  • Education
  • Knowledge space