
The influence of e-learning on exam performance and the role of achievement goals in shaping learning patterns

Abstract

Digital learning environments provide opportunities to support learning in higher education. However, it is yet unclear why and how learners use these opportunities. We propose that learners’ achievement goals and their beliefs regarding the instrumentality of e-learning tools for achieving those goals predict learning behavior within digital learning environments. Furthermore, we assume that learning behavior characterized by longer overall learning time, more distributed learning, and less learning delay predicts higher exam performance. To test these hypotheses, we analyzed log-file data of 91 university students who had used an intelligent tutoring system to prepare for an exam in a preregistered study. Beyond the overall predictive validity of the intelligent tutoring system, we found a negative association between learning delay and exam performance. Achievement goals predicted learning time and time distribution, an association that was partly moderated by perceived instrumentality. This suggests that goals and beliefs are important puzzle pieces for understanding e-learning (behavior).

Introduction

The development and implementation of learning platforms and e-learning software is meant to benefit learners in their striving for knowledge as well as in their ability to perform well in learning assignments. One special form of such software is the intelligent tutoring system, which provides learners with skill-level-appropriate exercises and repetition material (Mousavinasab et al., 2021). Past research has shown these tools to bolster learning outcomes (Kulik & Fletcher, 2016; Ma et al., 2014) and to be as effective as human tutoring (VanLehn, 2011). Moreover, intelligent tutoring systems can reduce achievement gaps rooted in a lack of learning opportunities, as they are highly accessible (Hickey et al., 2020). Because such tools are accessed through learners’ own devices and do not require students to be present in an actual classroom, their effective use requires students’ willingness to engage in self-regulated learning. In this regard, little is known about why and how students use intelligent tutoring systems and how differences in usage explain differences in the effectiveness of these systems. In particular, the interpretation of log-file data derived from such tools is a current research interest that oftentimes raises more questions than it answers (Baker et al., 2020). Hence, we aim to shed some light on the differences in learning behaviors within such tools based on psychological theories.

In line with Expectancy-Value-Theory of achievement motivation (Eccles & Wigfield, 2020; Wigfield & Eccles, 2000), we assume that students’ interaction with intelligent tutoring systems depends both on their goals and on their anticipation that the program will help them attain these goals. If students are either not strongly motivated by personal goals to engage in learning or feel that intelligent tutoring systems are not beneficial for their goal progress, they are likely to be less motivated to use such tools. When it comes to the way students engage with intelligent tutoring systems, the content of their respective goals might be of high importance. Here, achievement goal theory (Daumiller et al., 2019; Murayama et al., 2012) makes some clear propositions on how certain goals affect the choice of learning strategies. In the following, we further specify these ideas, which may spark new insights into how goals shape and motivate e-learning with intelligent tutoring systems. With the present work, we aim to put our proposed process model to the test to increase our knowledge of motivated action within intelligent tutoring systems.

Motivated action in e-learning environments

Out of the numerous e-learning environments, intelligent tutoring systems provide opportunities for practice testing and are as effective as human tutoring (Ma et al., 2014; Mousavinasab et al., 2021; VanLehn, 2011). Among other building blocks, such systems often provide exercises to improve learners’ knowledge. The application of such procedures is particularly impactful, as retesting consolidates memory more strongly than restudying (Roediger & Karpicke, 2006a, b). As a result, intelligent tutoring systems are potent in improving academic achievement, especially for low-performing students (Schwerter et al., 2022). A great merit of intelligent tutoring systems is that they allow for adaptive testing in terms of skipping and expanding well-understood content, as well as providing more repetition of exercises that are particularly challenging for the learner (Carpenter et al., 2012).

Initiating and maintaining learning activities always depends on learners’ self-regulation (Zimmerman & Schunk, 2011). This holds particularly true for digital learning systems, as learners are meant to use them on their own and typically outside of environments that prompt learning (such as classrooms; Azevedo et al., 2011; Winters et al., 2008). Here, it is important to state that it matters not only whether learners are capable of motivating themselves to put in the hours but also how they distribute their learning time. Research on the effects of practice testing suggests that more distributed learning activities are more beneficial (Cepeda et al., 2006; Kornell, 2009; Rawson et al., 2015), and the literature on procrastination claims negative effects of delayed learning activities (Klingsieck, 2013; Steel, 2007).

Given the high importance of self-regulated learning as a precondition for the effective use of intelligent tutoring systems, it is crucial to strengthen our understanding of why and how students engage in the use of such tools (Baker et al., 2020). If students are not motivated to engage with them, even the best-designed platforms will fail to have any meaningful impact on actual learning. As such, (achievement) motivation seems to be key for a broader understanding of the ramifications of tutoring systems. Achievement Goal Theory may be particularly helpful in explaining how individual differences in achievement motivation shape learning patterns (Diseth & Kobbeltvedt, 2010).

Goals as drivers of different approaches to (E-)learning

Achievement Goal Theory postulates that the quality of achievement motivation can be differentiated into fundamentally different goals. These goals describe what individuals deem to be beneficial in their striving for competence (Elliot, 2005; Hulleman et al., 2010). The two main goal classes are mastery goals, defined as the striving for competence through task mastery and learning, and performance goals, defined as the striving for competence through outperforming others (Dweck & Leggett, 1988; Elliot, 2005; Murayama et al., 2012). Researchers have further argued that each of these two goal systems encompasses two sub-facets that either describe the goal standard (i.e., level of comparison) or the goal standpoint (i.e., construal of competence; Korn et al., 2019): In this regard, mastery goals can be further differentiated into task goals (goal standard: comparison with task requirements) and learning goals (goal standpoint: feelings of competence emerge through personal growth). Performance goals, on the other hand, can be divided into normative goals (goal standard: comparison with the performance of others) and appearance goals (goal standpoint: feelings of competence emerge through the demonstration of abilities).

Regardless of whether researchers have used a fine-grained or a broad approach to defining achievement goals, they typically agree that it is beneficial to further differentiate achievement goals in terms of whether they reflect approach or avoidance tendencies. Additionally, they agree that avoidance goals typically lead to less beneficial outcomes than approach goals (Elliot & Harackiewicz, 1996; Murayama et al., 2011). While this distinction has certainly furthered our understanding of the impact of performance goals (Elliot & Church, 1997; Elliot & McGregor, 2001), it is still up for debate whether learning avoidance goals in particular (i.e., striving to avoid learning less than possible) form a meaningful goal system that connects to psychological functioning in learning situations (Daumiller & Dresel, 2020; Lee & Bong, 2016). In contrast, one form of avoidance motivation that certainly impacts motivated action in terms of disengagement is the striving for work avoidance, which is often conceptualized in terms of (work avoidance) goals alongside mastery and performance goals (King & McInerney, 2014).

Besides describing achievement motivation, achievement goals have proven influential in predicting patterns of learning. More specifically, mastery goals are generally associated with deep processing and persistence in learning activities (Diseth & Kobbeltvedt, 2010; Liem et al., 2008; Sideridis & Kaplan, 2011) but also with lowered levels of procrastination (Howell & Buro, 2009). This is likely because students who hold strong mastery goals are motivated to learn, understand, and develop their competencies, which is directly tied to investing effort in learning activities.

In contrast, students who pursue performance goals are supposedly less keen to truly understand the subject matter and more interested in attaining a good grade. While results on the choice of learning strategies are generally mixed (Diseth & Kobbeltvedt, 2010; Liem et al., 2008), it stands to reason that performance goals lead to stronger “learning to the test” and as such motivate massed learning instead of in-depth long-term learning. Empirical findings suggest that if this general tendency for delayed learning is combined with avoidance motivation (as in performance avoidance goals), students become more likely to engage in maladaptive delayed learning instead of distributing their learning activities (Martinie et al., 2022). This negative effect of avoidance motivation is probably most prominent in work avoidance goals, which have – in line with the core of the construct – shown clear positive associations with task disengagement (King, 2014; King & McInerney, 2014) and procrastination (Wolters, 2003).

Although the described associations speak for the predictive power of achievement goals for engagement in learning activities in general, research has been predominantly confined to traditional learning activities. Indeed, only very few studies (i.e., Garcia-Marquez & Bauer, 2021; Hakulinen & Auvinen, 2014) have investigated the impact of achievement goals during e-learning activities. In general, one may assume that the associations pointed out above generalize to the e-learning domain in terms of clean-cut main effects of achievement goals. However, we want to argue that focusing on a specific learning environment (here: solitary e-learning) instead of aggregating learning activity brings a new question to the table: how learners allocate their resources in terms of time and effort. Personal engagement in e-learning takes time, which could also be spent in more traditional learning arrangements such as collaborative student groups, dyadic rehearsal, text reading, or solitary remote learning.

Even if learners have strong mastery goals, their capacity to engage in different learning arrangements is limited and, as such, they have to choose among all possible options when distributing their learning time. This behavioral choice cannot be explained through the achievement goal framework alone. To solve this conundrum, we propose to integrate reflections on the impact of achievement goals on learning behavior into Expectancy-Value-Theory of achievement motivation (Eccles & Wigfield, 2020; Wigfield & Eccles, 2000), which is well suited to explain decision-making in achievement situations.

Missing links: goal striving and the expected instrumentality of e-learning for goal attainment

According to Expectancy-Value-Theory of achievement motivation, the choice between behavioral alternatives in learning situations is determined by the value of certain achievement outcomes as well as the expectancy of whether certain behaviors are instrumental in facilitating these aspired outcomes (Eccles & Wigfield, 2020; Wigfield & Eccles, 2000). Achievement goals mostly carry information on aspired outcomes but not on expectancies regarding the instrumentality of certain behaviors (Daumiller et al., 2020; Dresel & Hall, 2013). Expectancies, however, are of fundamental importance for the impact of values on behavior. They are deemed so crucial that expectancy-value theory originally considered both terms to be interlocked in a multiplicative rather than a summative fashion (Nagengast et al., 2011). This implies that values and expectancies cannot fully compensate for each other. In other words, if a certain outcome such as true understanding has high value (i.e., strong mastery approach goals) but the individual has low expectancy of attaining that outcome through engagement in a certain behavior, the individual will disengage from this behavior despite the high value.
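To make the multiplicative logic concrete, consider a simplified formalization (our notation, not a formula taken from the cited works): motivation M to engage in a given behavior is modeled as M = E × V, where E is the expectancy that the behavior leads to the aspired outcome and V is the value of that outcome. If E = 0, then M = 0 no matter how large V becomes, whereas an additive model of the form M = E + V would allow a high value to fully compensate for a missing expectancy.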

In the context of e-learning, this implies that even students with strong mastery goals might not engage in e-learning to any great extent if they feel that e-learning is not instrumental in developing their own competencies. Conversely, the expectancy that e-learning can help reduce workload could even diminish the negative effects of work avoidance goals on engagement with (e-)learning activities. The reasoning behind the latter interaction is that if individuals perceive the use of intelligent tutoring systems as easy and time-saving compared to more effortful activities, intelligent tutoring systems might be deemed valuable for the attainment of work avoidance goals.

To clarify, these elaborations on the importance of expectancies are not meant to imply that the main effects of achievement goals on e-learning behavior are fully moderated. This is for two reasons: First, it is unlikely that the expectancy that engaging in e-learning serves goal attainment will be deemed exactly zero. More realistically, the expectancy will be deemed higher or lower relative to other behaviors, which could be engaging in different learning activities in the case of mastery goals or abstaining from learning altogether in the case of work avoidance goals. Second, as already pointed out, achievement goals influence the net amount of time and effort that individuals are willing to invest in their overall learning process. Mastery goals are associated with deep long-term engagement (Diseth & Kobbeltvedt, 2010; Liem et al., 2008) and work avoidance goals are associated with disengagement (King, 2014; King & McInerney, 2014). Because higher net learning time and effort enables individuals to divide more time among different learning activities, main effects of achievement goals should remain visible even when we consider the depicted interactions with expectancies.

Taken together, the existing literature provides evidence for complex relationships between achievement goals, expected instrumentality and e-learning behavior. We summarize our conceptual model based on these considerations in Fig. 1.

Fig. 1 Conceptual model of the interplay of achievement goals, learning behavior and exam success

Research questions

At this point, it is largely unclear what motivates students to engage in e-learning behavior and, in particular, how different goals may impact the effectiveness of this behavior. Based on the described literature on beneficial learning patterns, we investigate whether such patterns within an intelligent tutoring system predict exam success (RQ1). It is reasonable to assume that deep processing (i.e., learning time that is distributed over a large time span; Soderstrom & Bjork, 2015) as well as low tendencies to delay learning activities (Steel, 2007) facilitate optimal learning. For this reason, we assume higher exam success to be predicted by higher overall learning time, more distributed learning, and less delayed learning.

Our second research question is how achievement goals shape learning behavior in an intelligent tutoring system and impact exam performance (RQ2). Considering the existing literature, we assume that mastery approach goals motivate learners to use intelligent tutoring systems in a way that facilitates deeper learning (more invested learning time that is distributed rather than massed, and less delayed), predicting higher exam success. In contrast, we assume that performance avoidance and work avoidance goals are associated with patterns of surface learning (massed and more delayed learning) and that work avoidance goals also relate to an overall reduced amount of learning time, predicting lower exam success. The literature on the impact of performance approach goals on patterns of learning is less clear. Here, we mostly assume that such goals lead to more massed learning, as they focus students on short-term benefits rather than on the positive effects that spaced learning has on long-term learning (Hopkins et al., 2016). However, these short-term benefits might still relate to higher exam success. Moreover, we were interested in examining whether achievement goals could explain incremental variance in exam success beyond the prediction of learning behavior and vice versa (RQ3).

Besides these main effects of achievement goals, we assume that the actual use of intelligent tutoring systems also depends on whether individuals think that using such systems assists them in achieving their respective goals. This expected instrumentality of intelligent tutoring systems for goal striving is assumed to moderate the postulated main effects of achievement goal content (RQ4). Finally, we also assume that achievement goals are indirectly tied to later performance through their association with patterns of learning (RQ5).

On a more exploratory note, we differentiate both mastery goals in terms of task goals (focus on mastering the study content) versus learning goals (focus on competence enhancement) as well as performance goals in terms of normative goals (focus on outperforming others) and appearance goals (focus on competence demonstration). We had no a priori differential hypotheses for the different classes of mastery goals or performance goals, respectively. The same is true for task avoidance goals, which we also assessed and investigated. All our differential hypotheses have been preregistered under https://aspredicted.org/4SY_B4Q (Footnote 1).

Method

Sample & design

We conducted a field study using self-reported achievement goals and expectancy beliefs, learning data obtained from a digital learning system, and exam performance. We assessed data from 91 German university students (83.5% female, 15.4% male, 1.1% diverse) who used an intelligent tutoring system to prepare for a statistics exam in a psychology undergraduate course. Participants were users of the intelligent tutoring system (Siebert & Janson, 2018). This web-based software provides practice exercises with corrective feedback (Naujoks et al., 2022; Roediger & Karpicke, 2006a, b). The software has two main features: variability and adaptivity. Variability refers to the software generating variable arithmetic problems (based on random values) as well as the fact that it randomly selects phonetically similar but inverted answer options for multiple choice questions. This way, learners cannot rely on recognizing exercises to repeat earlier successes but rather have to understand the underlying concepts of the exercises. This is coupled with the adaptivity of the software, which facilitates adaptive testing based on learners’ likelihood of correctly solving exercises. The overall progress by means of repeated success is displayed to learners in the form of an aggregated learning index. On average, participants spent 19.66 h with the learning software (SD = 11.5). At the onset of learning with the intelligent tutoring system, we collected the self-reports on achievement goals and expectancy beliefs (in this order). Exam performance was matched to these data, upon informed consent, after the end of the semester.

Measures

Achievement goal orientations

To measure interindividual differences in achievement goal orientations, we used a questionnaire developed by Daumiller and colleagues (2019). The scale differentiates between mastery and performance goals as well as approach and avoidance dimensions. Mastery goals are further differentiated into task goals (“I would like to complete the individual requirements very well”) and learning goals (“I would like to constantly improve my skills”). Performance goals are differentiated into appearance goals (“I would like people to notice how good I am”) and normative goals (“I would like to be better than my fellow students”). We did not include the subscales for learning avoidance goals and relational goals, as we had no assumptions regarding their associations with learning behavior or exam performance. To further reduce the length of the initial assessment battery, we measured every facet with only three of the four items, leaving out the last item of each scale. We asked participants to indicate to what extent the statements applied to them and the subject they were learning with the software (statistics) on a scale with the endpoints “not agree at all” (1) and “fully agree” (7). We observed internal consistencies ranging from α = .79 to .93 for the different scales.
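For reference, such reliability coefficients can be reproduced from raw responses with a few lines of code. The following is a minimal Python sketch with made-up data (not the pipeline actually used for the reported coefficients):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items response matrix."""
    k = items.shape[1]                            # number of items
    item_var = items.var(axis=0, ddof=1).sum()    # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the sum scores
    return k / (k - 1) * (1 - item_var / total_var)

# Example: a 3-item subscale answered by five respondents on a 1-7 scale
responses = np.array([[6, 7, 6],
                      [4, 5, 5],
                      [2, 3, 2],
                      [7, 7, 6],
                      [5, 4, 5]])
print(round(cronbach_alpha(responses), 2))
```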

Expectancy beliefs on instrumentality

We adapted the items used to measure achievement goals (Daumiller et al., 2019) to assess the expected instrumentality of the intelligent tutoring system for achievement goal striving. More specifically, we asked the participants to assess whether the intelligent tutoring system would be helpful for attaining the different achievement goals. We collected participants’ answers on a scale with the endpoints “[Name of the intelligent tutoring system] will not be helpful to achieve this goal” (1) and “[Name of the intelligent tutoring system] will be very helpful to achieve this goal” (7). It is noteworthy that the assessment of expectancy beliefs did not depend on whether participants endorsed the respective goals: Participants were asked to provide their assessment regardless of whether they themselves strived for the respective goals. The measures reached internal consistencies ranging from α = .87 to .97 depending on the subscale.

Learning behavior

Out of the log files of the software, we computed three indices meant to characterize learning behavior: (1) Overall learning time was measured as the total time that participants spent engaging in e-learning activities with the software from acquiring it until the exam. It is important to note that we computed learning time as the difference between the time stamps of the first and last activity in the software during a learning session. Learning sessions were automatically terminated when users were inactive for longer than 20 min. Hence, the index does not incorporate longer periods of time without actual learning activity. (2) Distribution of learning was measured using the standard deviation of the time spent on learning activities each day. We aggregated learning time per day over the number of days on which the participants could have used the software after their initial learning onset and computed the standard deviation of this time for every participant. For better comprehensibility, we inverted this measure so that it reflects distributed learning, as such behavior is characterized by lower standard deviations of daily learning time than massed learning. (3) Learning delay was indicated by the number of days left until the exam when participants reached 50% of their cumulated individual learning activities. Less time left indicated higher learning delay (i.e., a later onset of the majority of learning activities). Hence, we also inverted this measure for better comprehensibility. Contrary to our preregistration, we did not aggregate the log files at the level of weeks but rather at the daily level. By aggregating learning activities per day, we obtained more variance in the indicators and a better understanding of differences in learning behavior.
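To illustrate how such indices can be derived from raw log data, the following Python sketch computes all three measures from one learner’s activity time stamps. Function and variable names are ours, and two details are assumptions on our part: gaps are attributed to the calendar day of the later event, and the 50% criterion is based on cumulative learning time (the software’s actual preprocessing may differ, e.g., by counting activities instead):

```python
import pandas as pd

SESSION_TIMEOUT = pd.Timedelta(minutes=20)  # sessions end after 20 min of inactivity

def learning_indices(timestamps, exam_date: pd.Timestamp) -> dict:
    """Compute the three behavioral indices from one learner's raw
    activity time stamps (any iterable of datetimes)."""
    ts = pd.to_datetime(pd.Series(list(timestamps))).sort_values(ignore_index=True)
    gaps = ts.diff()
    # (1) Overall learning time: sum of within-session gaps; gaps longer
    # than the timeout start a new session and are not counted.
    active = gaps.where(gaps <= SESSION_TIMEOUT, pd.Timedelta(0))
    total_hours = active.sum().total_seconds() / 3600

    # (2) Distribution of learning: SD of daily learning time across all
    # days from learning onset to the exam (inactive days count as zero),
    # inverted so that higher values reflect more distributed learning.
    per_day = active.groupby(ts.dt.date).sum().dt.total_seconds() / 3600
    all_days = pd.date_range(ts.iloc[0].date(), exam_date.date(), freq="D").date
    per_day = per_day.reindex(all_days, fill_value=0.0)
    distributed = -per_day.std()

    # (3) Learning delay: days left until the exam when 50% of the
    # cumulative learning time is reached, inverted so that higher
    # values reflect more delayed learning.
    cumulative = per_day.cumsum()
    half_point = cumulative[cumulative >= cumulative.iloc[-1] / 2].index[0]
    delay = -(exam_date.date() - half_point).days

    return {"learning_time": total_hours,
            "distributed_learning": distributed,
            "learning_delay": delay}
```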

Analyses

We computed latent structural equation models using Mplus Version 8.6 (Muthén & Muthén, 1998–2017) to test our hypotheses on the associations between achievement goals, expectancies, learning behavior and exam performance. In these models, achievement goals and expected instrumentality were estimated as latent constructs, whereas learning behavior and exam performance were included as manifest scores. We used the maximum likelihood estimator with robust standard errors (MLR) to estimate the models and the full information maximum likelihood procedure (FIML) to handle missing data (see Fig. 2).
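The concrete Mplus input is not reproduced here. As a rough open-source analogue, a single base model (without the latent interaction term) could be specified in Python with the semopy package; indicator and column names (g1–g3, i1–i3, learning_study.csv) are hypothetical, and this sketch uses semopy’s default estimation, so it does not replicate the MLR standard errors or the FIML missing-data handling described above:

```python
# pip install semopy
import pandas as pd
import semopy

# One achievement goal and its perceived instrumentality as latent factors,
# the three behavior indices and exam performance as manifest variables.
DESC = """
goal  =~ g1 + g2 + g3
instr =~ i1 + i2 + i3
learning_time ~ goal + instr
distributed   ~ goal + instr
delay         ~ goal + instr
exam ~ goal + instr + learning_time + distributed + delay
"""

data = pd.read_csv("learning_study.csv")  # hypothetical data file
model = semopy.Model(DESC)
model.fit(data)
print(model.inspect())            # parameter estimates
print(semopy.calc_stats(model))   # chi-square, CFI, RMSEA, and further indices
```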

Fig. 2 Structural equation model for the associations of achievement goals, learning behavior and exam success. Note. Upper panel: l = learning achievement goals, t = task approach goals, a = appearance approach goals, n = normative approach goals. Lower panel: w = work avoidance goals, t = task avoidance goals, a = appearance avoidance goals, n = normative avoidance goals. *p < .1, **p < .05, ***p < .01

As a first step, we ran a basic model for each achievement goal (eight models in total) that estimated main effects of the achievement goal and the expected instrumentality on the proposed learning behavior and exam performance. Within these models, we estimated direct effects of achievement goals and expectancies on learning behavior and exam performance (RQ2-3) as well as direct effects of learning behavior on exam performance (RQ1). We also calculated indirect effects using bootstrapping to investigate whether motivation was related to performance through learning behavior (RQ5). The fit of these models was evaluated according to the recommendations by Hu and Bentler (1999). As such, we used a combination of misfit indices (SRMR, RMSEA) and a goodness-of-fit index (CFI) to distinguish between an acceptable model fit (SRMR ≤ 0.10, RMSEA ≤ 0.08, CFI ≥ 0.90) and a good model fit (SRMR ≤ 0.05, RMSEA ≤ 0.05, CFI ≥ 0.95). The reported model fit was also used as an approximation of the trustworthiness of the subsequent moderation models (see below).
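The cutoff logic described above can be condensed into a small helper; the thresholds are taken directly from the criteria stated in this paragraph:

```python
def classify_fit(srmr: float, rmsea: float, cfi: float) -> str:
    """Classify model fit using the Hu and Bentler (1999)-based
    cutoffs adopted in this study."""
    if srmr <= 0.05 and rmsea <= 0.05 and cfi >= 0.95:
        return "good"
    if srmr <= 0.10 and rmsea <= 0.08 and cfi >= 0.90:
        return "acceptable"
    return "insufficient"

# Example: a model with SRMR = 0.05, RMSEA = 0.14, CFI = 0.88
print(classify_fit(0.05, 0.14, 0.88))  # -> "insufficient"
```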

In a second step, we included latent interaction terms to model the hypothesized moderation effect of expected instrumentality on the association between achievement goals and learning behavior as well as exam performance (RQ4). Of note, Mplus does not provide sufficient information on the goodness of fit of models including latent interaction terms (Marsh et al., 2012), which is why we had to rely on the information derived from the base models to evaluate the overall fit.
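semopy offers no direct equivalent of Mplus’ latent moderated structural equations. As an illustration of how the moderation step could be approximated in the same framework, the following sketch uses a product-indicator approach, which is a different technique from the one employed in the original analyses; all column names are again hypothetical:

```python
import pandas as pd
import semopy

data = pd.read_csv("learning_study.csv")  # hypothetical data file

# Build matched product indicators from mean-centered items so that a
# latent interaction factor (goal x instrumentality) can be specified.
for j in (1, 2, 3):
    g = data[f"g{j}"] - data[f"g{j}"].mean()
    i = data[f"i{j}"] - data[f"i{j}"].mean()
    data[f"gxi{j}"] = g * i

INTERACTION_DESC = """
goal  =~ g1 + g2 + g3
instr =~ i1 + i2 + i3
goal_x_instr =~ gxi1 + gxi2 + gxi3
learning_time ~ goal + instr + goal_x_instr
distributed   ~ goal + instr + goal_x_instr
delay         ~ goal + instr + goal_x_instr
exam ~ goal + instr + goal_x_instr + learning_time + distributed + delay
"""

model = semopy.Model(INTERACTION_DESC)
model.fit(data)
print(model.inspect())  # significant goal_x_instr paths indicate moderation
```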

Results

The descriptive statistics, internal consistencies, and zero-order correlations of the achievement goals, learning behavior, and exam performance can be seen in Table 1. A complete table of zero-order correlations including the expectancy beliefs is included in the electronic supplement. It should be noted that we observed positive and often substantial intercorrelations between participants’ achievement goals and their perceived instrumentality of the intelligent tutoring system for the respective goal (r = .16–.68). Yet the constructs were distinct enough (mean shared variance = 26%) to indicate that they measured different aspects of achievement motivation. For exam performance, we used the raw exam points; participants on average achieved 170 points, with considerable variation (SD = 32.18). About 51% of the sample did not give consent to match exam performance. Also, we observed that the learning index of the software correlated with exam success, r = .38, p = .01.

Table 1 Descriptive statistics of achievement goals and learning parameters

Main analyses

For each achievement goal, we ran a basic model including the main effects of the achievement goal and the instrumentality beliefs on the proposed learning behavior and exam performance, as well as a moderation model including the interaction term; the corresponding fit measures are presented in Table 2. We found that only the model for task avoidance goals (SRMR = 0.05, RMSEA = 0.14, CFI = 0.88) failed to reach the predefined cutoff values on two of the three inspected model fit indices. All other models reached the critical values of CFI > 0.90 and SRMR < 0.10. It is noteworthy that three models (appearance approach goals, normative avoidance goals, work avoidance goals) exceeded the threshold of RMSEA < 0.08. Yet, we have to consider that weak deviations in the RMSEA (< 0.14 for all models) are not sufficient to detect model misfit, as this index overrejects models particularly in small samples (Hu & Bentler, 1999). In general, we can thus conclude that our models fitted the data well but that the results from the model on task avoidance goals have to be considered with caution. A table including the exact values of the fit indices for all eight models can be found in the electronic supplement.

Table 2 Model fit indices of the structural equation models

The results of our moderated structural equation models are displayed in Fig. 2; the indices and p-values of all models are presented in the electronic supplement. We found no significant associations between achievement goals and exam performance (RQ2), except for one significant interaction term: higher work avoidance goals combined with higher perceived instrumentality of the software for work avoidance predicted lower exam performance, β = -0.28, p = .014. Out of the proposed direct effects of the achievement goals on learning behavior (RQ2), we only identified such associations for task approach goals (positive association with learning time; β = 0.26, p = .032) and appearance avoidance goals (negative association with distributed learning; β = -0.27, p = .012). Higher task approach goals predicted higher learning time when taking the interaction with the instrumentality beliefs into account, β = 0.26, p = .032 (base model tendency: β = 0.17, p = .124). Further, the significant interaction, β = 0.18, p = .001, indicated that the association was stronger with higher expected instrumentality (RQ4). We also found a significant interaction for task approach goals on distributed learning, β = 0.08, p = .035, indicating stronger associations of task approach goals with distributed learning given stronger expected instrumentality (RQ4). For higher appearance avoidance goals, we found a significant negative association with distributed learning, β = -0.27, p = .012, which was not moderated (interaction: β < 0.01, p = .964).

Regarding associations between e-learning behavior and exam performance (RQ1), we found that learning delay negatively predicted exam success in all models, with effect sizes ranging from β = -0.41 to β = -0.49 (all p < .005). In contrast, we found no direct effects of total learning time or distribution of learning time (all p > .136). As we did not find any associations between achievement goals and learning delay (the sole predictor of exam performance), there was generally no foundation for potential indirect effects of achievement goals on performance via learning behavior (RQ5). This is why we chose to report findings derived from base models that did not factor in indirect effects. Yet additional analyses using bootstrapping to calculate indirect effects strongly supported our notion that such effects were not present in our data set (all p > .354; estimates and models are included in the electronic supplement) (see Table 3).

Table 3 Results of structural equation models of achievement goals and instrumentality beliefs on learning behavior and exam success

Discussion

The goal of the present preregistered research was to bridge the gap between achievement goal theory, self-regulation in digital learning environments, and their impact on exam performance. This was done with the aim of shedding light on the complex differences in e-learning behavior beyond the mere predictive association between performance within such tools and later exam performance (Baker et al., 2020). We found an association between learning delay and exam success, partly supporting our assumption that beneficial learning patterns are linked with better performance (RQ1). We also aimed to show that the association of achievement goals and learning patterns with learning outcomes (RQ1-3) can be modeled as a mediation (RQ5). However, we did not find evidence for such a mediation model. Moreover, we analyzed whether the predictive power of achievement goals depends on the degree to which individuals deem engagement with the respective digital tool to be instrumental for their goal pursuit (RQ4). The conducted analyses did not yield sufficient evidence to support this idea in its full breadth. Yet our findings provide further insights into how achievement goals, expectancies, and learning intersect. We follow the conceptual model (see Fig. 1) in the discussion of the results.

The role of achievement goals and expected instrumentality for learning behavior

Considering main effects, we did not find any associations of achievement goals with exam performance (RQ2), nor respective indirect effects via the observed learning behavior (RQ5). However, our findings indicate that (some) achievement goals may shape the way individuals engage in learning within intelligent tutoring systems. In particular, we found empirical evidence for task approach goals (i.e., the aim to fulfill the requirements of the course well) predicting total learning time. This may indicate that strong task goals indeed facilitate the urge to use provided digital learning tools diligently. Interestingly, we found no respective effect for learning approach goals (i.e., the aim to learn as much as possible), which once again underlines that these two components of mastery goals do not facilitate the same effects (see also Korn et al., 2019). In particular, individuals with strong task (approach) goals may consider the use of the intelligent tutoring system as part of the course requirements, whereas individuals with strong learning goals may not tie their learning efforts to such considerations. From a practical point of view, this might indicate that learners see advantages in intelligent tutoring systems for consolidating knowledge in terms of practice testing but might not see further advantages for acquiring new knowledge (Adesope et al., 2017). This idea is somewhat echoed in the fact that learners descriptively reported the highest instrumentality of intelligent tutoring systems for task approach and task avoidance goals compared to all other goal classes. From a practical standpoint, this could mean that fostering task goals is the most promising avenue to foster the usage of intelligent tutoring systems (see below).

The only other main effect of achievement goals that we found was that appearance avoidance goals (i.e., the aim to not appear incompetent) were associated with more massed learning. Here, we observe a detrimental effect of this goal class on adaptive learning behavior, which could be driven by learners feeling the urge to accumulate learning hours in the days before the exam out of fear of personal disgrace. Once again, we see the value of differentiating the performance goal construct, given that normative avoidance goals (i.e., the aim to not be outperformed), as the other facet of performance avoidance, did not significantly predict the distribution of learning. This differentiation seems less crucial for appearance approach and normative approach goals, which both showed no effect on the distribution of learning, with comparable effect sizes.

While these main effects are certainly of further interest, our main hypothesis focused on the idea that expectancy beliefs – more concretely, the expected instrumentality of learning with the intelligent tutoring system – would shape the learning behavior within the digital environment. This integration of achievement goals into the framework of expectancy-value theory (Eccles & Wigfield, 2020; Wigfield & Eccles, 2000) is meant to take into account that individuals’ beliefs about success explain when and how personal values (in our case, achievement goals) shape achievement-motivated behavior. While achievement goal theory provides a framework for why students might engage in or disengage from e-learning activities, considerations about the instrumentality of a given behavior for goal attainment might as such explain whether students actually engage in that goal-directed behavior.

We postulated moderator effects for all achievement goals beforehand (RQ4). Yet, we only found selected evidence for the necessity of this theoretical integration within our study. In particular, the previously noted association between task approach goals and learning time was more pronounced when the intelligent tutoring system was deemed instrumental for the aim to fulfill the course requirements. This strengthens the idea that individuals with these goals carefully evaluate which e-learning activities are actually bound to the tasks at hand and which are not. In the present case, learners differed in the degree to which they evaluated the intelligent tutoring system as instrumental for fulfilling the task (of preparing for the exam), which partly shaped whether their task approach eventually translated into higher learning engagement. Furthermore, we found no main effect of normative approach goals on distributed learning. However, we did find that these goals were positively associated with this outcome given high expected instrumentality of using this tool to outperform others. This effect is less straightforward to interpret than the moderation effect for task approach goals. One potential explanation could be that learners who approach competition actively search for any tool that gives them an edge over other students, which makes them engage with such instruments earlier. In the present case, the intelligent tutoring system provides feedback on an aggregated level through a “learning index” displayed to the learners. Some learners might use this information to compare their learning progress with others. Nevertheless, as for all observed effects, it is of utmost importance to replicate the effect in further studies before emphasizing its implications too strongly.

What remains from our study is that – in the absence of main effects – the expected instrumentality indeed has, at least in some cases, implications for the associations between achievement goals and learning behavior within intelligent tutoring systems. While the presented findings may seem like piecemeal engineering at first glance, they become more impressive when considering (a) that we conducted a study in the field, which increases the practical relevance of the effects, (b) that our power for finding any effects was limited given the rather small sample size, (c) that we focused on actual learning data instead of self-reported behavior, and (d) that our participants had numerous learning strategies at their disposal that competed with the use of the tutoring system. Overall, we see first evidence that our working model of motivated e-learning might be fruitful in explaining behavior, particularly in situations where individuals have to choose where and how to invest their learning time.

Learning patterns within intelligent tutoring systems and exam success

Outside of our main research question, our findings also allow for some further inspection of how the quality of learning with intelligent tutoring systems translates into actual achievement. This is important given that the overall predictive power of online practice testing (Naujoks et al., 2022; Roediger & Karpicke, 2006a, b; Schwerter et al., 2022) and intelligent tutoring systems (Mousavinasab et al., 2021; VanLehn, 2011) may be well documented in the literature, but it remains unclear which kind of usage is most beneficial (Baker et al., 2020). Inquiries into this question call for (1) the use of objective behavioral data collected in (2) realistic learning scenarios that cover (3) broad periods of time. Our study aligns well with these requirements and as such allows a deeper view into how the differential usage of our software was associated with actual achievement (RQ1).

In particular, we found evidence that learning delay, but not overall learning time or distribution, predicted exam success. Hence, we conclude that starting to engage with intelligent tutoring systems at a late point in time (i.e., just before the exam) may be considered a maladaptive learning pattern. Such behavior possibly prevents the user from thoroughly understanding the features of the tool and the system from unfolding the positive effects of adaptive testing, which relies on longer periods of usage. This is well in line with a body of literature linking procrastination with impaired academic achievement (Klingsieck, 2013; Steel, 2007).

It is interesting to note that learning delay, but not distribution of learning time, was connected to exam success. With our specific assessment of both indicators, we were able to disentangle these effects from each other. While one might have assumed that more delayed learning automatically leads to less distributed learning, we only observed a small association between the two, which underlines the importance of disentangling both constructs. In sum, we may conclude that neither the distribution of learning time nor the total amount of time spent with the system is of utmost importance for the respective tutoring system to unfold its effects on educational attainment. It is of greater consequence that users reach familiarity with the system at an early point in time that is not too close to the exam itself. This notion as well as our findings on the moderated impact of task approach goals have direct implications for the implementation of intelligent tutoring systems.

While the overall predictive power of online practice testing (Naujoks et al., 2022; Roediger & Karpicke, 2006a, b; Schwerter et al., 2022) and intelligent tutoring systems (Mousavinasab et al., 2021; VanLehn, 2011) is well documented in the literature, the present research followed the current call for a closer look at why such tools are actually effective (Baker et al., 2020). Here, we find that the general effectiveness of the investigated tutoring system was not bound to the mere quantity of self-exposure to the tool (i.e., learning time). The timing of learning (early versus delayed) seems to be of greater importance. With this finding in mind, we think it is important that further research expands on how we can most accurately describe optimal e-learning behavior and which objective behavioral data should be used to operationalize the respective behavioral indicators.

Practical implications

With our present research, we followed the call for a better understanding of the usage of digital learning systems like intelligent tutoring systems and its dependency on motivational variables (Azevedo et al., 2011; Baker et al., 2020; Winters et al., 2008). This is a necessary step, as observing the relative effectiveness of intelligent tutoring systems (Kulik & Fletcher, 2016; Ma et al., 2014) is not the same as truly comprehending how individuals use them and which kind of usage is most effective. Knowledge about the meaningfulness of different learning patterns within such tools and the underlying goals of learners is important for practitioners who want to improve the effectiveness of such tools when they are added to educational environments. When it comes to such practical implications, we want to underline three main takeaways, which are in our view relevant for the implementation of intelligent tutoring systems.

First, even if it is not the focal research goal of this particular study, our data show an overall predictive validity of intelligent tutoring systems for later exam success. While this is no experimental proof that such systems can improve exam performance, it highlights the opportunities such systems offer learners: they enable learners to monitor and control their self-regulated learning activities by providing valid feedback on their learning progress. Hence, we can recommend implementing such tools in learning settings as additional self-regulated learning opportunities.

Second, while we did not find that achievement goals were associated with achievement through the usage of intelligent tutoring systems, we did find that achievement goals predicted the way users interacted with such tools. In particular, individuals who strongly strived to master the task at hand and who were convinced that intelligent tutoring systems were instrumental for that task were more likely to distribute their learning time evenly over the learning period. This in itself did not seem to be beneficial for performance in a summative exam but might hold further value, as research has established a strong association between spaced distribution of learning time and long-term learning (Kulik & Fletcher, 2016; Ma et al., 2014; Soderstrom & Bjork, 2015). It also allows for optimal adaptive testing. From a practical perspective, we have to keep in mind that convincing learners that mastering the material at hand is important does not necessarily yield maximum effects on the optimal usage of learning platforms. Rather, it seems important to advise educational practitioners to highlight the benefits of learning platforms for achieving such task goals. This could be achieved, for example, by introducing learners to empirical research demonstrating the benefits of intelligent tutoring systems for mastering course material.

Third, we found that postponing the start of one’s learning activities with intelligent tutoring systems is associated with lower exam performance. While this finding is not central to a deeper understanding of the intricate interplay of achievement motivation, it has central implications for educational practitioners who want to introduce their learners to intelligent tutoring systems. In particular, they can inform their learners about our finding and as such warn them that they will only reap optimal benefits from intelligent tutoring systems if they start using them right away.

Taken together, intelligent tutoring systems can be a valuable asset to facilitate learning within (higher) education settings. However, their impact may depend on when and how learners use these powerful tools. Educational practitioners are well advised to educate learners about the potential caveats of postponing learning activities. While intelligent tutoring systems can support self-regulated learning activities, they do not compensate for a general lack of self-regulation (Winters et al., 2008). Highlighting the instrumentality of intelligent tutoring systems in mastering the learning material could be a promising avenue to boost the adaptive usage of these systems.

Limitations and future research

The present field study yields high ecological validity. By observing actual learning behavior in an intelligent tutoring system and using exam performance as an objective distal outcome, we tested our propositions under real-world conditions. On the other hand, we encountered several limitations, which may also explain non-significant findings. In general, high ecological validity often comes at a cost to internal validity (i.e., the ability to draw causal inferences), particularly within field studies. Some of these potential threats to internal validity could also apply to our study. For instance, the missing data on exam performance – due to a lack of consent by some participants to use these data – was addressed using full information maximum likelihood estimation in our models. However, missing data, especially if not missing at random, is a statistical problem that might lead to biased results. This problem is difficult to rule out in field research but could be somewhat resolved using data from laboratory studies that include a learning period as well as subsequent testing. In such a design, missing data on achievement will likely be less prevalent and, in particular, less systematic. Such research may complement the conducted field study.

Furthermore, we cannot rule out that events outside of the intelligent tutoring system impaired our ability to find meaningful associations between the investigated predictors and performance as a criterion. As learning in the field does not take place in a vacuum and individuals rather use multiple opportunities to learn, we cannot comprehensively model how functional or dysfunctional the use of the software was in the context of the overall learning activity. Even though the design is generally characterized by high ecological validity, some threats to generalizability remain. This is particularly due to the fact that we conducted our research within a sample of psychology undergraduate students. For example, we observed lower work avoidance tendencies within our sample compared to the other achievement goals.

Despite the significant moderation effect for work avoidance, we did not find any direct association between achievement goals and exam performance. Furthermore, the present moderated association with work avoidance goals is challenging to interpret, as it implies a stronger negative association between work avoidance goals and exam performance when the perceived instrumentality of the software for achieving work avoidance goals is high. However, the moderation effect on exam performance was not accompanied by changes in observed learning behavior, making this finding even more difficult to interpret. One might speculate that individuals who think that using the learning software provides a reasonable strategy to pass the exam with low effort reduced other learning activities such as restudying the lecture material or participating in learning groups.

Although the observation of learning behavior was objective, it was not holistic, and some assumptions about learning had to be made. We implied that delayed learning with the software can be connected to the literature on procrastination. Procrastination is defined as “the voluntary, irrational postponement of an intended course of action despite the knowledge that this delay will come at a cost to or have negative effects on the individual” (Simpson & Pychyl, 2009, p. 906). However, later learning with the software might not be irrational, as using practice exercises later in the semester might be considered a valid learning strategy. This may be particularly true if using the tutoring system was accompanied by other means of elaborating on the subject matter (such as group-based learning; Gregory & Thorley, 2013). In such situations, a late use of the intelligent tutoring system may not be as harmful to individuals who have engaged with the learning material in other ways and see the platform merely as a way to repeat and exercise at a late stage of their learning process. Yet we cannot help but notice that our measure of delayed learning was rather substantially associated with lower exam performance, which at least seems to highlight that individuals who engaged with the platform early on benefited more from its usage.

Still, learning delay in terms of the time stamp at which 50% of the cumulative learning activities were reached, distributed learning operationalized as the deviation of learning activities over time, and overall learning engagement equated with time in the software are only one way to operationalize learning with intelligent tutoring systems. We consider these parameters to be a construct-valid operationalization of achievement-motivated behavior, but other parameters using the same raw data are possible, which is inherent in the richness of log data (Baker et al., 2020). For example, one could derive change points in learning behavior as an alternative measurement of procrastination (also see Baker et al., 2020). We suggest that further research should continue to capitalize on this richness while also critically investigating how associations between motivational antecedents and patterns of e-learning depend on the operationalization of those patterns of learning.

Furthermore, our research is but one puzzle piece in the effort to provide a more nuanced picture of the impact of achievement goals on (e-)learning. Here, we found that connecting achievement goal research and expectancy-value theory (Eccles & Wigfield, 2020; Wigfield & Eccles, 2000) can be fruitful, even as it poses a range of new questions. Without taking considerations about instrumentality beliefs into account, we would not have been able to identify all associations between achievement goals and learning behavior. Moreover, our research revealed differences in learners’ learning delay, which might be related to theoretical approaches incorporating delay. One such theory is temporal motivation theory (Steel, 2007), which especially addresses differences in achievement motivation based on the temporal proximity of deadlines (for a recent study, see Janson et al., 2024). Overall, we are convinced that our findings have to be placed in a larger research framework, which may help to elucidate under which conditions achievement goals unfold actual effects on learning behavior. In doing so, we might uncover new, yet unknown ways in which achievement goals impact learning and educational attainment.

Finally, it is important to note that we proposed a conceptual model including a mediation, which implies a causal direction from achievement goals and expected instrumentality via e-learning behavior to exam performance. However, we cannot draw causal inferences from our present study, as it included no experimental manipulation of the respective variables. We cannot rule out that the associations are based on other causal directions or potential third variables. Hence, experimental studies manipulating achievement goals and instrumentality beliefs could offer fruitful avenues for further research.

Conclusion

Digital learning environments, and especially intelligent tutoring systems, hold promise for improving educational settings. However, research is needed to explain how such tools are used by students. In our preregistered study, we provide empirical evidence that personal (achievement) goals and the expected instrumentality of intelligent tutoring systems for reaching those goals might be important drivers of e-learning behavior. Our findings advance our understanding of how motivation might impact the usage of intelligent tutoring systems. As such, we hope that the conducted study inspires further scholars to conduct empirical studies on how motivational variables affect learning within digital environments.

Data availability

Data can be made available upon request.

Notes

  1. Please note that we changed the order of the research questions. Also, we renamed “degrees of procrastination” as “learning delay”, as procrastination and strategic delay (Klingsieck et al., 2012) are hard to disentangle among learners.


Acknowledgements

We would like to thank our student assistant Emilia Zickermann and Julia Hilpert for their support in manuscript preparation.

Funding

Not applicable.

Author information


Contributions

Marc Philipp Janson and Stefan Janke contributed equally to the conceptualization and formal analysis of the study. Marc Philipp Janson acquired the data and prepared the original draft, which was revised by Stefan Janke.

Corresponding author

Correspondence to Marc Philipp Janson.

Ethics declarations

Competing interests

Marc Philipp Janson is the owner of the intelligent tutoring system used in the present study. The software is used for commercial purposes.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Janson, M.P., Janke, S. The influence of e-learning on exam performance and the role of achievement goals in shaping learning patterns. Int J Educ Technol High Educ 21, 56 (2024). https://doi.org/10.1186/s41239-024-00488-9
