Status quo of LA and AI-based solutions in further education
The business models are examined based on Teece’s (2010) main dimensions of value proposition, value delivery, and value capture, but the main research focus is on understanding how the 25 EdTech companies generate, analyze, visualize, or distribute data and to what extent they use LA- or AI-based solutions. All 25 EdTech companies offer digital services in the field of further education, covering a wide range of educational services: the spectrum ranges from the development and hosting of learning software to the creation of didactic concepts and the implementation of digital training programs. The following overview summarizes the value propositions of the 25 EdTech companies (Fig. 3).
Eight EdTech companies offer software as a service (SaaS) in the field of further education (e.g., e-learning, micro-learning, virtual and augmented reality developments, or game-based learning solutions). The SaaS solutions are developed in-house by the EdTech companies in cooperation with the customers themselves. Didactics as a service (DIaaS) refers to preparing the contents of digital teaching and learning elements for customers, such as companies or universities (Prifti et al., 2017). The EdTech providers thereby become the creators and authors of the digital content. With the dynamic development of the EdTech market, new professional fields are emerging for educators, who now act as authors and design digital teaching and learning units for schools, universities, and further education. In our sample, nine providers offer a combination of SaaS developments and DIaaS. One company offers DIaaS combined with its own online trainings. Three EdTech companies provide SaaS solutions together with online trainings, while three providers offer training only, in variations of e-learning, blended learning, and offline training. One EdTech company in the sample operates in the field of educational consulting services to develop and implement further education in companies.
The interviewed EdTech companies are united in that they offer digital educational services that allow teaching and learning behavior to be measured. Our data confirm the theoretical explanations in section “Artificial Intelligence in Further Education”. Measuring, collecting, analyzing, and interpreting data is an essential prerequisite for developing AI-based teaching and learning solutions. Figure 4 represents an adapted and extended form of the different data paths of EdTech companies in the field of further education. In addition, it shows the current status of the usage and implementation of LA by the EdTech companies. To the best of our knowledge, this is the first attempt to map the existing LA levels on the EdTech market. Although studies by Zawacki-Richter et al. (2019) and Viberg et al. (2018) have extensively analyzed existing research on LA and AIED, there is no clear differentiation regarding the degree to which LA and AI are implemented in teaching and learning processes.
Figure 4 outlines two ways in which EdTech companies use digital data in their business models. The first path characterizes low-data business models, that is, business models in which the data generated by digital teaching and learning solutions are transmitted directly to the customer. The EdTech company is not able to access the data; therefore, it cannot use them for the development of its value propositions. The EdTech company fulfills only the function of data routing and cannot implement LA. The generated data on teaching and learning behavior are stored on the server of the customer. In our research, eight EdTech companies reported being on this path. The collection and analysis of the generated learning and teaching data are not part of the business model or the offered services of these eight EdTech providers. The second path in Fig. 4 is characterized by the EdTech companies having access to the generated data on the users’ teaching and learning behavior (data generation). The generated data are collected by the EdTech companies and can be split into personalized user data and non-personalized user data, depending on the regulations of the client and/or country. Based on our empirical data and a study by Picciano (2014), we identified three different levels of LA in the field of further education (see Fig. 5).
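To make the data-collection step on this second path more concrete, the following minimal sketch illustrates how a generated learning event might be routed into a personalized or a non-personalized record, depending on client or country regulation. All field names, values, and the hashing step are hypothetical illustrations and are not drawn from any interviewed company.

```python
# Hypothetical sketch: routing a generated learning event into a personalized
# or a non-personalized record, depending on what the client/country allows.
import hashlib

def collect(event: dict, allow_personal_data: bool) -> dict:
    """Return the record the EdTech provider may store for this event."""
    if allow_personal_data:
        return {"kind": "personalized", **event}
    anonymized = dict(event)
    # replace the user identifier with a one-way hash and drop direct identifiers
    anonymized["user_id"] = hashlib.sha256(event["user_id"].encode()).hexdigest()[:12]
    anonymized.pop("email", None)
    return {"kind": "non_personalized", **anonymized}

raw_event = {"user_id": "emp-4711", "email": "jane@example.org",
             "module": "quiz_basics", "score": 0.8}
print(collect(raw_event, allow_personal_data=False))
```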
Figure 5 shows that, with increasing data collection and implementation of data in the optimization of teaching and learning solutions, the level of LA increases. In our sample, eight companies implement basic LA as part of their digital services. Two of them use Google Analytics, for example, as a tool to generate these basic statistics of learning and teaching behavior. The collection and analysis of user data through simple statistics, such as frequencies or mean values, represent the click behavior, usage behavior, or media choice of the user and are processed into visual graphs, tables, or bar charts. This can be compared to the beginnings of e-commerce in the early 2000s, when click and purchase behaviors were presented in simple statistical analyses.

At the second level, we observe that basic LA is already being used in combination with customized learning recommendations based on the click, usage, learning, or teaching behavior or the media choice. Interestingly, we could identify two different types of recommendations, namely, algorithm-based and human-based recommendations. The digital user of the educational online service receives algorithm-based recommendations on what to learn next and in which order. The software of the EdTech company analyzes the user data by creating patterns and classifications through algorithms and combines them with the digital learning content to improve, customize, and innovate the teaching and learning process based on the user’s individual needs. Three companies in the sample already offer this level of LA.

According to our analysis, basic LA is the prerequisite for the third level: self-learning software that enables digital educational services to establish an individual and adaptive learning system based on the teaching and learning behavior of the user through AI. The digital system/software optimizes and customizes the teaching and learning content without human intervention. It would also revolutionize the teaching process, as the content of lessons would be created automatically using algorithms. This step represents the disruptive innovation of teaching and learning through digitalization and remains, for now, a future scenario. Only two providers mentioned projects in which they tried to individualize and optimize the teaching and learning process in a virtual space by means of AI.
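The difference between the first two levels can be illustrated with a short, purely hypothetical sketch: level 1 reduces the collected usage data to simple descriptive statistics, while level 2 derives a recommendation from the same data. The event structure, thresholds, and recommendation rules are illustrative assumptions, not the implementation of any company in the sample.

```python
from collections import Counter
from statistics import mean

# Hypothetical clickstream of a single learner in a further-education course
events = [
    {"module": "video_intro", "minutes": 12, "completed": True},
    {"module": "quiz_basics", "minutes": 5,  "completed": False},
    {"module": "video_intro", "minutes": 8,  "completed": True},
    {"module": "reading_1",   "minutes": 20, "completed": True},
]

# Level 1 (basic LA): simple statistics such as frequencies and mean values
module_frequency = Counter(e["module"] for e in events)    # media/click choice
mean_session_minutes = mean(e["minutes"] for e in events)   # usage behavior
completion_rate = sum(e["completed"] for e in events) / len(events)

# Level 2: an algorithm-based recommendation derived from the same usage data
def next_recommendation() -> str:
    unfinished = [e["module"] for e in events if not e["completed"]]
    if unfinished:
        return f"repeat {unfinished[-1]}"
    if mean_session_minutes < 10:
        return "suggest shorter micro-learning units"
    return "proceed to the next module"

print(module_frequency, round(mean_session_minutes, 1),
      completion_rate, next_recommendation())
```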
Only one of the 25 EdTech companies (Case 25) offered all three levels of LA as a service, according to the needs of the client. For basic LA, they created dashboards for their customers in which the frequencies of online behavior are visualized. In a second step, these dashboards can be combined with algorithmic or human-based recommendations. The third level is currently being developed further and trained with neural networks. This LA as a service has become part of the value proposition of Case 25, but this EdTech company argues that companies need to set up learning ecosystems in the course of training so that employees can learn individually “just in time” and “just in case.”
In summary, 14 companies in the sample already use one of these three levels of LA as a service in their business models, while 11 providers indicated that LA is currently not part of their business model. The empirical study shows that the area of LA, as well as that of AI-based learning, is still in its infancy in the field of further education in the German-speaking countries. From the business model perspective, the classification of Hilbig et al. (2018) was used to identify that the sample comprises two pure data-driven business models, 22 data-enhanced business models, and one low-data business model (Interview Case 15). Therefore, we see the future of innovative teaching and learning methods in the implementation of LA combined with adaptive learning and self-learning systems (AI-based).
Drivers and barriers of LA- and AI-based solutions in further education
LA- and AI-based solutions in higher education
LA- and AI-based teaching and learning solutions have the potential to change education drastically. Following Buschbacher (2019), such new technologies are always caught between enthusiasm and rejection. Some see them as long-sought solutions to existing and future challenges, whereas others view them as a further step toward incapacitation. Both perceptions are equally exaggerated and harmful, as they obscure the actual benefits and side effects of the solutions. In the second section, we already pointed out that there are currently only a few LA- and AI-based teaching and learning solutions on the market. Their application is often explored only theoretically and not implemented in practice. Below, we first summarize the best-known cases that have successfully implemented such solutions in the education sector. Subsequently, drivers and barriers are presented, derived from our empirical analysis in combination with a discourse analysis.
As described in the previous section, different levels of LA can be distinguished depending on data collection and usage. To develop intelligent, adaptive, and personalized learning systems, large amounts of data on learners’ behavior and habits need to be collected and analyzed (Holmes et al., 2019). Although AI technology has been the focus of scientific interest for more than 60 years, practical applications in education have only recently been advanced (Russell & Norvig, 2010). Technology giants like Amazon and Google have invested in promising AI systems that will have a lasting impact on and change teaching and learning behavior. For example, data-based business models like Carnegie Learning, Knewton, and Bettermarks can be observed in the emerging market (Renz et al., 2020). These companies develop AI-based solutions for education and further education based on teaching and learning data. Knewton, for example, collects user data and establishes links between the learning behaviors of individual learners. Based on this, learning types or success prognoses can be derived. In the next step, complex algorithms define individual learning packages on this data basis, with ongoing adaptation of the contents and speed (Renz et al., 2020). Software solutions like Knewton seem to eliminate a previously unsolvable tension: access to education for all, combined with individually designed curricula (Dräger & Müller-Eiselt, 2017). Currently, universities increasingly rely on such algorithm-based solutions to support learning success, curricula, and the study process per se. One of these successful projects is based at Deakin University in Australia, which integrates IBM’s supercomputer, Watson, to provide feedback to students 365 days a year. Since 2011, Austin Peay State University has used Degree Compass, which generates course recommendations for students according to the same logic employed by Amazon and Netflix; among other things, Degree Compass predicts the probability of passing a course. Another example of algorithm-based solutions is the eAdvisor used by Arizona State University. The personalized eAdvisor guides students through their studies, and all user data and behavior are recorded.
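The general logic of such success prognoses and course recommendations can be sketched as follows. This is a deliberately simplified, hypothetical example (toy data, one logistic-regression model per course) and does not reproduce the actual algorithms of Knewton, Degree Compass, or any other system named above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records per course:
# features = [prior grade average, share of related modules completed] -> passed (1/0)
history = {
    "Statistics II": (np.array([[3.8, 0.9], [2.0, 0.1], [3.1, 0.6], [1.8, 0.2]]),
                      np.array([1, 0, 1, 0])),
    "Data Ethics":   (np.array([[3.2, 0.4], [2.5, 0.7], [1.9, 0.1], [3.6, 0.8]]),
                      np.array([1, 1, 0, 1])),
}

# One simple model per course, trained on past learners
models = {course: LogisticRegression().fit(X, y) for course, (X, y) in history.items()}

def recommend_courses(student_features, threshold=0.6):
    """Rank courses by the predicted probability that this student will pass them."""
    scores = {c: m.predict_proba([student_features])[0, 1] for c, m in models.items()}
    return sorted(((c, round(p, 2)) for c, p in scores.items() if p >= threshold),
                  key=lambda item: -item[1])

print(recommend_courses([3.0, 0.5]))
```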
The examples listed here are already being implemented successfully, but at the same time, they represent only flagship examples of AIED. Renz et al. (2020) investigated AIED in a research project and hardly found any teaching and learning solutions that were truly AI-based. Beyond that, numerous questions remain unanswered: What are the long-term consequences of the almost unlimited measurement of our learning behavior? What are the drivers and barriers? Is the European EdTech market ready for such solutions? With the help of a theoretical discursive approach, drivers and barriers in the debate about AI-based learning systems in higher education, and especially in further education, can be roughly outlined. As mentioned above, further education will become increasingly important in the course of digital transformation and lifelong learning, as job profiles and requirements change fundamentally. Following Arroway et al. (2016), institutions of higher and further education now have a great chance to proactively establish themselves as owners and drivers of the future of LA- and AI-based learning solutions. The theoretical discourse primarily comprises scientific reflections and is validated below against the findings of our qualitative study.
Drivers and barriers of AI-based solutions in higher education
Based on the 25 interviews and a discursive analysis, we identify drivers and barriers for AI-based teaching and learning solutions, which are summarized below (Fig. 6).
While most drivers and barriers can be clearly assigned, there are four elements (cultural change, sustainability, individualization, and human-digital interaction) that have been classified as both driving and inhibiting AIED. The interviews showed that cultural change, in particular, is marked by a diametric tension: the change is taking place between a traditional understanding of education and a futuristic idea of education and knowledge transfer. Learning behavior and the requirements for learning methods are constantly changing. In this context, it is evident that digital approaches increase the circulation of knowledge, as knowledge becomes permanently available. Following Loop (2016), individuals, and especially employees, turn to search engines for information and knowledge at the moment they need it, often without leaving their workflow. This shows a drastic change in the training and learning culture. While unorganized, often context- and problem-dependent real-time training is increasingly gaining importance, the relevance of corporate training is clearly decreasing. Direct access to experts and to knowledge and learning content freely available online intensifies this trend.
Drivers
Maseleno et al. (2018) accentuate the potential that LA can assume in the context of individualized learning. Here, the learner is assigned a much more active role than in the current, traditional understanding of further education. Among other things, learners are encouraged to participate effectively in creating a solid learning environment, to mindfully consider their individual adaptation needs, and to identify and apply the learning methods that work best for them. Avella et al. (2016) illustrate how the use of Big Data is beneficial across a wide range of higher education contexts. Advantages are shown in the use of academic analytics to improve learning: by analyzing Big Data, researchers can identify useful information that educational institutions, students, faculty, and researchers can use in various ways. These include targeted course offerings, curriculum development, student learning outcomes and behaviors, individualized learning, improved teacher performance, post-training employment opportunities, and improved educational research. In this regard, Manyika et al. (2011) comment: “In a big data world, a competitor that fails to sufficiently develop its capabilities will be left behind [...] Early movers that secure access to the data necessary to create value are likely to reap the most benefit.” Authors like Macfadyen, Dawson, Pardo, and Gaševic (2014) or Slade and Prinsloo (2013) follow this view. Thus, Macfadyen et al. (2014) note that education can no longer avoid the use of LA, while Slade and Prinsloo (2013) maintain, “Ignoring information that might actively help to pursue an institution’s goals seems shortsighted to the extreme.” Huda et al. (2017) argue that individualized learning increases students’ motivation and engagement. This makes learning more sustainable, as students are actively involved in shaping their learning journeys. The evaluation of the interviews with EdTech companies supports these claims on a theoretical level. Both scientific findings and EdTech providers push for greater support for data-based and data-driven systems in education.
Barriers
Although research in LA and the closely related field of AI has demonstrated high potential for understanding and optimizing learning behavior and processes (Baker & Siemens, 2014), few studies deal with the acceptance of, or barriers to, LA and AI in the specific field of further education. Tsai and Gasevic (2017) identify six general challenges related to the strategic planning and implementation of LA in institutions: shortages of leadership, equal engagement, pedagogy-based approaches, sufficient training, studies empirically validating the impact of LA, and LA-specific policies. Avella et al. (2016) identify data tracking, data collection, data analysis, optimization of the learning environment, and new technologies as the main challenges in the discourse on the use of LA in higher education. Ferguson and Clow (2017) ask whether LA improves learning in practice. With their contribution, they relate the current research results in LA to four issues: whether LA practices support learning, support teaching, are widely used, and are used ethically. The results show only vague evidence, suggesting that LA is not immune to the pressures observed in other areas. In addition, the authors note that LA is a diverse, multidisciplinary field of research, which makes it much more challenging to obtain generalizable, valid, and reliable evidence. Our findings also suggest that a lack of evidence and a lack of systematic approaches represent barriers to the development of LA- and AI-based solutions in the market.
Several studies question the promise that the use of AI-based solutions in higher education can deliver individualized learning. With reference to section “Artificial Intelligence in Further Education” of this paper, we argue that AI-based solutions in continuing education rely on the cumulative monitoring of other students’ learning behavior rather than on a holistically individual learning design. Software solutions such as Knewton can establish links between the learning behavior of individual users and derive forecasts about learning success or classifications of learning types; ultimately, however, only recommendation algorithms are generated (Dräger & Müller-Eiselt, 2017). Although there are currently no AI-based learning systems available on the observed market, another, more fundamental problem emerged in the interviews: neither data collection nor data evaluation has yet been used for individualization. In this context, authors like Bond et al. (2018) and Collins and Halverson (2009) have already stated that technology as the sole driver of innovation in education is not enough. Rather, innovations in curricula, structures, organizations, and companies are needed to make the digital revolution in education a reality. In our data, the skepticism and concerns of companies, particularly of the relevant works councils, are cited as the main barriers. In addition, the EdTech companies report only a low level of data understanding or data sovereignty among the decisive bodies in the respective customer companies. This finding coincides with the results of Seyda, Meinhard, and Placke (2018), who showed that companies use Big Data analyses only to a limited extent. Of the 25 EdTech companies surveyed, 14 offer some type of LA. As a final consequence, however, EdTech companies have only limited access to data for the further development of their services.
The conflicting relationship between the desire for an individualized teaching and learning journey and the unwillingness of customers to have data analyzed can be explained primarily by a lack of data sovereignty. The interviews also reveal a lack of technical understanding, a fear of control, and a general ignorance of technical potentials as further significant barriers to the development of digital teaching and learning formats. In addition, users are often overwhelmed by the technical possibilities and adopt a passive or negative attitude toward digital solutions. One interviewee commented: “To be honest, we don’t work with Learning Analytics because we have experienced that the data basis is missing and that the companies are so far away in terms of maturity and the HR [human resources] department is still partly working in the Stone Age. I say that quite clearly. We are simply, in reality […] still so far away from these analytics, in the area of human resources, from learning that there is simply no market for them at the moment” (Interview Case 18). Analogously, another interviewee said: “I was with my colleague a few days ago […] at a common known customer, and he said: ‘Can’t I have the tool cheaper if I skip the analysis?’ […] Well, that’s exactly what they said. Did they recognize it or guess it right? The companies are not ready yet—which is a disadvantage for the companies and for us” (Interview Case 9).
Although data quality, and thus the reliability of the recommendation algorithms generated in this way, increases with the increasing digitalization of education, we follow Dräger and Müller-Eiselt’s (2017) argument that the complexity of individual educational pathways cannot be fully represented by algorithmic solutions. In this vein, Gasevic et al. (2016) observe that instructional conditions, study subjects, and participants influence the success of studies using online learning systems and algorithm-based predictions.
The discourse also focuses on ethical aspects associated with the use of LA and AI in higher education. Ferguson (2012) advocates the development and application of ethical guidelines that regulate the use of Big Data generated by LA-based systems. Throughout the discourse, it becomes increasingly apparent how significant the uncertainty related to data collection, analysis, and transmission is. This uncertainty is often expressed in long-term perspectives, that is, in the question of what consequences the data collected today will have for participants’ future professional lives. Following Prinsloo, Slade, and Galpin’s (2012) argumentation, the identity of participants is to be viewed as a combination of permanent and dynamic attributes. Slade and Prinsloo (2013) extend this idea and describe LA as a snapshot of a learner at a certain point in time and in a certain context. Therefore, the data collected through LA should have an agreed lifetime and expiration date, as well as mechanisms for students to request the deletion of the data according to agreed criteria. In this context, Buckingham Shum and Ferguson (2012) suggest that the aim of LA should always be to improve the learning process rather than to reflect past performance. Correspondingly, Buschbacher (2019) emphasizes that making mistakes is an essential part of the learning process and requires a protected space that is not being monitored.
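As a purely illustrative sketch of the retention idea outlined by Slade and Prinsloo (2013), LA records could carry an agreed expiration date and be removed on learner request. The 180-day lifetime, field names, and data below are hypothetical assumptions and are not taken from any cited guideline.

```python
from datetime import datetime, timedelta, timezone

AGREED_LIFETIME = timedelta(days=180)  # hypothetical, agreed-upon retention period

records = [
    {"user_id": "emp-4711", "module": "quiz_basics",
     "created": datetime(2024, 1, 10, tzinfo=timezone.utc)},
    {"user_id": "emp-0815", "module": "video_intro",
     "created": datetime(2024, 6, 1, tzinfo=timezone.utc)},
]

def purge(records, deletion_requests=frozenset(), now=None):
    """Keep only records that are within their lifetime and not requested for deletion."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records
            if r["user_id"] not in deletion_requests
            and now - r["created"] <= AGREED_LIFETIME]

print(purge(records, deletion_requests={"emp-4711"}))
```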
The discursive excursus illustrates that the theoretical debate on questions of the permanence of recorded learning data, their further processing, and the autonomy of those who supply the data has already gained a certain momentum. In practice, however, these questions cannot yet be answered. The interviews with EdTech companies have shown that neither the companies nor the institutions using digital educational services are willing and/or able to make meaningful use of potential learning data. The dilemma of LA and AI in higher education is reflected precisely here. A decisive, albeit intuitive, cause of this dilemma is identified by Buschbacher (2019): “Human discourse cannot be replaced by AI.” As one of our 25 respondents said, “[Our method] follows an approach that says that people learn from people. The more digital the world becomes, the more humane our content must be” (Interview Case 10). Concerning the second level of LA in the area of further education, we found that one of the 25 respondents consciously integrated human interaction in the business model (Interview Case 19). The integration of human interaction in digital learning processes should be emphasized here once again as a consensual understanding across all the interviews.