Editorial
Can artificial intelligence transform higher education?
International Journal of Educational Technology in Higher Education volume 17, Article number: 42 (2020)
The aim of this edition
Many have argued that the development of artificial intelligence has more potential to change higher education than any other technological advance. For instance, Klutka et al. (2018) have listed the following goals for AI in higher education:
- Increase outcomes
- Increase access
- Increase retention
- Lower cost
- Decrease time to completion
However, these are aspirational goals. What is the reality, at least as we enter the 2020s? The purpose of this special edition, as expressed in the journal’s call for papers, is to examine the potential and actual impact of artificial intelligence (AI) on teaching and learning in higher education.
Scope of the edition
In the call for papers, authors were invited to submit on the following topics:
- pedagogical implications of AI for teaching and learning in HE;
- research on the effectiveness of AI applications for teaching and learning in HE;
- the impact of AI on the assessment of learning;
- the potential of AI technologies to enhance teaching and learning in HE;
- the impact of AI on the role of human teachers in HE;
- social and/or ethical issues in using AI for teaching and learning in HE;
- costs and benefits of using AI in teaching and learning in HE;
- the impact of AI on the management and administration of teaching and learning in HE.
Priority was given to papers that contained empirical research on outcomes and practices, although the editors were also interested in social and ethical issues that have arisen (or could arise) from the application of AI in higher education.
Resulting submissions
When we winnowed out articles that did not meet the fairly broad criterion of being about the use of AI to support teaching and learning, we were left with 23 articles for review. This relatively small number was in itself surprising, given the interest in the potential of AI in higher education.
After review, only four of the 23 articles were considered appropriate for publication, based on their academic quality. In other words, only four of the submitted articles provided sound empirical evidence about the effect of AI applications on teaching and learning in higher education and one of these was a (thorough) review of the previous literature (Zawacki-Richter et al.), rather than a specific study itself. We will return to the reasons for the relatively small number of acceptable papers later in the editorial, but first let us look at the articles that have been accepted.
Brief overview of accepted papers
We recommend that readers start with Zawacki-Richter et al.’s ‘Systematic review of research on artificial intelligence applications in higher education.’ The authors reduced an initial trawl of 2656 articles published between 2007 and 2018 in peer-reviewed journals down to 146 articles that met their selection criteria.
The Zawacki-Richter et al. paper gives readers a good overview of the various areas where AI is being applied in higher education, as well as an indication of which areas researchers have tended to focus on. From these 146 articles, they were able to identify four key areas of AI applications for teaching and learning:
- profiling and prediction
- intelligent tutoring systems
- assessment and evaluation
- adaptive systems and personalisation.
One of the areas identified by Zawacki-Richter et al. was the use of AI to predict final academic performance based on test results earlier in a course (profiling and prediction).
The second paper in this issue, by Akçapinar, Altun and Askar, found that 74% of the students who were ultimately unsuccessful at the end of term in an online computer science course in Turkey could be accurately identified by a k-nearest neighbour (kNN) algorithm as early as three weeks into the course.
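To make the kind of analysis behind such findings concrete, the sketch below shows a minimal early-warning classifier built with kNN over early-course activity data. Everything in it (the feature names, the synthetic data and the week-three cut-off) is an illustrative assumption; it is not the model, the features or the dataset used by Akçapinar, Altun and Askar.

```python
# Minimal, illustrative sketch of an early-warning kNN classifier.
# Feature names, synthetic data and the week-3 cut-off are assumptions
# for illustration only, not those of the study discussed above.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import recall_score

rng = np.random.default_rng(42)

# Hypothetical week-3 activity features for 200 students:
# logins, pages viewed, forum posts, assignments submitted.
X = rng.poisson(lam=[15, 120, 4, 2], size=(200, 4)).astype(float)

# Hypothetical labels: 1 = unsuccessful at end of term, 0 = successful.
# Lower early activity makes failure more likely in this toy data.
risk = 1 / (1 + np.exp(0.15 * (X[:, 0] - 12) + 0.02 * (X[:, 1] - 100)))
y = (rng.random(200) < risk).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Scale features, then fit kNN (k=5 is an arbitrary illustrative choice).
model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
model.fit(X_train, y_train)

# The figure of interest in such studies is how many of the eventually
# unsuccessful students are flagged this early, i.e. recall on class 1.
pred = model.predict(X_test)
print("Recall for unsuccessful students:", recall_score(y_test, pred))
```

In practice, the recall achieved at week three would be reported alongside false-positive rates, since flagging too many students who would have passed anyway has its own costs for intervention resources.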
The third paper in this issue, by Tsai et al., also focused on prediction, in this case identifying students entering higher education in Taiwan with a high risk of subsequent drop-out, as well as the factors associated with that risk. (In Taiwan, the university drop-out rate increased from 4% in 1988 to 15% in 2017, associated with a broadening of the range of students accepted into higher education.) Significantly, Asia University in Taiwan has a policy of ‘Give up on no-one’ and has instituted differentiated learning paths and instructor intervention to adapt to individual differences. The study used AI techniques not only to predict drop-out but also to identify the significant variables affecting student learning performance, and used these as a basis for providing remedial interventions.
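The complementary step, identifying which variables matter most so that remedial interventions can be targeted, can be illustrated with a generic feature-importance analysis. The sketch below uses permutation importance on a random forest over invented student variables; it is a hedged illustration of the general technique, not the model, features or data used by Tsai et al.

```python
# Illustrative sketch: ranking hypothetical predictors of drop-out by
# permutation importance. The variables, data and model choice are
# assumptions for illustration, not those of the study discussed above.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 500

# Hypothetical student-level variables.
data = pd.DataFrame({
    "entrance_score": rng.normal(60, 10, n),
    "first_sem_gpa": rng.normal(2.8, 0.6, n),
    "absences": rng.poisson(5, n),
    "works_part_time": rng.integers(0, 2, n),
})

# Hypothetical drop-out labels, driven mainly by GPA and absences.
logit = -1.5 * (data["first_sem_gpa"] - 2.8) + 0.2 * (data["absences"] - 5)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    data, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Permutation importance: how much does shuffling each variable hurt
# predictive accuracy? Larger drops suggest more influential variables,
# which could then guide where remedial interventions are aimed.
result = permutation_importance(model, X_test, y_test, n_repeats=20,
                                random_state=0)
for name, imp in sorted(zip(data.columns, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```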
The fourth paper, by Renz and Hilbig, is quite different from the others. It explores how commercial educational technology companies are approaching the use of learning analytics (LA) and AI in their products and services for the further education market (vocational and professional training). They focus on the following two research questions:
- What is the current role of LA- and AI-based learning solutions in the business models of EdTech companies regarding the field of further education?
- What are the drivers and barriers of LA- and AI-based learning solutions in the EdTech sector regarding further education?
They found that the current use of learning analytics and artificial intelligence in the field of further education is only at a preliminary stage, mainly due to a lack of demand from educational institutions, and they propose some reasons for this.
The purpose of this special edition was to examine the potential and actual impact of artificial intelligence (AI) on higher education. In terms of the actual impact, we must conclude that on the evidence presented, it is currently marginal at best.
Of the articles submitted, few showed evidence of any significant influence of AI on teaching and learning in post-secondary or higher education. The main impact has been on the prediction of student success or failure. There was no valid evidence of improved learning outcomes, nor of radical or even tangential pedagogical changes resulting from AI applications.
AI and HE: the gap between expectations and reality
The potential
As an emerging field of expertise, educational AI has the potential to transform our practices and the experiences of our students. As Om Malik has written, the spread of more sophisticated technologies and more robust algorithms not only frees the imagination but also offers new promises, such as the possibility of maintaining more productive interactions with much less effort and at almost no cost.
AI, like any technology applied to education, can be applied at different levels. In the particular case of higher education, proposals have been directed at two levels: strategic or institutional applications, and direct applications to teaching and learning.
Strategic or institutional applications of AI: learning analytics
Technologically speaking, this application of AI deals with big data, statistics and machine learning; from the point of view of education, most such applications tackle the problems of student selection, drop-out and group behaviour tendencies, analysing such data as a means of predicting, and eventually redirecting, strategies for future students.
Many research papers are available in this area, but most of this research has been done by computer scientists (albeit using data from real educational institutions), and the resulting systems have not yet been fully implemented within institutions. This area is likely to grow in importance in higher education in the coming decades, once institutions improve their data collection and implement policies on how to use such data. However, this area of application is not the focus of this edition.
Teaching and learning applications of AI
A second level is concerned directly with the teaching and learning process. To what extent can AI facilitate, or even manage, the process of teaching and learning itself? To date this area has consisted mainly of AI assistants such as chatbots, techniques for personalising and adapting learning to the particular characteristics or needs of groups or individuals, and, more generally, any educational software that uses AI techniques to directly support the process of learning. This was the focus of this particular edition.
The reality
It is fair to say that, as editors, we were disappointed with the results of our call for papers. Although the four papers accepted were of a high standard, we anticipated a larger number of submissions, and more submissions of high quality, in terms of showing the influence of AI on teaching and learning in higher education.
Artificial intelligence is in widespread use in some areas of society. In terms of its direct impact on teaching and learning, though, much has been promised but little has yet been achieved, on the basis of both the Zawacki-Richter et al. literature review and the submissions to this journal. We suggest there are several reasons for this inertia.
Tegmark (2017) argues that we have yet to attain the level of artificial general intelligence, where the processing capabilities of machines match the cognitive capabilities of humans, while Bostrom (2017) suggests that we have endured an ‘AI winter’ in which AI proponents suffered a loss of credibility.
In recent years, however, there has been a resurgence in the development of machine intelligence, deep learning and cognitive architectures, and there are those who continue to predict a brighter future for AI across all sectors of society (Kaku, 2011; Kelly, 2017). These developments are sometimes described as ‘modern’ AI, to differentiate them from earlier applications of computer based learning, perhaps inaccurately described as AI previously.
However, on the evidence of the papers submitted for this edition of the journal, there is at the moment little sign of a major breakthrough in the application of ‘modern’ AI to teaching and learning in higher education, with the possible exception of learning analytics. As G.K. Chesterton once wrote about the Christian ideal, ‘It is not that it has been tried and been found wanting; it has been found difficult and untried’.
Now we must be careful in interpreting this. It is possible that the journal failed to catch the attention of those who do have valid research in this area. However, it is significant that of those articles that were submitted, very few addressed the majority of educational questions posed in the call for papers.
Why is AI in HE lagging behind other areas of AI application?
Much like ‘the cloud,’ ‘big data,’ and ‘machine learning’ before it, the term ‘artificial intelligence’ has been hijacked by marketers and advertising copywriters. A lot of what people are calling ‘artificial intelligence’ is really data analytics—in other words, business as usual. The emphasis is on augmentation, in which intelligent software helps people to interact and deal with the increasingly digital world we live in and the immense amounts of data this generates. This also seems to hold true in higher education, when we examine the papers submitted for this journal edition.
Another reason why AI to date has had so little impact on teaching and learning in higher education is that education tends generally to lag behind where new technologies are concerned. Lack of willingness to take risks, or to adopt new innovations, and lack of funding for anything different from traditional methods of teaching militate against the adoption of new technologies in all sectors of education, learning and development (Wheeler, 2019). Many educators need to be convinced that a new idea can enrich or extend learning outcomes and experiences, so the education sectors remain highly conservative toward new technologies. This was clear from the Renz and Hilbig paper in this edition.
Lastly, most so-called AI applications for teaching and learning today are heavily focused on content presentation and on testing for understanding and comprehension. In particular, Zawacki-Richter et al. make the point that most AI developments for teaching and learning – or at least the research papers – are by computer scientists, not educators. Developers with this background tend to use models of learning based on how computers or computer networks work (since, of course, it is a computer that has to run the AI algorithms). As a result, such AI applications tend to adopt a very behaviourist model of learning: present/test/feedback. Lynch (2017) argues that:
“If AI is going to benefit education, it will require strengthening the connection between AI developers and experts in the learning sciences. Otherwise, AI will simply ‘discover’ new ways to teach poorly and perpetuate erroneous ideas about teaching and learning.”
Comprehension and understanding are indeed important foundational skills, but AI has so far done little to help learners develop higher-order skills such as critical thinking, problem-solving, creativity and knowledge management.
In general, many of those applying AI to teaching and learning in higher education, especially those from a computer science background, have not recognised or accepted that learning is developmental and constructed, and have instead imposed a method of teaching based on behaviourism and an objectivist epistemology that does not represent well the complexity of learning in higher education. Even behavioural psychologists believe that knowledge consists of the development of complex schema or constructs of ideas.
Thus the testing of acquisition of knowledge in the form of small chunks of memory or comprehension is a weak foundation for personalisation, intelligent tutoring, and learner assessment. Certainly, most educators believe that to develop the high level intellectual skills of critical thinking, creativity, and problem-solving, and emotional skills such as empathy - skills very much needed in a digital age - a more learner-centred, constructivist approach to education is required.
It is worth noting that although most AI applications to teaching and learning to date have focused more on ‘basic’ levels of learning such as memorization and testing comprehension, other technologies such as simulations, game-based learning and virtual reality have had more success in teaching skills such as problem solving, critical thinking and creativity.
What needs to be done to make AI more relevant to teaching and learning in higher education?
As Renz and Hilbig found, the current use of learning analytics and artificial intelligence in the field of further education is only at a preliminary stage, mainly due to a lack of demand from educational institutions. When facing potentially disruptive transformations that might benefit higher education, it will be critical to understand these disruptions not only from the specific perspective of artificial intelligence, but also from a more comprehensive and diverse understanding of the phenomena of teaching and learning. Diversity of perspectives remains one of the most powerful strategies for approaching the challenges and opportunities that lie ahead.
The need for multidisciplinary research
In most cases, the research produced by experts in higher education has not necessarily been multidisciplinary. Although the last 30 years have seen a growing number of multidisciplinary research centres, this has not defined traditional research in higher education during, or prior to, the twentieth century. What we observe in this special issue is not an exception.
Interestingly, the question of how AI can permeate the walls of higher education seems to be explored mainly by computer scientists, data experts and other informatics and STEM professionals, rather than by other disciplines. Zawacki-Richter et al. pointedly ask: where are the educators? Most research on AI in higher education appears to have been done by computer scientists, and not surprisingly their focus is on the tools, the algorithms, and their validity and application, rather than on the impact on learning outcomes. Where they do show interest in learning outcomes, it is mainly as validation for the algorithms. As a result, their treatment of learning outcomes tends to be superficial, with attention focused on what is easily measurable, such as short-term memory testing or student drop-out.
Much work still needs to be done to bring together communities of educators and experts from other relevant areas, such as social informatics, sociology, psychology, law, anthropology, and other fields representing the humanities and social sciences.
Educators need to be more involved
Renz and Hilbig note:
“The interviews with EdTech companies have shown that neither the companies nor the institutions using digital educational services are willing and/or able to make meaningful use of potential learning data.”
This suggests that educators themselves are not paying enough attention to the potential of AI. Rather, they tend to focus on the negatives of AI, such as ethical issues and the potential to replace teachers with machines. This is one possible explanation of the lack of papers from educators for this edition – or in Zawacki-Richter et al.’s review. However, this implies too sharp a division between the benefits and dangers of AI. What is needed is greater involvement by educators in the potential applications of AI, to identify both its strengths and weaknesses.
To date, from the evidence of this journal and the systematic review conducted by Zawacki-Richter et al., educational researchers are standing on the sidelines, criticizing but not participating. They need to collaborate with computer scientists to focus research upon the potential improvements to learning and pedagogy that AI may be able to offer. As the field of AI in education is still nascent, we anticipate that as it develops, we will see further papers that explore emerging trends and practices.
The lack of ‘fit’ between current AI applications and modern educational theories
With regard to teaching and learning, the first observation is how far we still have to go from what is currently possible to what AI promises. What works well in finance or medicine does not necessarily translate to teaching and learning contexts.
Although at a mass level human behaviour is predictable, and to some extent controllable, each student is an individual and will respond slightly differently from others in the same context. There is a strong affective or emotional influence in learning. Students often learn better when they feel that the instructor or teacher cares. In particular, students wish to be treated as individuals, with their own interests, approaches to learning, and some sense of control over their learning.
Because of these emotional and personal aspects of learning, students need to relate in some way to their instructor. At one level, learning can be seen as a complex activity where only a relatively minor part of the process can be effectively automated, while at a personal level it is an intensely human activity that benefits from relationships and social interaction.
There is strong research evidence (see, for instance, Garrison, 2007) that this relational aspect of learning can be managed equally well online and face-to-face, but it requires computing that supports communication as well as delivering content and testing its acquisition. Chatbots are one way to do this; machine learning and visualisation techniques also have potential here.
The perceived threat to teachers and instructors
AI advocates often argue that they are not trying to replace teachers but to make their life easier or more efficient. This should be viewed cautiously. The key driver of AI applications is cost-reduction, which means reducing the number of teachers, as this is the main cost in education.
Nick Bostrom states:
“...I think that there might be a tendency to exaggerate the impacts on the labor market. It is going to take time to really roll out systems on a large enough scale to have a big impact. Over time, though, I do think that advances in machine learning will have an increasingly large impact on human labor markets and if you fully succeed with artificial intelligence, then yes, artificial intelligence could basically do everything. In some respects, the ultimate goal is full unemployment. The reason why we do technology, and why we do automation is so that we don’t have to put in so much effort to achieve a given outcome. You can do more with less, and that’s the gestalt of technology” (Ford, 2018).
However, Diamandis and Kotler (2020) remind us that “productivity is the main reason companies want to automate workforces. Yet, time and again, the largest increases in productivity don’t come from replacing humans with machines, but rather from augmenting machines with humans.” Furthermore, Klutka et al. (2018) claim that AI can cope with many of the routine functions currently undertaken by instructors and administrators, freeing them up to solve more complex problems and to connect with students on deeper levels.
This reinforces the view that the role of the instructor or teacher needs to move away from content presentation, content management and testing of content comprehension – all of which can be done by computing – towards skills development.
The good news is that, used in this way, AI supports teachers and instructors, but does not replace them. Perhaps the less welcome news for some is that many teachers and instructors will need to change the way they teach or they will become redundant. Clearly, the key lesson from all AI developments is that we will need to pay increased attention to the affective and emotional aspects of life in a robotic-heavy society, so teachers will become even more important.
Challenges and questions for the future
Many questions emerge from this issue of the journal. Is AI different from previous technical innovations in terms of enhancing human potential? Is it possible to reduce the gap between the expectations of what AI can do for education and what the evidence shows us? What are the main limitations of AI when used to support and develop human capacities? And do current applications facilitate the development of the skills and knowledge needed in a digital age?
These questions are critical and need to be considered in future studies of artificial intelligence and its integration in different educational contexts. They are not merely rhetorical, but serve to highlight some of the challenges that researchers, education managers and policy makers will need to consider when writing the future pages of this unfinished chapter.
Is AI at the service of higher education, or are higher education institutions simply serving the development of AI?
Obviously, this is not a chicken-or-egg question. We have so far explored the extent to which these new developments can benefit education. Machine learning, deep learning and the intensive use of data might eventually support and improve many of the activities conducted within higher education, but beneath the surface a different reality is unfolding. While higher education institutions across the globe broadly promote the adoption and intensive use of diverse digital technology platforms, less visible activities are occurring. While Google, Facebook, Amazon AWS, YouTube and others offer their services ‘for free’ to ‘serve’ and ‘empower’ students and staff, the activities of those students and staff are in turn helping to train the algorithms of these large technology providers.
It is also evident that the higher education community benefits considerably from incorporating the AI services offered by these global tech providers (e.g., big data or pattern recognition studies). But it would be a serious error to disregard the role that education institutions have adopted (consciously or not) as data providers. Vast amounts of personal data are collected from universities every day to train algorithms that mostly increase the profit margins of large tech companies (Zuboff, 2019).
“There is a concentration of power. Currently, AI research is very public and open, but it’s widely deployed by a relatively small number of companies at the moment” (Ford, 2018).
These less than obvious trade-offs are something that private companies have understood very well, especially given how lucrative such data and AI services can be for the ‘big tech’ companies. Are higher education institutions in a good position to redefine the power relationship in this data-intensive world?
Learning is not only about having the right information at the right time. What else should be considered?
In this equation, AI is not the sole black box. The human brain, and its learning processes, is also a black box. There is no unique or specific magic formula for delivering learning to everyone in any place, at any time. More than 40 years of research has shown that learning takes place when a number of complex intervening conditions coalesce: a relevant context, appropriate motivation, the requisite time and opportunity, personal skills, and other social and emotional factors that may extend beyond our cognitive capacity.
This is where pedagogy plays such a vital role. Critical factors need to be considered, such as how the technology and information are presented, for what community, with what levels of skill, and with what motivations (intrinsic and extrinsic). The education and technology research community is familiar with previous enthusiastic initiatives that did not work as planned because these critical components were not properly addressed (the list is too long to include here). Such lessons need to be heeded by communities working at the intersection of artificial intelligence, learning and higher education.
Inevitably, in a world in which AI is expected to gain more and more relevance in the labour market, the skills required cannot be the same as those we relied on when Windows 95 was moving from offices to homes, nor can we use the same thinking we used when the Internet became the new environment for staying always connected. A scenario in which humans and automated systems work together seems closer than ever. New educational challenges are steadily adding to the long list that higher education institutions are already accumulating. Will there be enough time, capacity and opportunity to retrain the workforce before the effects of AI have a wider impact on the labour market?
The education community will need to be re-educated to work under the ‘new normal’
It is clear that within higher education some communities promote, defend and celebrate a broader adoption of new technologies, but these techno-enthusiast communities coexist with more techno-resistant groups: those who are reluctant to promote technical change, are sceptical of its opportunities, or are worried about the consequences it may bring.
It may be too early to determine whether artificial intelligence will be successful in enabling new forms of learning and improving outcomes. As can be seen from this edition, the evidence to date is still far from promising. However, it is increasingly evident that AI can play an important role in reducing time and effort to conduct administrative tasks. Interesting examples can be already seen in recruitment, chatbots, voice and image recognition, predictive information search, pattern recognition, and automated filtering and recommender systems.
Although it might sound strange, it will be important for education institutions to unlearn and relearn (Wheeler, 2015). Re-education in this context means avoiding extremist views and positions, reaching beyond the radical or reductionist binaries of ‘love or hate’, in order to navigate the complexity of these new contexts, where an increasing number of information transactions within higher education will be mediated through AI.
It is critical to understand the consequences of the datification of learning
Increasingly, the broad utilisation of large volumes of data in higher education has improved efficiency and, where possible, optimised processes, reduced costs (by replacing staff) or delivered content at scale. However, as in other sectors, this is not an innovation created within higher education; it has been promoted primarily by external developers and commercial providers. As indicated earlier, this situation will demand that institutions become better educated, developing the capacities needed to thrive in this new context.
Higher education institutions will become large data warehouse organizations in the coming decades (if they are not already). Educators and administrators in higher education will need to develop, or outsource where needed, novel data-intensive technical capacities. It is evident that the rapid development of digital technologies has outstripped the capacity of humans to develop the digital competencies needed to navigate this context. It is not enough simply to use or interact with these new and emerging systems. We need to exercise critical thought, so that we become capable of understanding, assessing and anticipating the unintended consequences of the datification of learning. Currently, most organizations lack this capacity, and higher education communities are no different.
Although for the last two decades there have been concerted efforts to promote, develop and update the digital skills of instructors, faculty, researchers and administrators, the challenges now seem more complex. One of the interesting recent developments in the evolution of AI is the diversification of interfaces. They extend far beyond the keyboard and mouse, meaning that users (especially non-expert users) can interact with AI simply through voice or image recognition. This not only makes interaction with advanced systems more transparent, but also opens up possibilities for those with lower levels of skill to benefit.
Our expectation that everyone should think and act like a computer scientist is unrealistic. The challenge is to enable higher education to move fully into the world of AI without compromising its core principles and values:
- developing the capacity to avoid bias and to ensure diversity;
- protecting privacy;
- developing transparent data policies;
- integrating regular ethical data impact assessments of the systems adopted; and
- treating personal data as a fundamental right (at least allowing the three basic rights of usus, abusus and fructus; see the note on data ownership below).
Increasingly in this changing landscape, it will be critical that higher education institutions become agile learning organisations capable of quickly adopting new practices and dynamics.
Research is needed to better understand the (unintended) consequences and opportunities
‘Trust but verify.’ This phrase was made famous by Ronald Reagan in December 1987 after the signing of the INF Treaty with Mikhail Gorbachev. Although AI in HE is still at an emerging stage, it is increasingly important to act now so as not to ignore, as has already happened in government and in the health sector, the unintended consequences that black-boxed AI systems can generate as they take on an ever larger role.
The more people rely on AI systems to learn, upskill, or verify their knowledge or skills, the more important it will be to remain open but vigilant. As already explained, this work in progress will require multidisciplinary expertise, and research is also required so that we might better understand how AI technology can help reduce existing and future inequities. As Neil Postman (1983) wrote almost 40 years ago, several key questions are still relevant:
- What is the problem to which this technology is the solution?
- Which people and what institutions might be most seriously harmed by a technological solution?
- What new problems might be created because we have solved this problem?
- What sort of people and institutions might acquire special economic and political power because of technological change?
Agility, smartness, openness and creativity will be required. As Diamandis and Kotler (2020) add, ‘We tend to think linearly about the dangers we face, trying to apply the tools of yesterday to the problems of tomorrow’.
Conclusions
In education, AI remains a sleeping giant. ‘Breakthrough’ AI applications for teaching and learning are unlikely to emerge from within mainstream higher education. They are more likely to arrive from outside the formal post-secondary system, through organizations such as LinkedIn, lynda.com, Amazon or Coursera, that have access to the large data sets that make the applications of AI scalable and profitable.
However, this would pose an existential threat to public schools, colleges and universities. The issue then becomes: who is best placed to protect and sustain the individual in a digital age: multinational corporations or a public education system?
The key question, then, is whether technology should aim to replace teachers and instructors through automation, or whether it should be used to empower not only teachers but also learners. Above all, who should control AI in education: educators, students, computer scientists, or large corporations? These are indeed existential questions if AI does become immensely successful in reducing the costs of teaching and learning: but at what cost to us as humans? Fortunately AI is not yet in a position to pose such a threat, but this will not always be the case. The tsunami is coming.
Notes
Douilhet, E., & Karanasiou, A. P. (2019). Legal responses to the commodification of personal data in the era of big data: The paradigm shift from data protection towards data ownership. In Web Services: Concepts, Methodologies, Tools, and Applications (pp. 2076–2085). IGI Global.
References
Bostrom, N. (2017). Superintelligence: Paths, dangers, strategies. Oxford: Oxford University Press.
Diamandis, P. H., & Kotler, S. (2020). The Future Is Faster Than You Think: How Converging Technologies Are Transforming Business, Industries, and Our Lives. Simon & Schuster.
Ford, M. (2018). Architects of intelligence: The truth about AI from the people building it. Packt Publishing Ltd.
Garrison, D. R. (2007). Online community of inquiry review: Social, cognitive, and teaching presence issues. Journal of Asynchronous Learning Networks, 11(1), 61–72.
Kaku, M. (2011). Physics of the future: The inventions that will transform our lives. London: Penguin Group.
Kelly, K. (2017). The inevitable: Understanding the 12 technological forces that will shape our future. London: Penguin Group.
Klutka, J., et al. (2018). Artificial intelligence in higher education: Current uses and future applications. Louisville: Learning House.
Lynch, J. (2017). How AI Will Destroy Education https://buzzrobot.com/how-ai-will-destroy-education-20053b7b88a6.
Postman, N. (1983). The disappearance of childhood. London: W.H. Allen.
Tegmark, M. (2017). Life 3.0: Being human in the age of artificial intelligence. London: Penguin Group.
Wheeler, S. (2015). Learning with ‘e’s: Educational theory and practice in the digital age. Carmarthen: Crown House.
Wheeler, S. (2019). Digital learning in organizations. London: Kogan Page.
Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Profile Books.
Cite this article
Bates, T., Cobo, C., Mariño, O. et al. Can artificial intelligence transform higher education? International Journal of Educational Technology in Higher Education, 17, 42 (2020). https://doi.org/10.1186/s41239-020-00218-x