Many questions emerge from this issue of the journal: Is AI different from previous technical innovations in terms of enhancing human potential? Is it possible to reduce the gap between the expectations of what AI can do for education and what the evidence shows us? What are the main limitations of AI when used to support and develop human capacities? And do these applications facilitate the development of the skills and knowledge needed in a digital age?
The above questions are critical and need to be considered in future studies of artificial intelligence and its integration into different educational contexts. They are not presented merely as rhetoric, but serve to highlight some of the challenges that researchers, education managers and policy makers will need to consider when writing the future pages of this unfinished chapter.
Is AI at the service of higher education, or are higher education institutions simply serving the development of AI?
Obviously, this is not a chicken-or-egg question. We have so far explored the extent to which these new developments can benefit education. Machine learning, deep learning and the intensive use of data might eventually support and improve many of the activities conducted within higher education, but beneath the surface a different reality is unfolding. While higher education institutions across the globe broadly promote the adoption and intensive use of diverse digital technology platforms, less visible exchanges are taking place. While Google, Facebook, Amazon AWS, YouTube and others offer their services ‘for free’ to ‘serve’ and ‘empower’ students and staff, many of these activities contribute to training the algorithms of these large technology providers.
It is also evident that the higher education community benefits strongly from incorporating the AI services offered by these global tech providers (e.g., big data or pattern recognition studies). But it would be a serious error to disregard the role that education institutions have adopted (consciously or not) as data providers. Vast amounts of personal data are collected from universities every day to train algorithms that mostly increase the profit margins of large tech companies (Zuboff, 2019).
“There is a concentration of power. Currently, AI research is very public and open, but it’s widely deployed by a relatively small number of companies at the moment” (Ford, 2018).
These less-than-obvious trade-offs are something that private companies have understood very well, especially considering how lucrative that data and those AI services can be for the ‘big tech’ companies. Are higher education institutions in a good position to redefine the power relationship in this data-intensive world?
Learning is not only about having the right information at the right time. What else should be considered?
In this equation, AI is not the sole black box. The human brain, and its learning processes, is also a black box. There is no unique or specific magic formula for delivering learning to everyone, in any place, at any time. More than 40 years of research has shown that learning takes place when a number of complex intervening conditions coalesce. These include relevant context, appropriate motivation, requisite time and opportunity, personal skills, and other social and emotional factors that may extend beyond our cognitive capacity.
This is where pedagogy plays such a vital role. Critical factors need to be considered, such as how the technology and information are presented, for which community, with what levels of skill, and with what motivations (intrinsic and extrinsic). The education and technology research community is familiar with previous enthusiastic initiatives that did not work as planned because these critical components were not properly addressed (the list is too long to include here). Such lessons need to be considered by communities that work at the intersection of artificial intelligence, learning and higher education.
Inevitably, in a world in which AI is expected to gain ever more relevance in the labour market, the skills required cannot be the same ones we relied on when Windows 95 was moving from offices into homes. Nor can we use the same thinking we applied when the Internet became the new environment for staying always connected. A scenario in which humans and automated systems work together seems closer than ever. New educational challenges keep being added to the long lists that higher education institutions are already accumulating. Will there be enough time, capacity and opportunity to retrain the workforce before the effects of AI have a wider impact on the labour market?
The education community will need to be re-educated to work under the ‘new normal’
It is clear that within higher education some communities promote, defend and celebrate a broader adoption of new technologies. These techno-enthusiast communities coexist with more techno-resistant groups: those who are reluctant to promote technical change, are sceptical of its opportunities, or are worried about the consequences it can cause.
It may be too early to determine whether artificial intelligence will be successful in enabling new forms of learning and improving outcomes. As can be seen from this edition, the evidence to date is still far from promising. However, it is increasingly evident that AI can play an important role in reducing the time and effort needed for administrative tasks. Interesting examples can already be seen in recruitment, chatbots, voice and image recognition, predictive information search, pattern recognition, and automated filtering and recommender systems.
Although it might sound strange, it will be important for education institutions to unlearn and relearn (Wheeler, 2015). Re-education in this context means avoiding extreme views and positions, reaching beyond the radical or reductionist binaries of ‘love or hate’, in order to navigate the complexity of these new contexts, in which an increasing number of information transactions within higher education will be mediated through AI.
It is critical to understand the consequences of the datafication of learning
Increasingly, the broad utilisation of large volumes of data in higher education has improved efficiency and, where possible, optimised processes, reduced costs (replacing staff) or delivered content at scale. However, as in other sectors, this is not an innovation necessarily created within higher education; it has been promoted primarily by external developers and commercial providers. As indicated earlier, this situation will demand better-educated institutions that develop the capacities needed to thrive in this new context.
Higher education institutions will become large data-warehouse organisations in the coming decades (if they are not already). Educators and administrators in higher education will need to develop, and outsource when needed, novel data-intensive technical capacities. It is evident that the rapid development of digital technologies has outstripped the capacity of humans to develop the prerequisite digital competencies to navigate this context. It is insufficient simply to use, or interact with, these new and emerging systems. We need to exercise critical thought, so that we become capable of understanding, assessing and anticipating the unintended consequences of the datafication of learning. Currently, most organisations lack this capacity, and higher education communities are no different.
Although for the last two decades there have been concerted efforts to promote, develop and update the digital skills of instructors and faculty, researchers and administrators, the challenges now seem more complex. In recent years, one of the interesting developments in the evolution of AI is the diversification of new interfaces. They extend far beyond the keyboard and mouse, meaning that users (especially non-expert users) can interact with AI simply by using voice or image recognition. This not only makes interaction with advanced systems more transparent, but also opens up possibilities for those with lower levels of skill to benefit.
The expectation that everyone should think and act like a computer scientist is unrealistic. The challenge seems to be enabling higher education to emerge fully into the world of AI without compromising its core principles and values:
- developing the capacity to avoid bias and to ensure diversity,
- protecting privacy,
- developing transparent data policies,
- integrating regular ethical data impact assessments of the systems adopted, and
- treating personal data as a fundamental right (at least allowing the three basic rights of usus, abusus and fructus).
In this changing landscape, it will be increasingly critical for higher education institutions to become agile learning organisations, capable of quickly adopting new practices and dynamics.
Research is needed to better understand the (unintended) consequences and opportunities
‘Trust but verify.’ This phrase was made famous by Ronald Reagan in December 1987, after the signing of the INF Treaty with Mikhail Gorbachev. Although AI in higher education is still at an emerging stage, it is increasingly important to act so as not to ignore, as has already happened in various governments and in the health sector, the unintended consequences that increasingly prominent black-boxed AI systems can generate.
The more people rely on AI systems to learn, upskill, or verify their knowledge or skills, the more important it will be to remain open but vigilant. As already explained, this work in progress will require multi-disciplinary expertise, and research is also needed so we might better understand how AI technology can help reduce existing and future inequities. As Neil Postman (1983) wrote almost 40 years ago, several key questions are still relevant:
- What is the problem to which this technology is the solution?
- Which people and what institutions might be most seriously harmed by a technological solution?
- What new problems might be created because we have solved this problem?
- What sort of people and institutions might acquire special economic and political power because of technological change?
Agility, smartness, openness and creativity will be required. As Diamandis and Kotler (2020) add, ‘We tend to think linearly about the dangers we face, trying to apply the tools of yesterday to the problems of tomorrow’.