The results obtained in this research compare unevenly with those obtained in other studies; we discuss these differences in the current section.
In online teaching environments, student activity generates a huge amount of information that is disseminated on the platforms within which the activity takes place. Based on the results and the teachers’ work, it is important to note that teachers must understand what happens in the learning activity if they are to assess it, at both the individual and the group level. The tools used by researchers to analyze online discourse are inadequate (Law, Yuen, Huang, Li, & Pan, 2007), mainly because they have difficulty managing different information formats, their quantitative indicators do not sufficiently measure the quality of learning, and participation indicators and content analysis are handled with separate tools. In contrast, this research tackles these issues from a different point of view: DIANA 2.0 was integrated into the virtual campus, giving teachers the opportunity to use the same data as that handled by the LMS. We also provided teachers with a variety of heterogeneous indicators and metrics, represented not only in text mode but also visually (visual learning analytics): bar charts, tag clouds, gradient meters, etc. Likewise, DIANA 2.0 could export the analyses generated in XML format, making it possible to share information between analytic tools and to combine interaction metrics with low-level content analysis metrics.
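To make the interoperability point concrete, the following is a minimal sketch, assuming a hypothetical export schema and invented per-student values (DIANA 2.0’s actual XML structure is not reproduced here), of how per-student interaction and content metrics could be serialized to XML so that other analytic tools can reuse them:

```python
# Illustrative sketch only: the element names and values are hypothetical,
# not DIANA 2.0's real export schema.
import xml.etree.ElementTree as ET

# Hypothetical per-student results combining interaction metrics (messages,
# replies received) with a low-level content metric (average words per message).
students = [
    {"id": "s01", "messages": 14, "replies_received": 9, "avg_words": 87.5},
    {"id": "s02", "messages": 6,  "replies_received": 2, "avg_words": 41.0},
]

root = ET.Element("discussionAnalysis", forum="unit-3-debate")
for s in students:
    node = ET.SubElement(root, "student", id=s["id"])
    ET.SubElement(node, "messages").text = str(s["messages"])
    ET.SubElement(node, "repliesReceived").text = str(s["replies_received"])
    ET.SubElement(node, "avgWords").text = str(s["avg_words"])

# Write a shareable XML file that another analytic tool could import.
ET.ElementTree(root).write("analysis.xml", encoding="utf-8", xml_declaration=True)
```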
Experts in online learning in higher education predict that learning analytics will be used not only to identify students’ behavior patterns but also to improve their learning and retention rates (Avella, Kebritchi, Nunn, & Kanai, 2016). The current study’s results support both of these claims, even though the improvements identified were not as extensive. In this vein, Viberg, Hatakka, Bälter, and Mavroudi (2018) classified learning analytics research in higher education in terms of evidence for learning and teaching, pointing out that more than half of the studies analyzed showed clear evidence of an improvement in learning outcomes and in support for teaching. The results of our contribution back up the meta-analysis carried out by these authors.
During the research’s iterative design process, teachers reported difficulty interpreting the information. Such information tended to be heterogeneous across variables or difficult to evaluate against the reference values. This opinion has been reflected in other studies recognizing the limits on teachers’ ability to make quick decisions due to a lack of real-time data analysis and a delay in accessing critical information (Gkontzis, Kotsiantis, Panagiotakopoulos, & Verykios, 2019). These needs have also been identified by other authors (Mor, Ferguson, & Wasson, 2015), who have stated that the assessment of students’ performance is a tiresome and time-consuming process for teachers. This is why prior training for teachers is necessary to help them interpret the information that learning analytics report on student activity. In order to minimize the level of difficulty, the DIANA 2.0 tool’s design included different data visualization models, from the most basic, based on icons, to the most complex, based on graphs of nodes in the style of social network analysis. Some of the information produced by learning analytics was used by teachers to understand the learning process carried out by their students, which the teachers then used to provide and improve the feedback sent to students, complemented by other qualitative information.
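As an illustration of the most complex of these visualization models, the following is a minimal sketch, assuming the networkx and matplotlib libraries and invented reply data (the paper does not state how DIANA 2.0 renders its node graphs), that draws a reply network in the style of social network analysis:

```python
# Hypothetical sketch of a social-network-analysis style view of a discussion.
# Nodes are students, directed edges point from the reply's author to the
# student being replied to.
import networkx as nx
import matplotlib.pyplot as plt

# Invented reply pairs: (author of the reply, author being replied to).
replies = [("ana", "ben"), ("carl", "ben"), ("ben", "ana"), ("dana", "ben")]

G = nx.DiGraph()
G.add_edges_from(replies)

# Node size grows with replies received, echoing the "popularity" metric.
sizes = [300 + 600 * G.in_degree(n) for n in G.nodes]
nx.draw(G, with_labels=True, node_size=sizes,
        node_color="lightsteelblue", arrows=True)
plt.savefig("reply_network.png", dpi=150)
```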
Another relevant element of this study was that it addressed the way in which teachers access, process and interpret information related to online educational practices. Teachers have to face the challenge of understanding complex phenomena related to learning in educational environments. This is especially important when a multitude of variables and contexts intervene; the challenge lies not so much in collecting the information they need as in analyzing that information to obtain value judgments that ensure correct decision-making. It is here that teachers play a fundamental and essential role, since they decide the action to be undertaken based on their interpretation of the data, no matter how well the data are represented. Tió, Estrada, González, and Rodríguez (2011) considered the contribution of the teacher’s role to expanding the student’s zone of proximal development. In this sense, Gkontzis et al. (2019) assessed student performance using learning analytics and concluded, in line with the trends shown in our research, that the use of data during the teaching process can inform teachers about students at risk, although the authors recognized that advanced prediction is at an early stage.
This research respected the experimental work protocol in full, obtaining favorable impact results by giving visibility to the importance of collaborative learning in university work. In fact, the results obtained in this research indicate that the use of specific learning analytics instruments by teachers, configured through a participatory process and with teachers trained in their use, improved specific training in students’ communicative interaction in asynchronous online discussions. We compared these results with others from studies involving learning analytics tools, for example Lotsari, Verykios, Panagiotakopoulos, and Kalles (2014), who found no clear correlation between students’ participation and their final grade. From our point of view, this lack of correlation was due to the slow process of extracting data from the online discussion and its subsequent analysis using statistical software. The authors themselves recognized this limitation, claiming that real-time analysis, which DIANA 2.0 provides, would have enriched the results. Kagklis, Karatrantou, Tantoula, and Panagiotakopoulos (2015) obtained similar results, for the same reason: the lack of real-time analysis. To support this argument, we found a correlation of 68% between grades and level of participation.
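For reference, a participation–grade correlation of this kind can be computed as a standard Pearson coefficient; the sketch below uses NumPy and invented values purely for illustration, not the study’s data:

```python
# Hedged sketch: the arrays are hypothetical, not the data from this study.
import numpy as np

messages_posted = np.array([3, 8, 12, 5, 20, 9, 15, 2])          # participation level
final_grades    = np.array([6.5, 5.0, 7.8, 7.0, 9.1, 6.0, 7.5, 5.5])  # course grades

# Pearson correlation between participation and final grade.
r = np.corrcoef(messages_posted, final_grades)[0, 1]
print(f"Pearson correlation: {r:.2f}")
```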
Student feedback, along with other qualitative information, is an important element in the application of analytical technology, not as an effective element in itself, but as a mediating tool between what the teacher wants to promote in the pedagogical process and what university students can obtain. In this case, other researchers (Park & Jo, 2015) stated that, despite the absence of a significant impact on learning achievement in their study, the pilots organized in their investigation showed that learning analytics tools had an impact not only on the degree of understanding but also on the students’ perceived change of behavior. Gkontzis et al. (2019) likewise argued that a diversity of indicators, rather than a single one, must be considered when predicting students’ future achievements and improving educational outcomes (Avella et al., 2016). Such tasks have to be carried out by teachers, a conclusion which we have also reached in our research.
Through the analytics and their visual systems, the research revealed a slight improvement in individual performance, reflected in better grades and, in general terms, a reduced dropout rate. However, this result would not have been valued by the participants without an understanding of the importance of collaborative learning. In the field of visual learning analytics, other case studies (León, Cobos, Dickens, White, & Davis, 2016) have reported on the advantages of the availability of analytics for teachers and students and, in terms of usability, that teachers consider this information very useful in real time. This perception was shared by the teachers involved in this paper’s research; they felt comfortable using the learning analytics tool and were satisfied with its interface. Other works (Tió et al., 2011) have demonstrated similar results, and, going into the topic of student satisfaction in greater depth, Park and Jo (2015) found that satisfaction with using learning analytics dashboards (an example of an analytical tool) correlated with the degree of understanding and student change of behavior.
It may seem obvious that those students who posted the most messages in the online discussions also had a higher average number of written words, but what is remarkable is that the “popularity” metric (responses received from the other students) also rose. That is, the average number of words and the total number of messages are metrics that depend on the individual students themselves, since students produce them directly through their own actions, whereas popularity is based on the number of responses their messages receive, something they do not control. In other words, it is not an action that a student fosters him- or herself but one that is fostered by the rest of the participants. One possible explanation is that a student’s messages had a significant impact on the group, either due to the quality of the interventions, their notoriety, or some other factor that generated a high level of interest and many responses. The teacher’s ability to visualize this popularity and give feedback to the student (in the form of a teacher’s comment) reinforced the teacher’s understanding of the importance of peer interactions in a collaborative process.
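The distinction between metrics a student produces directly and the popularity metric produced by peers can be illustrated with a short sketch; the post structure and values below are hypothetical, not DIANA 2.0’s internal representation:

```python
# Hypothetical forum posts: (post_id, author, text, id of the post replied to or None).
from collections import defaultdict

posts = [
    (1, "ana",  "Opening question about the case study", None),
    (2, "ben",  "I think the key issue is the data model", 1),
    (3, "carl", "Agreed, and also the sampling strategy", 2),
    (4, "ana",  "Good point, could you expand on that?", 2),
]

author_of = {pid: author for pid, author, _, _ in posts}
messages = defaultdict(int)    # produced directly by the student
words = defaultdict(int)       # produced directly by the student
popularity = defaultdict(int)  # produced by peers: replies received from others

for pid, author, text, parent in posts:
    messages[author] += 1
    words[author] += len(text.split())
    if parent is not None and author_of[parent] != author:
        popularity[author_of[parent]] += 1

for student in messages:
    avg_words = words[student] / messages[student]
    print(student, messages[student], round(avg_words, 1), popularity[student])
```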