The influence of the internet on pedagogical innovation: using Twitter to promote online collaborative learning
© The Author(s) 2016
Received: 30 November 2015
Accepted: 18 February 2016
Published: 8 June 2016
This article analyses a formative peer-assessment practice based on a university teaching-innovation experience. From a review of the literature on feedback for self-regulation, the traits of formative assessment practices are identified, and a task and its assessment criteria are designed consistently with those traits. After the experience was implemented, the results are discussed in terms of students’ involvement; performance both in the activity itself and in the subject as a whole; and motivation and self-perception of learning and of students’ competency development. The results show positive effects on involvement, motivation and perceived learning, but not on performance, suggesting that future research should address the effects of self-regulating feedback on learning as estimated from objective measurements, and should extend the study of the effects of these practices on immediate and future self-regulating capacity.
Feedback, a key component of formative assessment
Providing feedback is one of the most powerful educational strategies connected with student success (Boud, 2000; Nicol, Thomson, & Breslin, 2014). Feedback can be provided by both teachers and students. Although research indicates that teachers’ feedback tends to be more accurate and to provide more information, studies show that peer feedback has unique attributes, such as fostering collaborative learning and increasing students’ self-regulation competencies (Boekaerts, Pintrich, & Zeidner, 2000; Dippold, 2009; Van Gennip, Segers, & Tillema, 2010).
Students who participate in collaborative learning processes co-construct their knowledge through interactions involving the exchange of ideas and opinions, the sharing of relevant information and/or the provision of feedback between peers (Strijbos, Narciss, & Dunnebier, 2010; Ware & O’Dowd, 2008). This type of learning develops students’ communicative competencies and the social awareness needed to engage in the discourse of knowledge building, to negotiate the meanings of ideas and to generate criteria for assessing and resolving different situations (Scardamalia, 2002; Stahl, Koschmann, & Suthers, 2006). Peer feedback, or peer review, is a form of formative assessment in which students provide feedback on one another’s work, including warnings or suggestions for improvement. In this sense, this kind of feedback can benefit the self-regulation of learning (Shute, 2008), since it requires students to engage with the assessment criteria, take ownership of them, and apply and communicate their point of view, whereby a process of meta-cognitive and personal reflection occurs.
This is self-formative assessment: reflecting on one’s own work and knowing, or having an accurate perception of, what is right and what needs to be improved (Boud, Lawson, & Thompson, 2013). It is the ability to criticize how something was done, to recognize mistakes, and to use more of what worked and less of what did not the next time. It is an assessment that allows asking about what has been done (work) and how well it has been done (performance). Self-regulation can be described as a process that helps students structure their learning activities by making the appropriate cognitive, affective and behavioural adjustments (Boekaerts, 1999; Karoly, 1993).
Information and communication technologies in the processes of feedback
In this context of formative assessment, and in particular of peer feedback, the development and implementation of information and communication technologies (ICTs) have generated added value (IEA, 2013). Increased access to technology provides opportunities to develop motivating learning and assessment experiences (Osborne & Dillon, 2007). In this regard, ICT has in recent years been introduced into the classroom to support and improve students’ self-directed and collaborative learning (Dillenbourg & Hong, 2008; Jonassen, Howland, Marra, & Crismond, 2008; Shewbridge, Ikeda, & Schleicher, 2006). It is important to consider approaches to more open and participatory learning, basically by applying and adapting existing technologies and social networks such as weblogs (blogs), wikis or popular social networks like Facebook or Twitter. These technologies have in recent years been characterized by placing the student at the centre of the teaching and learning process and by enhancing students’ competencies (Friesen & Lowe, 2012).
Twitter, a technology within Web 2.0, is considered a microblogging service with social networking features. A microblog is a service that allows users to write brief text updates (140 characters in the case of Twitter) from mobile devices or personal computers to publish them on the Web (Oulasvirta, Lehtonen, Kurvinen, & Raento, 2009). Individual participants in Twitter create their personal and unique networks in which learning occurs (Veletsianos, 2012).
One of the recent uses of this technology relates to the opportunities Web 2.0 offers to enhance interactivity between students in learning environments and to promote conversation between them (Gao, Luo, & Zhang, 2012). The availability of this microblogging platform, as Luo and Gao (2014) note, allows students to become involved immediately, so Twitter can become an ideal environment for enhancing feedback between learners.
Specifically, practices based on the use of Twitter seem popular among Higher Education faculty. In a questionnaire answered by 1,372 Higher Education professionals, more than 35 % reported using this social network (Faculty Focus, 2010).
In this context, and although research on Twitter is at an early stage, a significant number of studies can help us understand the different possible university uses of this kind of platform (Veletsianos, 2012). Ebner, Lienhardt, Rohs, and Meyer (2010), for example, found that students could use Twitter to ask questions, give opinions, exchange ideas, share resources or reflect. Dunlap and Lowenthal (2009), in turn, found that introducing their students to Twitter allowed them to become part of virtual communities, interact with other professionals and gain professional exposure. In any case, students sometimes do not feel comfortable or at ease with Twitter, and they do not seem willing to use such informal tools as the sole teaching tool for learning (Manca & Ranieri, 2013).
The work presented here is part of the project “Design, implementation and assessment of proposals for sustainable feedforward” (reference REDICE2014-966), funded by the Institut de Ciències de l’Educació (ICE) of the Universitat de Barcelona (UB). This study is intended to design and implement feedforward practices in different degrees at the different Spanish universities involved in the project, with the aim of finding out, first, whether these practices can improve performance in the activity assessed under feedforward methods and in the subject as a whole and, secondly, of learning the perception of students and their teachers of this type of practice. This article specifically presents one of these experiences, carried out in the compulsory subject Organization and Management of Educational Institutions in the second year of the degree in Pedagogy at the University of Barcelona (UB).
This subject incorporated the use of Twitter as a learning activity. The aim was for students to analyse the different key elements of the subject, and also to increase performance in the subject, motivation, perceived competency development and the ability to assess their peers. A peer-review process was therefore incorporated whereby students provided regular feedback to their peers, using a form created with Google Forms, about the quality of their tweets and of the resources or materials linked in them.
How have students been involved in the experience?
What is the type of tweets made by students?
What role has feedback played in improving students’ academic performance?
How have students used the information received through the feedback from their peers?
Are students satisfied with this experience which incorporates the use of Twitter and peer review?
Google Drive list of questions administered for peer review
Twitter username of the assessor
Twitter username of the assessed
Has your partner used what you told them in your last feedback?
[Yes; No; Partially; Not applicable]
Number of tweets on the new hashtag
1. What would you say to your partner for him or her to improve his or her tweets?
2. What kind of information does your partner tweet?
[Exposes an idea or opinion; Comments on a news item; Shares a resource; Not applicable]
3. Does your partner include links?
[Yes; No; Not applicable]
4. Evaluate, thinking of the whole set of tweets on this issue, the following items relating to your partner’s contribution:
- Are they related to the content of the subject?
- Are they of interest for the subject?
- Are they of special relevance to the content of the subject?
- Is the information provided academically rigorous?
- Are the tweets written in an academic tone?
- Is the spelling quality adequate?
What did your partner say about your tweets on the last hashtag?
How have you used your partner’s feedback to draw up the following tweets?
The satisfaction survey of students
The satisfaction survey of teachers
The pattern of analysis of feedback
The responses from the peer review Google Drive form
Specifically, this article presents the analysis of the results of the students’ satisfaction surveys and of the responses to the peer-review Google Drive form.
How have students been involved with the experience?
Analysis of the interaction among students in the Twitter experience: number and average per tweet of the Replies, Retweets and Likes given on the different issues of the subject
First of all, the number of Replies to tweets (a possibility Twitter allows) was nil. Secondly, regarding the number of Retweets, that is, the number of times students repeated their peers’ content to share it with their own followers, an increase is observed. They began low, with issues 2 and 3 generating 17 and 5 Retweets respectively, but increased considerably in issues 4 and 5, with 35 and 29 Retweets. It is also important to note that the average number of Retweets per tweet oscillates within a narrow range, from 1 (third issue) to 1.38 (last issue). This means that when students did retweet a given tweet, this happened practically only once. Finally, the number of Likes, another way for students to interact and show interest in their partners’ contributions, was the largest element of the experience. Likes numbered 36 and 21 in the first two issues and rose to 79 and 71 in the last ones. Specifically, the average number of Likes per tweet was between 1.17 and 1.44, somewhat higher than the average for Retweets but equally low.
What is the type of tweets made by students?
What role has feedback played in improving students’ academic performance?
The improvement in students’ performance was addressed by analysing both the scores on the Twitter activity and the final grade for the subject, comparing the latter with that of the previous year.
Moreover, the average of students’ grades at the end of the course was 6.69, against an average of 6.05 in the same subject the previous year, in which the Twitter experience was not carried out. Despite this increase, no clear relationship was found between this activity and the overall grade, since the correlation coefficient is only r = 0.32.
How have students used the information received through the feedback from their peers?
When asked “How have you used the information your partner gave you for the elaboration of your next tweets?”, the responses (which, despite being open, were classified by similarity) indicate that the main change generated by classmates’ comments is that students progressively express their personal opinion in their tweets. The second consequence in terms of frequency (in fact, the most important in the last issue) is the provision of resources accompanying each tweet. Both elements (expressing an opinion and linking tweets to relevant outside resources) are aligned with the criteria given on the first day of the subject regarding the proper use of Twitter.
Are students satisfied with this experience which incorporates the use of Twitter and peer review?
The data received from my partner to assess my tweets was adequate
The timing of the return of peer feedback was adequate
This feedforward experience has helped me to improve my competencies
This feedforward experience has helped me to improve my learning process
The feedback received has been useful to improve future activities
The experience has increased my participation and involvement in the classroom activities
The experience of feedback has improved my motivation towards the subject
The workload has been adequate
The most valued element was the first item (“The data received from my partner to assess my tweets was adequate”), which received an average of 4.94 out of 6 points, while the lowest score was obtained by the item “This feedforward experience has helped me to improve my competencies” (3.5). This calls into question the relevance of this innovation with regard to the ultimate objective of every teaching and learning process: the development of competencies.
The opportunities to provide feedback to students and to sustain additional communication around this feedback are often limited by the difficulties of interaction between teachers and students, as well as among the students themselves. The objective of this experience was to increase these opportunities by providing feedback from different sources and enabling this additional communication through the use of new technologies.
This experience highlights both the strengths and the weaknesses of Twitter’s usefulness. On the one hand, a positive assessment of the technology is obtained as a tool to promote peer review, something that has been little explored in previous studies (Luo & Gao, 2014). However, this Twitter-based innovation achieved neither an improved perception of learning among participants nor, objectively, an improvement in overall performance in the subject. Even so, it did improve students’ satisfaction with the subject.
With the emergence of Web 2.0, and specifically in teacher training, it would be important to increase teachers’ motivation to carry out experiences using Twitter and other microblogging systems (Luo & Gao, 2014), which would allow us to consider how to design and facilitate peer-assessment activities with new technologies.
Moreover, it is important to return to the explicit aim of finding out whether the practice of peer review led to improved performance in the activities under feedforward and in the subject. On the one hand, there was a progressive improvement (though not sustained, as it decreased in the last issue) in carrying out the activity, which students attribute to receiving feedback from their peers, indicating the specific aspects in which they improved their subsequent Twitter tasks. On the other hand, the mark obtained in this activity correlates only weakly with the final grade, and performance in the subject as a whole improved just slightly compared to previous years.
There is a need for a better understanding of how Twitter can ultimately be used in education. The experience presented in this article is similar to that of Lee, Tsai, Chai, and Koh (2014). While students were very productive in using computers as productivity tools (Internet searches, creating PowerPoint slides for a presentation, using a word processor, etc.), their technological competency and the widespread provision of ICT access at school and at home did not lead to a responsible use of technology for learning in general. In our experience, despite the positive valuation of the tool, its use has not resulted in noticeable improvements in the learning process.
Regarding the extent to which students were involved in the experience, the assessment is positive in that, on average, 84.65 % of the students assigned to continuous assessment participated in it. It is important to take into account that this figure is lowered by the decline in participation during the last weeks of the subject; until then, participation usually exceeded 90 %, which is certainly a positive aspect. The same cannot be said of students’ commitment to assessing their peers within the stipulated period: the answering of the Google Drive peer-review form even worsened over time, showing no improvement in responsibility within this practice.
An analysis of how students use Twitter reveals an increasingly complex use, with a single tweet incorporating several elements (opinions, resources, links…), as is apparent from students’ assessments of the type of content of their peers’ tweets. This is corroborated by the students’ own perception: they report having progressively improved in expressing their personal opinions and in providing resources to accompany their tweets. This result suggests that the quality of tweets improved and became better adjusted to the key academic requirements of this kind of experience in the context of Higher Education.
Finally, in assessing the role played by feedback in improving academic performance, which was the main objective of the research, the results vary considerably. On the one hand, the valuation is relatively high and a substantial improvement in the tweets can be seen. The role of feedback proved essential for improving student learning, for the self-regulation of their knowledge and for increasing the acquisition and development of competencies. For these reasons, it is welcome that, as students received their peers’ assessments, they were able to modify and improve their subsequent tasks. Feedback therefore seems to have played a role in improving the activity. However, it does not appear to have contributed to improving students’ achievement: the perceived degree to which the whole experience helped to improve competencies is not high (3.5 out of 6) and, as outlined, neither did the final grades improve substantially. This again calls into question the relevance of this innovation with regard to the ultimate objective of every teaching and learning process: the development of competencies.
This paper received supplementary support from the Department of Teaching and Educational Organisation at the University of Barcelona, which provided writing assistance and proofread the article.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
- Boekaerts, M. (1999). Self-regulated learning: Where we are today. International Journal of Educational Research, 31, 445–457.
- Boekaerts, M., Pintrich, P. R., & Zeidner, M. (2000). Handbook of self-regulation. London: Academic Press.
- Boud, D. (2000). Sustainable assessment: rethinking assessment for the learning society. Studies in Continuing Education, 22(2), 151–167.
- Boud, D., Lawson, R., & Thompson, D. (2013). Does student engagement in self-assessment calibrate their judgment over time? Assessment & Evaluation in Higher Education, 38(8), 941–956.
- Dillenbourg, P., & Hong, F. (2008). The mechanics of CSCL macro scripts. International Journal of Computer-Supported Collaborative Learning, 3, 5–23.
- Dippold, D. (2009). Peer feedback through blogs: Student and teacher perceptions in an advanced German class. ReCALL, 21(1), 18–36.
- Dunlap, J., & Lowenthal, P. (2009). Horton hears a tweet. EDUCAUSE Quarterly, 32, 1–10.
- Ebner, M., Lienhardt, C., Rohs, M., & Meyer, I. (2010). Microblogs in higher education – a chance to facilitate informal and process-oriented learning? Computers and Education, 55, 92–100.
- Faculty Focus (2010). Twitter in Higher Education 2010: Usage Habits and Trends of Today’s College Faculty. Retrieved from https://library.educause.edu/resources/2010/10/twitter-in-higher-education-2010-usage-habits-and-trends-of-todays-college-faculty
- Friesen, N., & Lowe, S. (2012). The questionable promise of social media for education: connective learning and the commercial imperative. Journal of Computer Assisted Learning, 28, 183–194.
- Gao, F., Luo, T., & Zhang, K. (2012). Tweeting for learning: A critical analysis of research on microblogging in education published in 2008–2011. British Journal of Educational Technology, 43(5), 783–801. doi:10.1111/j.1467-8535.2012.01357.x
- IEA. (2013). Press release. Brussels: International Association for the Evaluation of Educational Achievement (IEA).
- Jonassen, D. H., Howland, J., Marra, R., & Crismond, D. (2008). Meaningful learning with technology (3rd ed.). Upper Saddle River, NJ: Pearson.
- Karoly, P. (1993). Mechanisms of self-regulation: A systems view. Annual Review of Psychology, 44, 23–52.
- Lee, K., Tsai, P. S., Chai, C. S., & Koh, J. H. L. (2014). Students’ perceptions of self-directed learning and collaborative learning with and without technology. Journal of Computer Assisted Learning, 30, 425–437.
- Luo, T., & Gao, F. (2014). Enabling Twitter-mediated peer feedback in face-to-face classrooms. AERA: AERA Online Paper Repository.
- Manca, S., & Ranieri, M. (2013). Is it a tool suitable for learning? A critical review of the literature on Facebook as a technology-enhanced learning environment. Journal of Computer Assisted Learning, 29, 487–504.
- Nicol, D., Thomson, A., & Breslin, C. (2014). Rethinking feedback practices in higher education: a peer review perspective. Assessment & Evaluation in Higher Education, 39(1), 102–122.
- Osborne, J., & Dillon, J. (2007). Research on learning in informal contexts: Advancing the field? International Journal of Science Education, 29, 1441–1445.
- Oulasvirta, A., Lehtonen, E., Kurvinen, E., & Raento, M. (2009). Making the ordinary visible in microblogs. Personal and Ubiquitous Computing, 14, 237–249.
- Scardamalia, M. (2002). Collective cognitive responsibility. In B. Smith (Ed.), Liberal education in the knowledge age (pp. 76–98). Chicago: Open Court.
- Shewbridge, S., Ikeda, M., & Schleicher, A. (2006). Are students ready for a technology-rich world?: What PISA studies tell us. Paris: OECD.
- Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153–189.
- Stahl, G., Koschmann, T., & Suthers, D. (2006). Computer-supported collaborative learning. In R. K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (pp. 409–426). Cambridge, UK: Cambridge University Press.
- Strijbos, J. W., Narciss, S., & Dunnebier, K. (2010). Peer feedback content and sender’s competence level in academic writing revision tasks: are they critical for feedback perceptions and efficiency? Learning and Instruction, 20(4), 291–303.
- Van Gennip, N. A. E., Segers, M. S. R., & Tillema, H. H. (2010). Peer assessment as a collaborative learning activity: the role of interpersonal variables and conceptions. Learning and Instruction, 20(4), 280–290.
- Veletsianos, G. (2012). Higher education scholars’ participation and practices on Twitter. Journal of Computer Assisted Learning, 28, 336–349.
- Ware, P. D., & O’Dowd, R. (2008). Peer feedback on language form in telecollaboration. Language, Learning & Technology, 12(1), 43–63.