
Comparison of video demonstrations and bedside tutorials for teaching paediatric clinical skills to large groups of medical students in resource-constrained settings

Abstract

Videos are increasingly being used for teaching clinical skills in medical education. However, most reports on the effectiveness and benefits of videos in medical teaching have come from developed countries. Resource constraints in South African academic hospitals, together with increasing student numbers, may place pressure on the standard of clinical teaching. This study investigated the potential for using video demonstrations to replace the bedside teaching of introductory paediatric clinical examination skills to large groups of medical students. Sixty medical students were randomised to an experimental group that watched a video of a paediatric abdominal examination or to a control group that received a bedside tutorial on the same topic. Immediately afterwards, experienced assessors observed and scored the students in a clinical examination. Scores were analysed for non-inferiority of the video group within a 10% margin of the bedside group. Students’ and clinician educators’ perceptions of the two teaching methods, and their views on how video instruction could be integrated into the clinical teaching programme, were also explored; these qualitative data were analysed thematically. Video teaching was non-inferior to bedside teaching within the 10% margin and did not significantly affect the pass/fail or distinction rates. Students and clinician educators suggested that the videos be used for teaching basic concepts, allowing bedside tutorials to focus on applied learning. The findings have important implications for using video demonstrations to supplement the teaching of clinical skills to large groups of medical students in variably resourced settings.

Introduction

The global pressure to increase the quantity, quality, and relevance of medical graduates (Frenk et al., 2010) poses a challenge to maintaining current clinical teaching practices. Increasing student-educator ratios in South African medical schools, a result of national imperatives for more health care graduates (South African National Department of Health, 2010), are adversely affecting clinical teaching. Furthermore, clinician educators’ teaching time is being eroded by increasing patient loads (Coovadia, Jewkes, Barron, Sanders, & McIntyre, 2009; Mayosi et al., 2009; 2014), and the number of educators is unlikely to increase because of “less economic resources available to fund higher education institutions” (Hornsby, Osman, & De Matos-Ala, 2013, p. 10) in low- to middle-income countries. Undergraduate paediatric clinical training is further compromised by the lack of physical space around the bedside in overcrowded wards and the impracticality of allowing large numbers of students to examine an infant or young child (Craze & Hope, 2006). These challenges hamper the development of the essential clinical, problem-solving, and critical thinking skills required of competent medical graduates (Hornsby et al., 2013; Maudsley & Strivens, 2000; McKeachie, 1980).

Clinical skills training involves the acquisition of both technical and non-technical skills. An example of a technical skill is conducting a physical examination, while non-technical skills include communication and cognitive skills such as clinical reasoning and decision-making (Hibbert et al., 2013; Michels, Evans, & Blok, 2012). Video demonstrations may provide a “best practice exemplar” for the initial learning of clinical skills (Hibbert et al., 2013, p. 2). Such an initial demonstration is central to theories of skill acquisition. According to the social cognitive model of sequential skill acquisition (see Fig. 1), observing a skill being demonstrated by someone proficient in that skill is the first of four phases of learning, followed by imitating the behaviour, and eventually leading to the self-regulated performance of the behaviour (Schunk & Zimmerman, 1997; Zimmerman & Kitsantas, 2002). The first two phases are regarded as learning in a social context, which may lead to the self-directed practice of the skill (last two phases; see Fig. 1) (Schunk & Zimmerman, 1997; Zimmerman & Kitsantas, 2002). Observing a demonstration also forms the first step in Peyton’s approach to teaching procedural and physical examination skills, which consists of the following steps (Lake & Hamdorf, 2004; Walker & Peyton, 1998):

  • Demonstration: The tutor demonstrates the skill at a normal pace, without commentary.

  • Deconstruction: The tutor demonstrates each procedural step while describing it.

  • Comprehension: The student provides instructions, which the tutor then demonstrates.

  • Performance: The student performs the skill while describing the steps.

Fig. 1

Social cognitive model of sequential skill acquisition

Demonstrations are believed to be useful for providing an overview of the complete set of skills to be learned, especially when an overview is provided early in the learning process (Kneebone, 2005; Sadideen & Kneebone, 2012; Singley & Anderson, 1989), as evident in the models of how clinical skills are learned.

In addition to the benefits for learning clinical skills, video demonstrations have been shown to help manage the limited time available for clinical teaching (Hoffman & Donaldson, 2004; Hurtubise, Martin, Gilliland, & Mahan, 2013; Jeffries, 2001; Knowles, Kinchington, Erwin, & Peters, 2001; Schwerdtfeger et al., 2014), allow more efficient utilisation of resources (Schwerdtfeger et al., 2014), and improve cost-efficiency (Hibbert et al., 2013; Wouters, Tabbers, & Paas, 2007). These gains in teaching efficiency, resource utilisation, and cost could be particularly advantageous in resource-constrained settings in low- and middle-income countries. However, most reports on the use of videos have come from developed countries, with little evidence on whether videos improve the efficiency of clinical teaching in low- and middle-income countries.

Concerned about the prevailing resource constraints and the increasing student numbers (12–14 students per group compared to 6–8 students per group in earlier years), educators in the Department of Paediatric and Child Health at the Chris Hani Baragwanath Academic Hospital (CHBAH) in Johannesburg, an academic hospital affiliated with the University of the Witwatersrand, recorded videos demonstrating introductory paediatric clinical examination skills. These video demonstrations were intended to replace the bedside teaching of clinical skills. This study investigated whether video demonstrations were at least as good as (that is, not inferior to) the bedside teaching and explored medical students’ and clinician educators’ perceptions of the benefits and limitations of video teaching compared to bedside teaching.

Methods

Study design

This study adopted a pragmatic approach to the selection and combination of data collection methods and analysis strategies to address the research questions (Creswell, 2014; Johnson, Onwuegbuzie, & Turner, 2007). The mixed methods design combined qualitative and quantitative methods to answer different research questions. Table 1 provides an overview of the data collection strategies employed in the study, in chronological order, and describes the sample, purpose, and type of data analysis used for each strategy.

Table 1 The research strategies, samples, and type of analysis used in the study

CHBAH receives a group of approximately 20 medical students in their fifth (penultimate) year of study for 6 weeks of training in paediatric clinical skills (paediatric rotation). There were three such paediatric rotations between February and May 2017, involving a total of 61 students (see Note 1). Each group of students consists of self-selected pairs of clinical partners who would have received similar clinical training up to that point in their studies. To control for skills acquired during their pre-clinical training, consenting students were randomised by splitting the clinical pairs so that one partner was allocated to the video group and the other to the bedside teaching group. All 61 students reporting to CHBAH for paediatric training during this period were eligible to participate in the study; 60 agreed to participate and one declined. Figure 2 provides an overview of the design of the study. Each participating student provided written informed consent. The Human Research Ethics Committee (Medical) of the University of the Witwatersrand approved the study (clearance certificate M160739).

Fig. 2

Study design

Description of video demonstration and bedside teaching interventions

For each of the three rounds of data collection, the video group (n = 10) watched a 20-min video demonstration of a structured abdominal examination conducted on a real patient (as opposed to a simulated one) by an experienced paediatric gastroenterologist. The patient’s caregiver was present during the recording session, which took place in a facility separate from the general ward, without other patients or healthcare professionals present. The infant did not necessarily exhibit any abdominal symptoms, since the teaching session focused on the steps and sequence of the examination rather than on identifying pathology. The protocol for teaching the clinical examination covered the domains of professionalism (for example, greeting the patient), physical examination skills (for example, inspecting the abdomen or examining the genitals), and how well the examination was structured (organisation).

For each round of data collection, the bedside group (n = 10) received a 25- to 30-min tutorial at the bedside on how to conduct a structured abdominal examination, using a real patient. The same paediatric gastroenterologist conducted the three bedside tutorials using the same teaching protocol as for the video demonstration. The bedside tutorial was conducted in a quieter and more spacious facility than the general paediatric ward that is usually used for clinical teaching, to facilitate a better comparison of teaching methods. Similar to the video demonstration, the patients used for the bedside tutorial did not necessarily exhibit any abdominal symptoms.

Randomised non-inferiority trial

The hypothesis for the randomised controlled trial was that video teaching of the paediatric abdominal examination is not inferior to traditional bedside teaching with respect to student performance in a clinical examination.

After the video and bedside teaching sessions, each student performed a 10-min structured clinical examination of an in-hospital patient (Fig. 2). The assessment team consisted of five paediatricians, each with 27–33 years of experience as clinician educators. Four assessors participated in each of the three rounds of data collection, according to availability (Fig. 2). Individual students were allocated at random to one of the four assessors. The assessment was blinded, in that the assessors did not know which students had received video teaching and which had received bedside teaching.

The evaluation instrument assessed three domains (see Additional file 1: Assessment tool for clinical examination), according to the standard method of teaching and assessing the structured abdominal examination at Wits medical school. The main domain, Physical Examination skills, used a checklist of ten items, each scored on a scale of 1 to 5. The total score for the ten items was used to compare the bedside and video groups. The two other domains, Professionalism and Organisation/Efficiency, were each scored on a scale of 1 to 10. The assessors also assigned an Overall Grade category, expressed as a percentage range, to each student for the clinical examination. The assessors were briefed on how to use the evaluation instrument before the clinical examinations.

The required sample size for non-inferiority analysis (Piaggio, Elbourne, Altman, Pocock, & Evans, 2006) was calculated using data from a pilot study conducted in August 2016. Assuming a mean score of 70% in the better group, a non-inferiority margin of 10%, a standard deviation of 10% in both groups, 90% power, and a 95% confidence interval, we used the ssi module (Jones, 2010) for Stata (version 14.0) to calculate that at least 18 students were needed in each group (total = 36). The non-inferiority margin of 10% (a difference of 1 mark between the mean scores of the video and bedside teaching groups) was based on previous class marks for four paediatric teaching blocks for fifth-year students in 2015 and 2016. Given the average mark of 69.5% across those blocks and the pass mark of 60% used for clinical blocks, the assessors deemed a 10% (or 1-mark) difference educationally important (Cook & Hatala, 2015; Greene, Morland, Durkalski, & Frueh, 2008; Tolsgaard & Ringsted, 2014). Half-marks are not allocated in the clinical assessments, so a 1-mark difference was the smallest margin that would affect the number of students who pass or fail.
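As a rough check on this figure (not a reproduction of the Stata ssi module), the standard normal-approximation formula for a non-inferiority comparison of two means gives the same requirement of 18 students per group under the stated assumptions. The Python sketch below is illustrative only.

```python
# Illustrative re-derivation of the sample size under the stated assumptions
# (normal-approximation formula for non-inferiority of two means).
from math import ceil
from scipy.stats import norm

sd = 10.0       # assumed common standard deviation (%)
margin = 10.0   # non-inferiority margin (%)
alpha = 0.05    # one-sided significance level (95% confidence)
power = 0.90

z_alpha = norm.ppf(1 - alpha)   # ~1.645
z_beta = norm.ppf(power)        # ~1.282

n_per_group = 2 * sd**2 * (z_alpha + z_beta)**2 / margin**2
print(ceil(n_per_group))        # 18, i.e. at least 36 students in total
```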

The scores for each of the three domains assessed in the clinical examination were converted to percentages for the analyses. For the Overall Grade category, the midpoint of the percentage range for each grade symbol was used to enable a numerical comparison between the video and bedside groups. The one-sided 95% confidence interval of the difference between the two groups was calculated and compared to the 10% non-inferiority margin decided by the paediatricians (Greene et al., 2008; Tolsgaard & Ringsted, 2014). Linear regression was used to adjust for differences between the groups in age, gender, previous academic performance (based on examination scores obtained from the university’s Unit for Undergraduate Medical Education), and any assessor effects. The robustness of the results was checked using a number of alternative specifications. The overall pass and failure rates for the two groups were compared using the chi-squared test. Finally, non-inferiority was checked using a two-sided confidence interval instead of a one-sided interval (Piaggio et al., 2006).
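For readers who wish to see the core comparison in code, the sketch below illustrates the one-sided non-inferiority check in Python rather than the authors’ Stata analysis. The score vectors are simulated placeholders, not the study data (which are available from the OSF repository listed under Availability of data and materials), and the pooled degrees of freedom are a simplification.

```python
# Minimal sketch of the non-inferiority check: compare the one-sided 95% lower
# confidence bound of the difference in mean percentage scores (video minus
# bedside) against the -10% margin. Scores below are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
video = rng.normal(loc=67, scale=12, size=30)    # hypothetical video-group scores (%)
bedside = rng.normal(loc=70, scale=10, size=30)  # hypothetical bedside-group scores (%)

diff = video.mean() - bedside.mean()
se = np.sqrt(video.var(ddof=1) / video.size + bedside.var(ddof=1) / bedside.size)
df = video.size + bedside.size - 2               # pooled df; Welch's df is an alternative
lower = diff - stats.t.ppf(0.95, df) * se        # one-sided 95% lower confidence bound

margin = -10.0
print(f"difference {diff:.1f}%, lower bound {lower:.1f}%")
print("non-inferior within the 10% margin" if lower > margin else "non-inferiority not shown")
```

The same logic applies to each of the four assessed measures; the regression-adjusted analysis additionally conditions on age, gender, previous academic performance, and assessor.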

Focus groups and semi-structured interviews

For each of the three rounds of data collection, after the clinical examinations were completed, five of the ten students in each of the video and bedside teaching groups were invited to participate in separate focus group discussions for each intervention (see Fig. 2). The focus group discussions were conducted by a research assistant (with a Bachelor of Arts Honours degree) trained in conducting focus group interviews. Table 2 lists the questions asked in each focus group. Students in each group were initially asked similar questions designed to explore their perceptions of the teaching method they had received and to probe for suggested improvements to that method. Different focus questions were then asked of each group, because the bedside groups had not watched the video and the video groups had not experienced a ‘live’ bedside demonstration. The final question for the bedside groups asked whether students felt they would have derived similar benefits from the video demonstration compared to the bedside tutorial they had attended. The video groups were asked two final questions: first, how they felt access to video demonstrations could benefit their learning, and second, what they perceived to be the limitations of video demonstrations for teaching clinical examination skills.

Table 2 Focus group questions

Semi-structured interviews were conducted with the five assessors after all three rounds of data collection had been completed. The assessors were asked about their clinical teaching experience, how their experiences of clinical teaching had changed over the time they had been teaching at CHBAH, and how they felt about the use of video demonstrations for teaching clinical examination skills.

The focus group discussions and interviews were audio-recorded and transcribed verbatim. Thematic analysis (Braun & Clarke, 2013) was used to analyse the transcripts. The patterns identified by open coding in the focus group and interview transcripts were clustered into themes and sub-themes, which were mapped to display the relationships between them (Braun & Clarke, 2013). The thematic maps include frequency counts to show the extent of the themes and sub-themes identified (Fraenkel, Wallen, & Hyun, 2012; Krippendorff, 2013), without implying that “numbers reveal the truth in the data” (Braun & Clarke, 2013, p. 262). MAXQDA 2018 was used to manage the analysis (Creswell, 2014). Multiple coding involving the “cross checking of coding strategies and interpretation of data” (Barbour, 2001, p. 1116) by two researchers on the team was used to obtain different insights and perspectives on the data. Member checking (McMillan & Schumacher, 2010) was used to check the accuracy of the interpretations of the interviews with the interviewed assessors to improve the credibility of the findings.

Results

Profiles of the video and bedside-teaching groups

Table 3 compares the students randomised to the video and bedside-teaching groups with respect to age, gender, and academic performance in 2016. None of the differences was statistically significant. The number of students allocated to each assessor also did not differ significantly between the two groups.

Table 3 The mean characteristics of participants, by group (n = 60)

Comparison of the scores for the students’ clinical examinations, by group

Table 4 reports the mean scores for the assessment of the students’ clinical examination performance, by group. The mean scores for all measures assessed in the clinical examination, except for Professionalism, were lower for the video group, with wider variability.

Table 4 Mean scores for the students’ clinical examinations, by group (n = 60)

Figure 3 shows the non-inferiority analysis. The mean difference and associated confidence interval were within the non-inferiority margin of 10% for all four measures assessed in the students’ clinical examinations (Fig. 3). The difference between the means for the video and bedside groups was largest for the Efficiency domain (−4.33%), followed by the Physical Examination domain (−2.20%) and Professionalism (−1.17%). The difference in the Overall Grade between the video and bedside groups was −2.6% (one-sided 95% CI: −6.2 to −2.6). Adjusting the analysis for age, gender, previous academic performance, and assessors made no difference to the conclusions. Repeating the non-inferiority analysis with a two-sided confidence interval also did not change the overall results and study conclusions.

Fig. 3

Non-inferiority of clinical examination assessment scores between the video and bedside teaching groups. The diamonds represent the mean difference between the groups and the horizontal lines show the one-sided confidence interval for non-inferiority. For all four criteria, the confidence interval is narrow enough to exclude an educationally important effect. The adjusted results show the mean difference adjusted for age, gender, previous academic performance, and evaluator


Lastly, the proportion of fails (< 60%) and distinctions (≥80%), based on the Overall Grade, was not significantly different between the two groups (Table 5).

Table 5 Fails and distinctions, based on Overall Grade, by group (n = 60)

Student perceptions of the video demonstrations compared to bedside teaching

In the focus group discussions, students were asked about the usefulness and limitations of the teaching method they had received. Open coding of the transcripts revealed the features of clinical teaching that students felt were important. Responses from the two groups were compared to identify themes describing the potential of video demonstrations to replace bedside teaching. The two themes identified (see Fig. 4) have been named to reflect the potential for video demonstrations to replace bedside teaching: Videos cannot replace bedside tutorials, and Videos could supplement bedside teaching.

Fig. 4

Map of themes identified from analysing students’ views of video and bedside teaching. The frequency counts represent counts of features. Where the counts of features do not add up to the totals for themes or sub-themes/categories, this is because two features mentioned by one individual in the same comment were counted only once in the relevant sub-theme or theme

Two sub-themes representing categories of video features were identified under the theme Videos cannot replace bedside tutorials. The main category was the lack of interpersonal interactions with tutors, peers, and patients (see Fig. 4). Students also valued learning from the mistakes made by their peers at the bedside (see Fig. 4). The other category under this theme was Videos do not engage students as much as bedside teaching.

Eight features of video demonstrations made up the theme Video demonstrations could supplement bedside teaching (see Fig. 4). The four most frequently mentioned features were that video demonstrations allow flexible access to the demonstration, offer better audio and visual features, tend to be better organised than bedside tutorials and are useful for providing the theory underpinning clinical skills.

Students emphasised the benefit of videos allowing flexible access to clinical demonstrations (see Fig. 4). While students preferred that the videos be used alongside bedside teaching, there was no clear preference as to how the videos should be used. Some students suggested that it could be useful to watch the demonstrations before the tutorial, while others indicated that the videos would be a useful tool for revision. The second most frequently mentioned feature was the superior audio and visual quality of videos (see Fig. 4), often mentioned in the context of the difficulty of seeing and hearing in large groups at the bedside. Students pointed out the benefits of the different camera angles and the ability to zoom in on features in video recordings, which further enhanced the visual aspect.

The third most frequently mentioned feature was that video demonstrations tend to be better organised than bedside tutorials (see Fig. 4). A subtle difference emerged between videos being more ‘systematic’ than bedside teaching and video demonstrations offering ‘standardised’ teaching. ‘Systematic’ refers to teaching that is organised in an orderly manner, while ‘standardised’ suggests that the teaching is uniform across, for example, different teachers. Teaching can be standardised (uniform) without being systematic (logically organised), and vice versa. A video demonstration of a clinical examination is likely to contribute to both systematic and standardised teaching.

Another frequently mentioned feature was that students regarded videos as useful for providing the theory underpinning clinical skills (see Fig. 4). Students distinguished between the roles of the two methods: the video demonstrations were seen as providing the theory, while the bedside tutorials allowed for the application of that theory.

Assessors’ views on the video demonstrations compared to bedside teaching

The five assessors (four males and one female) had been teaching clinical skills at CHBAH for between 16 and 33 years. None of the assessors had been involved in recording the video used in the study or had watched the video.

Figure 5 shows a map of the four themes identified from analysing the assessor interviews. The patterns highlighted in this analysis are those relating to the third sub-objective for the study, which was “To explore clinician educators’ views of using video demonstrations for teaching clinical examination skills”.

Fig. 5

Map of themes identified from analysing assessors’ views. The frequency counts represent counts of features. Where the counts of features do not add up to the totals for themes or sub-themes/categories, this is because two features mentioned by one individual in the same comment were counted only once in the relevant sub-theme or theme

The assessors described six sub-themes or categories of factors that influenced their clinical teaching (see Factors affecting clinical teaching in Fig. 5), with the main factor being the impact of increasing student numbers.

Four categories were identified from assessors’ comments about problems they encountered when teaching larger groups at the bedside (see Problems when teaching large groups in Fig. 5). The assessors’ concerns centred around larger groups making it difficult for teachers to engage with individual students and to pay more attention to weaker or more reticent students.

The assessors suggested five types of interventions that could be implemented to address the problems experienced when teaching larger groups (see Interventions to cope with larger groups in Fig. 5). Video demonstrations were one of the ways assessors suggested of coping with larger groups without increases in staff numbers (see Fig. 5). The assessors regarded the feature that video demonstrations could provide the basic knowledge of the skills to be learned as both a benefit and a limitation (see Fig. 5). The other limitations of video demonstrations focused on the need for students to gain practical experience.

Discussion

To develop innovative solutions to the teaching challenges facing paediatric clinician educators in a low-resourced setting, videos demonstrating introductory paediatric clinical examination skills were recorded. The effectiveness of these videos was evaluated in a randomised trial, which showed that the videos were non-inferior, within a 10% margin, to the bedside demonstration of clinical skills. There were no differences in the pass rates, or in the number of distinctions, between the two groups. The Belgian non-inferiority study by Mpotos et al. (2011) also found video instruction not to be inferior to traditional teaching. In the focus group discussions and interviews, the students and assessors acknowledged the need for an intervention to cope with the teaching of large groups of students around the bedside and described a number of benefits that videos could offer to alleviate the difficulties of teaching large groups. Both students and assessors emphasised that bedside teaching is an essential method for teaching clinical skills: it allows students to examine the child while allowing for real-time interactions between clinician and student. Both groups of stakeholders acknowledged that video demonstrations could be useful for providing the basic knowledge before students attend the bedside tutorials, allowing educators to focus on the nuances of clinical examinations and the teaching of practical skills, and both groups felt that both methods were needed in our setting. Students in other studies have expressed similar views on the advantages of video instruction, including the flexibility of use it offers and the ability to prepare for teaching sessions, and they also preferred video instruction to be used with lecturer demonstrations rather than replacing them (Hibbert et al., 2013; Kelly, Lyng, McGrath, & Cannon, 2009). Blended learning, a combination of face-to-face and online learning, has been shown in other studies to be the preferred method for incorporating videos into medical education (Choules, 2007; Hibbert et al., 2013; Hull, Chaudry, Prasthofer, & Pattison, 2009).

Inadequate educational planning and government spending that does not keep pace with increasing student enrolment are likely to impact negatively on the quality of higher education ("African universities recruit too many students," 2017; Hornsby et al., 2013). More specifically, large class sizes could affect the competency of medical graduates and, ultimately, the quality of healthcare delivered in the country. The impact of class size on meaningful learning depends on the particular “discipline and/or the pedagogical needs of the learning environment” (Hornsby & Osman, 2014, p. 714). The time-consuming and resource-intensive nature of clinical teaching (Bradley & Postlethwaite, 2003) makes it likely that large class sizes will affect the quality of teaching required to produce competent doctors. Additionally, the interpersonal interactions needed during clinical teaching, as pointed out by both educators and students in this study, are also likely to suffer. The social goal of the South African government’s imperative to graduate more doctors was to address the doctor shortage in the country. However, while the need for more doctors is evident, an intervention is needed in educational institutions to ensure that increased student numbers do not negatively affect doctor competency. Hornsby et al. (2013) suggested that, in the absence of funding for additional resources, large classes in higher education provide opportunities for exploring innovative teaching methods. In this case, the use of video demonstrations for teaching clinical skills represents a radical change from the bedside teaching traditionally used at the academic hospitals affiliated with Wits University. It was thus necessary not only to compare the effectiveness of the video demonstrations with bedside teaching but also to explore students’ and educators’ perceptions of how video demonstrations could be used for clinical teaching.

It appears that both staff and students want video demonstrations to supplement rather than replace bedside teaching. Such a supplemental model could fit well with models of how clinical skills are learned. Peyton’s four-step model of learning a skill emphasises the importance of students initially observing a demonstration without being involved, to allow them to focus on learning the steps that make up the modelled or demonstrated behaviour (Walker & Peyton, 1998). In Peyton’s model, students only conduct an examination as the final step in learning a clinical skill. Similarly, in the social cognitive model of sequential skill acquisition, observing a demonstrated behaviour is the first phase, culminating in a self-directed performance of the observed behaviour (Schunk & Zimmerman, 1997; Zimmerman & Kitsantas, 2002). Schunk and Zimmerman (1997) stressed the importance of students acquiring sufficient knowledge of the demonstrated behaviour before moving on to the second phase of learning a skill, that of imitating the observed behaviour. Video demonstrations could help students acquire the basic knowledge that forms the foundation of learning clinical skills, and allow them to do so at their own pace, as many times as they need to, and at times convenient to them.

The potential learning benefits of the supplemental model suggested by the stakeholders do not, however, address the central problem motivating the study: the limited resources available for teaching at CHBAH, specifically the demands on clinicians’ time. Although the reusability of videos could ease clinicians’ workload in the future, the initial costs of recording the videos, in both time and money, are likely to add to clinicians’ current burden. Educational interventions usually focus on educational benefits rather than, for example, the cost implications (Mustafa, 2018). However, costs are especially pertinent in low- and middle-income countries (Mustafa, 2018). The video used in the study was recorded in 2015 at an estimated, ex-post-facto production cost of €1194.70 (ZAR 19 640). This amount includes the cost of the consultant’s time, based on the hourly rate for a consultant in the public sector (14 h x R760), covering pre-shooting preparation and patient selection (6 h) and recording the video, including retakes (8 h); and the cost of a videographer to record and edit the video (18 h x R500/h). In the case of CHBAH, the question is whether video demonstrations represent a cost-effective intervention if they are to supplement rather than replace bedside teaching. Although a formal cost analysis has not been undertaken, the initial investment required to produce the videos is less than the annual cost of clinicians’ time spent teaching students. The case for the cost-effectiveness and scalability of the video demonstrations recorded at CHBAH is strengthened by the fact that the videos have been rolled out at the two other main teaching hospitals associated with Wits University, and that other South African universities have expressed an interest in using them.
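As a simple arithmetic check (illustrative Python, using only the hours and rates quoted above), the component costs sum to the quoted rand total:

```python
# Arithmetic check of the ex-post-facto video production cost quoted above (in ZAR).
consultant_hours = 6 + 8        # pre-shooting preparation/patient selection + recording with retakes
consultant_rate = 760           # public-sector consultant hourly rate (R/h)
videographer_hours = 18         # recording and editing
videographer_rate = 500         # R/h

total_zar = consultant_hours * consultant_rate + videographer_hours * videographer_rate
print(total_zar)                # 19640, quoted as roughly EUR 1194.70
```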

The durability, scalability, and cost-effectiveness of video demonstrations for teaching clinical skills reported in other studies (e.g., Hibbert et al., 2013) warrant further investigation into videos for supplementing bedside teaching in resource-constrained settings, and into the cost-effectiveness of such a supplemental model of video usage. One potential area for research is the effect on learning of including interactivity in the design of the videos, which could foster student engagement. Another is to assess the impact of integrating clinical videos with bedside teaching to optimise clinical training in low-resource settings.

The following limitations of this study are acknowledged. First, the defined non-inferiority margin of 10% may seem quite broad. All domains showed non-inferiority at 10% but would not have been non-inferior at a margin of 5%, and studies with larger sample sizes would be required to establish equivalence for narrower margins. However, the margin used was deemed appropriate by the study clinicians, based on the mean pass mark obtained in fourteen paediatric end-of-block examinations in the 2 years preceding the study. It should also be noted that the actual assessment marks were converted into percentages in this analysis, for convenience, so the 10% margin equates to a 1-mark difference between the video and bedside teaching groups. A non-inferiority margin of less than 1 mark is not realistic, since half-marks are not assigned in these clinical assessments; 1 mark was thus the minimum difference that could be used. Second, although the study was adequately powered, the small sample size may limit its generalisability. However, the marks of the students participating in the study suggest they were fairly typical of other medical students in the university, and the clinical teaching at CHBAH is similar to that of other teaching hospitals in the country. A final limitation is that the separate facility in which the bedside tutorial was held offered better conditions than students would commonly experience in the general paediatric ward. These better conditions may have increased the scores of the bedside teaching group compared to students taught under more typical conditions, but would not influence the conclusions of the non-inferiority analysis.

Conclusion

Based on the finding that video teaching was non-inferior to bedside teaching, and on students’ and teachers’ perceptions that video demonstrations could be used to address the difficulties of teaching large groups of students, the results of the study suggest that video demonstrations of clinical skills may provide a cost-effective and scalable intervention in environments where student numbers are increasing. The findings may have implications for policy decisions on the nature of clinical teaching at CHBAH and other resource-constrained hospitals in South Africa, and possibly in other low- and middle-income countries.

Availability of data and materials

The datasets generated and/or analysed during the current study are available from https://doi.org/10.17605/OSF.IO/R58JS at the repository handle https://osf.io/r58j. Any request for de-identified sample data will be considered by the data access committee on a case-by-case basis.

Notes

  1. Discordant numbers of students result from students repeating a clinical rotation.

Abbreviations

CHBAH: Chris Hani Baragwanath Academic Hospital

CI: Confidence interval

References


Acknowledgements

The authors would like to acknowledge the students who participated in this study, the research intern who conducted the focus group discussions, and the paediatricians who agreed to assist with the study should the need have arisen.

Funding

This work is based on the research supported in part by the National Research Foundation of South Africa for the grant, Unique Grant No. 107106. The grant holder acknowledges that opinions, findings and conclusions or recommendations reported in this paper are those of the author(s), and that the NRF accepts no liability whatsoever in this regard.

Author information


Contributions

There are 12 authors on this paper. AG, SL, ZD, and LGT conceptualised the study. AG, SL, CH, NL, KP, JR, SV, UK, and PV performed the data collection. AG, DB, and LGT analysed the data. AG wrote the first draft of the manuscript. All authors were involved in editing the paper for submission and approved the final version.

Corresponding author

Correspondence to Ann George.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional file

Additional file 1:

Assessment tool for clinical examination. (DOCX 15 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

George, A., Blaauw, D., Green-Thompson, L. et al. Comparison of video demonstrations and bedside tutorials for teaching paediatric clinical skills to large groups of medical students in resource-constrained settings. Int J Educ Technol High Educ 16, 34 (2019). https://doi.org/10.1186/s41239-019-0164-z



  • DOI: https://doi.org/10.1186/s41239-019-0164-z
