Introduction
E-learning is an increasingly used teaching modality in medical education. Its value has grown further with the COVID-19 pandemic, which has limited in-person learning for many medical students and residents.[1] In addition, with the growing complexity of neurology and diminishing clinical opportunities for residents resulting from rising resident numbers and shorter lengths of patient hospitalization, e-learning offers a unique opportunity to complement clinical learning.[2] E-learning grants access at the user's convenience, can be updated frequently to reflect current guidelines, and can provide virtual clinical exposure to rare diseases not seen frequently in clinical practice.[3-7] It can incorporate multimedia to accommodate various learning styles and has the potential to provide equivalent learning opportunities for trainees regardless of their location.[8] Limitations of e-learning include the skill and time required of educators to create these tools as well as the cost of design and maintenance.[9] E-learning has been shown to be as effective as, or more effective than, conventional learning in research conducted primarily in surgical specialties.[10-13] However, the effectiveness of e-learning has been shown to vary across medical disciplines and e-learning types and has not been examined in pediatric neurology.[14,15]
Ebrain (ebrain.net) is a not-for-profit web-based training resource and the world's largest in the domain of clinical neuroscience.[16,17] It comprises over 650 short lessons and has been used mainly across Europe since 2011.
Although there has been an expansion in the use of e-learning in medical education, the feasibility and benefits of these tools within the discipline of pediatric neurology are unclear. Our study evaluates the learning outcomes and satisfaction of e-learning compared to conventional review paper learning on four pediatric neurology topics, with the aim of determining the value of pediatric neurology e-learning. Ultimately, our results may help medical educators to tailor the curriculum to learners’ needs.
Methods
Recruitment
Medical students and residents from Canadian universities were invited to participate. All medical students from the University of Ottawa, Queen's University, and Western University were approached about the study via email from their institution and were asked to participate if they had an interest in pursuing neurology, pediatric neurology, or pediatrics. They were invited to participate from June to November 2020. Pediatric, pediatric neurology, and neurology residents in postgraduate years 1-5 from the University of Ottawa and the University of Calgary received email invitations to participate in the study between July 2019 and January 2021. Furthermore, all Canadian residents in these specialties were eligible to participate after informally hearing about the study from peer residents and reaching out to our team. REB approval was obtained at all participating sites (CHEO REB #16/89X). The target number of participants to complete the study was 60, estimated by data simulations to be the number required to detect the hypothesized difference in learning gains between conventional and e-learning with 90% power.
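A simulation-based sample-size estimate of this kind can be sketched as follows. This is not the study's actual code: the effect size, standard deviation, decision threshold, and the simplification to a two-arm comparison (rather than the crossover design used here) are all illustrative assumptions.

```python
# Illustrative sketch of a simulation-based power estimate (not the
# study's actual code): simulate score gains in two hypothetical arms,
# test for a difference, and record how often it is detected.
# Effect size, SD, and critical value below are assumptions.
import math
import random
import statistics


def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(
        va / len(a) + vb / len(b)
    )


def simulated_power(n_per_arm, effect=10.0, sd=15.0, n_sims=2000, t_crit=1.96):
    """Fraction of simulated trials in which the effect is detected."""
    rng = random.Random(1)
    detected = 0
    for _ in range(n_sims):
        paper = [rng.gauss(10.0, sd) for _ in range(n_per_arm)]
        ebrain = [rng.gauss(10.0 + effect, sd) for _ in range(n_per_arm)]
        if abs(welch_t(ebrain, paper)) > t_crit:
            detected += 1
    return detected / n_sims


if __name__ == "__main__":
    # Grow the sample until the simulated power reaches the 90% target
    for n in (10, 20, 30, 40, 50, 60):
        print(n, simulated_power(n))
```

In practice, the target sample size is the smallest n at which the simulated detection rate crosses the desired 90%.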
E-learning Modules
The topics for the learning sessions included pediatric stroke, childhood absence epilepsy (CAE), acute disseminated encephalomyelitis (ADEM), and Duchenne muscular dystrophy (DMD). These topics were chosen as they were considered highly relevant for pediatrics, pediatric neurology, and neurology. We created a four-topic crossover design with participants randomly assigned to two conventional learning tools and two ebrain modules. Conventional learning was in the form of pre-selected review articles; the time to complete each article set was estimated by a medical student to be between 20 and 40 minutes. Expert pediatric neurologists created the four ebrain learning modules, utilizing information from the peer-reviewed review articles. Each module was approximately 20 minutes in length. The ebrain content incorporated multimedia, practice questions, and cases.
Evaluations
Participants received pre-tests via the survey tool REDCap™ for each of the four topics and then completed their respective learning sessions and a survey on their experience.[18] The survey included Likert scale questions on participant engagement, comfort applying the concepts learned, unanswered questions or feedback, willingness to use the tool in the future, and preferred learning method. Participants received a post-test for each topic 1 week after completing the respective learning session.
We developed a bank of 30 multiple-choice case-based questions per learning topic to create pre- and post-tests. Experts in pediatric neurology created the questions, and an expert with knowledge in multiple-choice question development reviewed the questions (HW). The team grouped the questions into pairs that were of similar difficulty and covered comparable topics. We randomized one question from each pair to the pre-test for each participant, with the remaining question delivered in the post-test.
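The pairing-and-randomization scheme described above can be sketched as follows; the question identifiers and function name are hypothetical, and this is only an illustration of the split, not the study's actual code.

```python
# Illustrative sketch of the pre-/post-test split: each matched pair
# contributes one randomly chosen question to the pre-test, and its
# counterpart goes to the post-test. Question IDs are hypothetical.
import random


def split_pairs(question_pairs, rng=None):
    """Randomly assign one question of each pair to the pre-test."""
    rng = rng or random.Random()
    pre_test, post_test = [], []
    for pair in question_pairs:
        pick = rng.randrange(2)           # choose one question of the pair
        pre_test.append(pair[pick])
        post_test.append(pair[1 - pick])  # counterpart goes to post-test
    return pre_test, post_test


pairs = [
    ("stroke_q1a", "stroke_q1b"),
    ("stroke_q2a", "stroke_q2b"),
    ("stroke_q3a", "stroke_q3b"),
]
pre_test, post_test = split_pairs(pairs, random.Random(42))
```

Because each pair is split rather than sampled independently, the pre- and post-tests cover the same topics at matched difficulty while containing no questions in common.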
Data Analysis
The primary outcome variable in this study was the median pre-post change in test score (%). We calculated this across learning formats (ebrain versus review paper) and topics. To test whether there was a statistical difference in performance between learning formats, we constructed a mixed-effects model. This allowed us to control for fixed effects (e.g., pre-test score and the lag time between learning sessions and testing) and random effects (e.g., differences between individuals). Further data exploration compared the learning experience between ebrain and review paper approaches in the Likert scale survey questions. We performed all analyses in the R statistical programming language.[18]
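The structure of such a model (fixed effects plus a random intercept per participant) can be sketched as below. The study's analysis was performed in R; this Python/statsmodels version, with synthetic data and hypothetical column names and coefficients, only illustrates the model specification, not the study's actual analysis.

```python
# Sketch of a mixed-effects model with a random intercept per
# participant. Data, column names, and effect sizes are synthetic
# assumptions for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_participants, n_topics = 20, 4

# Synthetic crossover-style data: each participant sees all four topics,
# alternating between the two learning tools.
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_participants), n_topics),
    "tool": np.tile(["ebrain", "paper"] * 2, n_participants),
    "topic": np.tile(["stroke", "CAE", "ADEM", "DMD"], n_participants),
    "pre_score": rng.uniform(30, 80, n_participants * n_topics),
    "lag_days": rng.integers(5, 15, n_participants * n_topics),
})
# Build a post-test score with an assumed +4% advantage for ebrain
df["post_score"] = (
    20.0
    + 0.2 * df["pre_score"]
    + 4.0 * (df["tool"] == "ebrain")
    + rng.normal(0, 5, len(df))
)

# Fixed effects as listed above; groups= gives the per-participant
# random intercept.
model = smf.mixedlm(
    "post_score ~ pre_score + tool + topic + lag_days",
    df,
    groups=df["participant"],
)
result = model.fit()
print(result.summary())
```

The coefficient on the tool term then estimates the format effect on post-test score after adjusting for pre-test score, topic, lag time, and between-individual variation.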
Results
Demographics
A total of 119 individuals consented to participate in the study. Of these, 53 were medical students and 66 were residents. Among medical student participants, 5 were from Queen's University, 23 from the University of Ottawa, and 25 from Western University. There were 6 participants in their first year of medical school, 20 in their second, 20 in their third, and 7 in their fourth. Among residents, 14 were from the University of Calgary, 43 from the University of Ottawa, and 2 from Queen's University. One participant each came from Dalhousie University, McGill University, McMaster University, the University of Alberta, the University of British Columbia, and the University of Toronto; these residents reached out to our research team directly to participate after learning about the project from their resident colleagues. Forty-one participants were in a pediatrics residency program, 14 in neurology, and 10 in pediatric neurology. Demographics can be found in Table 1.
Pre- and Post-Test Scores
There was statistical evidence of a difference in pre-learning and post-learning test scores for each learning topic (p < 0.05). The median [interquartile range (IQR)] change in score in the pediatric stroke topic was 6.7 (−6.6, 20.0) among review paper learners and 20.0 (6.7, 33.3) among ebrain module learners. There was a median (IQR) change in score of 10.0 (0.0, 20.0) for review paper learners and 13.4 (6.7, 26.6) for ebrain learners for the DMD topic. In the ADEM topic, the median (IQR) change in score among review paper learners was 26.7 (−6.6, 36.7) and was 13.3 (0.0, 23.3) among ebrain learners. The median change in score for CAE was 13.4 (0.0, 21.7) among review paper learners and 13.3 (6.7, 20.0) among ebrain module learners.
A mixed-effects model accounting for the effect of individuals, pre-test score, module topic, learning tool, and time between the end of the learning session and the post-test showed that ebrain users scored 4.21% higher on post-tests than review paper users (p = 0.03). Pre-test score had a strong relationship to post-test score (p = 0.01): a 1% increase in pre-test score was associated with a 0.19% increase in post-test score. Learning topic also showed a statistically significant relationship to post-test score (p = 0.04). Compared with the ADEM module, participants scored 6.99% higher on the pediatric stroke post-test and 5.15% and 1.93% higher on the CAE and DMD post-tests, respectively. There is some evidence that residents performed better than medical students (β = 5.34%; 95% CI: −0.45%, 11.1%), but the confidence interval is wide: residents' post-test scores were an estimated 5.34% higher than medical students', with large uncertainty around this estimate. The time elapsed from completing the learning session to completing the post-test did not have a significant effect on the post-test score (p > 0.05). This mixed-effects model can be seen in Table 2.
Subjective Learning Experience (Likert Scales)
Depending on the module topic, 57-92% (n = 59-66) of survey respondents favored e-learning over review articles (Likert response 4 or 5). Between 84 and 87% of e-learning users agreed that their experience was engaging, whereas 7-39% of review paper users agreed (Likert response 4 or 5). Between 62 and 82% of e-learning users felt comfortable applying the concepts covered in the learning tool, compared with 23-53% of review paper users (Likert response 4 or 5). Among e-learning users, 22-39% had questions on the learning topics that were not answered by the learning material, compared with 13-61% of review paper users (Likert response 4 or 5). Lastly, 83-92% of e-learning users agreed they would use the learning tool in the future to refresh their understanding of these concepts, compared with 44-85% of review paper learners (Likert response 4 or 5). The percent responses to the learning experience questions for each topic can be found in Figure 1.
Discussion
Our results demonstrate that most participants preferred utilizing e-learning. Ebrain users scored higher on post-tests than review paper learners. These findings are consistent with a study by Cook et al., who found that among internal medicine residents from the Mayo School of Graduate Medical Education, there was no difference between web module and paper-based formats in knowledge-test-score change, but residents preferred learning with web-based modules.[19] In our study, the difference in post-test scores between ebrain and review paper learners was small and may not be educationally meaningful. A systematic review of plastic surgery e-learning demonstrated that the majority of participants showed higher satisfaction and knowledge gains with e-learning than conventional learning, with novice learners benefiting more than senior learners.[20] However, in our study's mixed-effects model, there was some evidence that residents had a larger learning gain than medical students. This could be due to the complexity of the topics covered. Further comparison of knowledge acquisition between novice and senior learners in undergraduate and postgraduate medical education could prove beneficial for medical educators and provide an additional factor to consider when implementing e-learning. In addition, further analyzing why users enjoy e-learning more than conventional review papers may improve the medical education curriculum. Our study showed that ebrain participants more commonly found their experience engaging and felt more comfortable applying concepts from the modules than review paper learners, which may have influenced learning preference.
The module topic was found to have a significant effect on post-test score in our study. This suggests that e-learning modules may increase users' knowledge, but the amount of learning depends on the module topic. This result is similar to a systematic review analyzing internet-based learning in health professions education, which found that learning efficacy depended on the nature of the module.[14] Our study reinforces the importance of undergraduate and postgraduate medical education programs dedicating resources and time to creating online learning modules for students and piloting them to ensure educational efficacy. However, current literature demonstrates a lack of consensus on which indicators to use to evaluate the efficacy of these modules in postgraduate medical education and the need for a homogeneous way to evaluate e-learning.[21] Our study also suggests that a learning topic may itself be more suitable for a specific learning modality, which is important to consider when creating these modules. Upon examination of our modules, each was similarly organized into epidemiology/risk factors, pathogenesis and clinical features, diagnostic workup, management/future directions, and key points to consider, with a short test provided to consolidate learning. However, the modules differed slightly in the learning tools incorporated, including visual algorithms in the DMD module, diagnostic imaging in the pediatric stroke, CAE, and ADEM modules, and videos of clinical presentations in the CAE module. The use of these features may have contributed to the inter-topic differences in learning and appreciation. The length of the review papers (ADEM and CAE the shortest and DMD the longest) and the use of visual media in certain papers may also have affected learning and enjoyment.
Among both e-learning and conventional learning users, there was a significant change between pre-learning and post-learning test scores for each learning topic. However, the median change in score for each topic was only approximately a 13% improvement, smaller than anticipated. Upon data review, although participants stated in the module completion surveys that they had read the appropriate review paper or ebrain module, a proportion of participants in both the ebrain and review paper groups scored lower on the post-tests than on the pre-tests. These data suggest that learning modules and review papers may not be the most effective learning method for all residents and their unique learning styles, reinforcing the need to incorporate other methods, such as in-person clinical learning, into medical and residency education. The increasingly popular blended education method, combining online and in-person learning, has been favored among students.[22]
There are limitations to our study. One is the relatively small number of questions in the knowledge tests, chosen to limit the time for study completion given the busy schedules of residents. Another is that only 49 participants (41% of the 119 who consented) completed all modules of the study, slightly under our completion target of 60, which had been estimated by data simulations to be the number required to detect the hypothesized difference in learning gains between conventional and e-learning with 90% power. This limitation was offset by utilizing data from residents who completed some but not all of the learning topics. Residents completed individual learning topics at a lower rate than medical students. In response to participant feedback noting difficulty allotting time to the study because of busy schedules, the study was extended over a period of 19.5 months and additional participants were recruited to increase study completion. Given this limitation in sample size, further subgroup analyses comparing medical students to residents and between specialty programs were not feasible in our study.
Conclusion
This study highlights that, despite no educationally meaningful increase in test scores compared with conventional learning, e-learning is the preferred learning modality for most medical students and residents in pediatric neurology. Learning acquisition varies across module topics. E-learning should be increasingly incorporated into pediatric neurology residency and medical education, given learners' preference for it and its non-inferiority to conventional learning via review papers.
Acknowledgements
We recognize financial support from the University of Ottawa Educational Initiatives in Residency Education in 2016-2017.
Conflict of Interest
The authors declare no conflicts of interest.
Statement of Authorship
BC contributed to the conception, design, and analysis of this study and provided equal authorship of the manuscript. SB, HM, AK, SS, AM, HM, and DP contributed to the creation of the e-learning modules, study design, and critical revision of the manuscript. RW and DR contributed to the study design, data analysis, and critical revision of the study. HW contributed to the study design and critical revision of the manuscript.