
Adapting to a Pandemic: Web-Based Residency Training and Script Concordance Testing in Emergency Medicine During COVID-19

Published online by Cambridge University Press:  29 November 2023

Katarzyna Naylor
Affiliation:
Independent Unit of Emergency Medical Services and Specialist Emergency, Medical University of Lublin, Lublin, Poland
Maja Chrzanowska-Wąsik
Affiliation:
Department of Emergency Medicine, Medical University of Lublin, Lublin, Poland
Patrycja Okońska
Affiliation:
Department of Emergency Medicine, Medical University of Lublin, Lublin, Poland
Tomasz Kucmin
Affiliation:
Department of Didactics and Medical Simulation, Medical University of Lublin, Lublin, Poland
Ahmed M. Al-Wathinani
Affiliation:
Department of Emergency Medical Services, Prince Sultan bin Abdulaziz College for Emergency Medical Services, King Saud University, Saudi Arabia
Krzysztof Goniewicz*
Affiliation:
Department of Security, Polish Air Force University, Dęblin, Poland
* Corresponding author: Krzysztof Goniewicz; Email: [email protected]

Abstract

Objective:

The coronavirus disease (COVID-19) pandemic necessitated alternative methods to ensure the continuity of medical education. Our study explores the efficacy and acceptability of a digital continuous medical education initiative for medical residents during this challenging period.

Methods:

From September to December 2020, 47 out of 60 enrolled trainee doctors participated in this innovative digital Continuing Medical Education (CME) approach. We utilized the Script Concordance Test to bolster clinical reasoning skills. Three simulation scenarios, namely Advanced Trauma Life Support (ATLS), Advanced Life Support (ALS), and European Paediatric Life Support (EPLS), were transformed into interactive online sessions via Zoom™. Participant feedback was also collected through a survey.

Results:

Consistent Script Concordance Testing (SCT) scores among participants indicated the effectiveness of the online training module. Feedback suggested broad acceptance of this novel training approach. However, discrepancies observed between formative SCT scores and summative Multiple-Choice Question (MCQ) assessments highlighted areas for potential refinement.

Conclusions:

Our findings showcase the resilience and adaptability of medical education amidst challenges like the global pandemic. The success of methodologies such as SCT, endorsed by prestigious bodies like the European Resuscitation Council and the American Heart Association, suggests their potential in preparing health care professionals for emergent situations. This research offers valuable insights for shaping future online CME strategies.

Type
Original Research
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of the Society for Disaster Medicine and Public Health

As the World Health Organization declared the coronavirus disease (COVID-19) pandemic in March 2020, the global health care system faced an unprecedented challenge. 1 The widespread impact of the disease and protective measures adopted to control its spread disrupted medical education, necessitating a rapid and substantial shift toward online modalities. This transition from conventional face-to-face education to digital learning was not merely a luxury or a technical accomplishment but rather an imperative to maintain the continuity and quality of medical education during a global health crisis. 2,3

However, emergency medicine, a critical discipline with a significant role during the pandemic, presented unique challenges for online education. It was essential to ensure continuous education for emergency health care workers due to the rapidly evolving understanding of COVID-19, changes in clinical guidelines, and the need for enhanced infection control procedures. Notably, the European Resuscitation Council (ERC) and the American Heart Association (AHA) published guidelines for emergency medical practice during the pandemic, 4,5 and the Centers for Disease Control and Prevention (CDC) released essential resources, such as instructional videos and fact sheets for personal protective equipment usage. 6 These new regulations and recommendations underscored the importance of maintaining up-to-date, relevant knowledge among health care professionals during a time of acute need and rapid change.

Moreover, the ERC, in collaboration with the International Liaison Committee on Resuscitation (ILCOR), released an educational update in April 2020 addressing teaching during the pandemic. 7 These organizations emphasized the necessity of preserving education on acute emergency situations and patient-centered care, particularly in response to cardiac arrest, even under conditions of social distancing and self-isolation. 8

The need for adaptive Continuing Medical Education (CME) training to maintain core clinical competencies, including emergency medicine, during the pandemic has been widely recognized. 9 A Best Evidence Medical Education (BEME) scoping review found 22 manuscripts describing educational interventions in CME in response to the pandemic; however, only 2 were specific to emergency medicine training. 10-12 These studies focused on transitioning the teaching-learning process online and implementing simulation activities for practical sessions, yet a comprehensive program specifically tailored for emergency medicine specialty trainees (STs) remained lacking.

Against this background, we aimed to present our institutional approach to delivering emergency medicine specialty training during the COVID-19 pandemic. We sought to describe and evaluate the effectiveness and acceptance of this new online training approach among specialty trainees in the Lubelskie district, with a focus on the use of Script Concordance Testing (SCT) in this context. We also aimed to explore the potential implications of these findings for post-pandemic emergency medicine education.

Materials and Methods

Study Design and Settings

This research was a prospective cohort study of a novel CME emergency medicine training program initiated in April 2020. The program was delivered to 8 cohorts of medical doctors at the Centre for Continuing Education, Medical University of Lublin, Poland, between September 2020 and March 2021.

Participant Recruitment and Sampling

For this study, we selected a convenience sample, initially comprising 60 medical doctors who were enrolled in 2 instances of emergency medicine training at the Medical University of Lublin (MUL) between September 2020 and March 2021. Regarding their previous education, most of the participants had received formal education in Advanced Trauma Life Support (ATLS), Advanced Life Support (ALS), and European Paediatric Life Support (EPLS). These participants were invited to take part in the research, and their participation was independent of any institutional or instructional obligations. It is crucial to emphasize that no instructors or higher authorities provided informed consent on behalf of the participants, ensuring that the decision to participate rested solely with the individual medical doctors.

The study was integrated into a modular course, which adhered to the guidelines set by the Bill of the Ministry of Health, Poland. This legislative framework delineates postgraduate specialty training for both medical doctors and dentistry doctors and is applied uniformly across all medical specialties, with the singular exception of specialty training in the field of emergency medicine. 13 Despite the initial recruitment of 60 doctors, complete data for analysis were available for only 47.

The first cohort (Cohort I) consisted of medical doctors who enrolled in the CME emergency medicine training in Fall 2020 (November 30-December 11, 2020). The curriculum spanned 40 instructional hours, incorporating 25 hours of online lectures and an additional 15 hours of practical online simulation exercises. These hours were structured into blocks of 5 meetings, each spanning 8 hours (Appendix A).

Ethical Considerations

The research proposal received ethical clearance from the Bioethics Committee at the Medical University of Lublin (decision number: KE-0254/154/2020). We adhered to the ethical principles outlined in the Recommendations from the Association of Internet Researchers (Markham & Buchanan, 2012) during the conduct of the study.

Educational Innovations Introduced for the Online ST in Emergency Medicine Module

Script Concordance Test

The Script Concordance Test is a written questionnaire format adapted from prior research. 14 It evaluates the decision-making capacity of trainees in the context of uncertainty. Each scenario within the SCT comprised 3 sections, each supplemented by a new piece of information that could modify the course of evaluation. Trainees were asked to select how this new information would influence their assessment and actions, using a 3-point Likert scale. 15
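For readers unfamiliar with SCT scoring, the sketch below illustrates the aggregate scoring method described in Lubarsky et al.'s AMEE Guide. 14 The panel responses, function name, and values are hypothetical illustrations, not the study's actual instrument or data.

```python
# Hypothetical sketch of SCT aggregate scoring: an examinee earns credit in
# proportion to how many expert panelists chose the same option, normalized
# by the count of the panel's modal (most popular) option.
from collections import Counter

def sct_item_score(panel_answers, examinee_answer):
    """Return partial credit in [0, 1] for one SCT item."""
    counts = Counter(panel_answers)
    modal_count = max(counts.values())
    return counts.get(examinee_answer, 0) / modal_count

# Example: a 6-member panel answering on the 3-point scale (-1, 0, +1)
panel = [1, 1, 1, 0, 0, -1]
print(sct_item_score(panel, 1))    # 1.0  (modal answer: full credit)
print(sct_item_score(panel, 0))    # 0.67 (minority answer: partial credit)
print(sct_item_score(panel, -1))   # 0.33
```

This partial-credit structure is what allows the SCT to reward defensible minority judgments under uncertainty, rather than a single keyed answer.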

Development of the SCT

The SCT’s development was a result of a collaborative effort. The lead researcher, a seasoned professional with extensive experience in emergency medicine, conceptualized its initial draft. Subsequent iterations of the SCT were meticulously refined based on feedback from diverse stakeholders.

A focus group consisting of 6 seasoned specialists, each recognized as an expert in emergency medicine, critically reviewed the SCT. Their discussions revolved around enhancing its comprehensiveness, rectifying ambiguities, and suggesting pivotal revisions to ensure its relevance and accuracy.

To cater to the learners’ perspective, a panel of 6 final-year medical students was also convened. These students, despite being at the preliminary stages of their medical careers, provided crucial insights. They critiqued the SCT for clarity, relevance, and flow. Their feedback was instrumental in ensuring that the SCT was comprehensible to learners while retaining the depth expected by experts. 15

A meticulous process was adopted to validate the assessment items. The Multiple-Choice Questions (MCQs) were not only curated by a specialized team well versed in the subject matter but also vetted by an external review team to ensure content validity. Following this, a pilot test was carried out with a subset of participants distinct from the main study cohort, aimed at gauging item difficulty and determining discrimination indices. Any items that posed issues during the pilot testing were either adapted or eliminated. This rigorous approach was pivotal in certifying that the MCQs were both valid and dependable for assessing participants’ expertise.

Online Simulation Practice

We transformed the ATLS, ALS, and EPLS scenarios into dynamic online exercises. Delivered via the widely used teleconferencing platform Zoom™ (Zoom Video Communications Inc., San Jose, CA, USA), these simulations were designed to be not only technologically sound but also clinically representative.

Torres et al.’s functional framework 16 served as a cornerstone for our online simulation design. In this innovative setup, a dedicated instructor, equipped with state-of-the-art wireless devices and a mannequin control pad, orchestrated the evolving scenarios. This real-time broadcasting allowed participants from disparate locales to gain access to the patient’s monitor online, promoting a fully immersive experience.

One salient feature we incorporated was the mandate for each participant to assume a leadership role during these sessions. By guiding the ATLS, ALS, or EPLS algorithms’ execution, participants honed their decision-making capabilities, specifically in cardiac arrest scenarios where efficient leadership can be the difference between life and death. The COVID-19 pandemic underscored the undeniable value of adept leadership during medical emergencies, 17-19 reinforcing the urgency to sharpen these competencies. 20

Ensuring the authenticity and interactivity of these tele-simulation sessions was paramount. With the collaboration of our seasoned faculty at the established simulation center, we bridged the virtual gap: technicians, educators, and participants converged on the teleconferencing platform, each operating from a unique venue with individual equipment. This digital collaboration emulated the authentic dynamics of traditional face-to-face simulations, and integrating the Laerdal LLEAP Software (Laerdal Medical, Stavanger, Norway), a staple of conventional simulation scenarios, further bolstered the realism of our sessions. In addition to the online simulations, we utilized Multiple-Choice Questions (MCQs) to assess participants’ knowledge assimilation and comprehension; this form of assessment was instrumental in gauging the effectiveness of our online simulation practices in imparting knowledge.

To uphold the integrity and standardization of our simulations, a rigorous validation process was instituted. Expert faculty reviewed each scenario to ascertain its clinical accuracy and relevance, and the MCQs underwent the same meticulous scrutiny to ensure their relevance, accuracy, and alignment with the objectives of each simulation. Feedback loops were established to continuously refine the simulation dynamics, keeping them both educationally effective and reflective of real-world clinical situations; analysis of MCQ results fed into these loops and was pivotal in understanding the clarity and depth of each scenario. Instructors were provided with standardized guidelines to guarantee a consistent interaction pattern with students during simulations, irrespective of the scenario. Moreover, to measure the reliability and validity of our simulations, we conducted a pilot with a subset of participants and made iterative adjustments based on their feedback.

Our primary outcome measurements were twofold: performance in simulations and MCQ scores. The latter provided quantifiable data on the knowledge gained from each session.

As the session unfolded, the instructor mirrored the directives from the lead participant, fostering a vibrant and instructive experience. After each simulation session, participants completed a set of MCQs derived from the presented scenarios, which not only tested their understanding but also provided immediate feedback on areas of strength and areas needing further revision. Participants were granted a comprehensive view of the ongoing scenario, thanks to the simultaneous sharing of the patient’s monitor on Zoom and the direct feed from the simulation room (Figure 1).

Figure 1. Screenshot from a simulation online session.

MCQs, being a primary assessment tool, were subjected to our quality control protocols. Regular reviews ensured the questions remained updated, relevant, and free of ambiguity.

Quality control was paramount. Each simulation was recorded and reviewed by an independent expert who was not involved in the course delivery to ensure the fidelity and quality of the simulations. Any discrepancies or deviations from the standard scenario script were noted, and the involved instructor was given feedback to maintain standardization in subsequent simulations.

Comparison with Previous Training

To evaluate the efficacy of our novel training modules, we compared the summative evaluations of our participants with those of a control group from 2019. This control group underwent traditional face-to-face training, encountering scenarios similar to those used with our study group. The primary differentiator between the 2 was the delivery mode: the 2019 group received their training in person, whereas our study group was trained online.

Both the current study and the 2019 training aimed at analogous objectives and employed equivalent assessment tools. To ascertain the effectiveness of our novel online method, a non-inferiority analysis was conducted, aiming to determine whether this approach was at least on par with the traditional in-person training.

Statistical Significance and Sample Size

The sample size was calculated based on an anticipated effect size of 0.5, a power of 0.8, and an alpha of 0.05. This yielded a required sample size of 47 participants. Our initial recruitment of 60 doctors provided a buffer for potential dropouts.
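As an illustration, an a priori calculation with these parameters can be reproduced in Python. Note that the required n depends on the test family assumed, which is not specified above, so the figures printed below are illustrative of the mechanics rather than a reconstruction of the authors' computation.

```python
# Sketch of a power analysis with effect size 0.5, power 0.8, alpha 0.05.
# Different test families yield different required sample sizes.
from statsmodels.stats.power import TTestPower, TTestIndPower

n_paired = TTestPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
n_indep = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"one-sample/paired t-test:  n ≈ {n_paired:.0f}")           # ~34
print(f"two independent groups:    n ≈ {n_indep:.0f} per group")  # ~64
```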

Data Collection

Figure 2 offers a breakdown of our data collection regimen. We enlisted participants from 2 cohorts, all of whom were either undergoing or had completed their specialized training in emergency medicine, a pivotal facet of their broader ST program at the Medical University of Lublin.

Figure 2. The stages of the data collection process.

The crux of this emergency medicine training was to fortify and elevate participants’ foundational knowledge and hands-on experience within the critical ambit of emergency medicine. We tailored our curriculum with a pronounced focus on building competencies for managing cardiac arrest and other unforeseen clinical exigencies. Ensuring congruence with national and international benchmarks, the curriculum was meticulously aligned with the stipulations set out in the National Bill of the Ministry of Health for specialized medical and dental training, 21 the recommendations of the International Trauma Life Support, 22 and the guidelines propounded by the European Resuscitation Council. 4

To quantify and evaluate participants’ grasp of the academic content and their adeptness in its application, we collated Summative MCQ results from both participant clusters. These results served as a tangible metric, reflecting their holistic understanding and retention of the course material.

Beyond academic performance, we were keen to gauge the acceptability of the SCT approach. To this end, we used an anonymous feedback mechanism: each participant received a digital survey upon completing the course. To ensure data integrity and maintain the anonymity of responses, LimeSurvey, a reputable online survey platform, was chosen for its robust data protection protocols and user-friendly interface.

Drawing from existing literature that has underscored the utility of online questionnaires in medical research, 23 we found it apt to deploy this modality for our data-gathering exercise. To facilitate ease of access, we disseminated the survey link via email immediately after the summative MCQ assessment. Recognizing the sporadic nature of response rates, we also dispatched a gentle reminder email 3 days after the initial communication. This 2-tier approach was aimed at augmenting response rates, ensuring a comprehensive perspective on the SCT approach’s acceptability.

Data Analysis

The initial coding of SCT data was executed in Excel (Microsoft, 2020), where responses were recoded numerically: we assigned values of 1, 2, and 3 to the scale responses of -1, 0, and 1 to facilitate further examination. Our database and statistical computations were conducted using STATISTICA 10 (StatSoft Poland). Categorical variables were stated as numbers and percentages, while the distributions of quantitative variables were detailed using the mean (M), standard deviation (SD), median (Me), minimum (Min), and maximum (Max). We employed the Shapiro–Wilk test to assess conformity with a normal distribution, setting a significance level of P < 0.05. 24
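A minimal sketch of this descriptive and normality screen, assuming the recoded scores sit in plain arrays (the values below are placeholders, not study data):

```python
# Descriptive statistics and Shapiro-Wilk normality test for recoded scores.
import numpy as np
from scipy import stats

sct = np.array([52.1, 60.4, 48.9, 55.0, 63.2, 58.7])   # placeholder values
mcq = np.array([78.0, 81.5, 72.3, 85.1, 79.9, 74.6])

for name, scores in [("SCT", sct), ("MCQ", mcq)]:
    w, p = stats.shapiro(scores)
    print(f"{name}: M={scores.mean():.1f}, SD={scores.std(ddof=1):.1f}, "
          f"Me={np.median(scores):.1f}, Min={scores.min()}, Max={scores.max()}, "
          f"Shapiro-Wilk W={w:.3f}, P={p:.4f}")
# P < 0.05 indicates departure from normality, prompting non-parametric tests.
```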

To examine the concurrent validity between MCQ and SCT scores, we generated Bland–Altman scatter plots. 25 Initial steps included calculating the mean for both MCQ and SCT scores from repeated measurements. Subsequently, we determined the mean difference and plotted the 95% limits of agreement (LOA ± 1.96 SD) to compare the 2 methods using a scatter plot analysis. 25
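The Bland–Altman construction reduces to a few lines: plot each participant's MCQ minus SCT difference against the pair's mean, then draw the mean difference and the 95% limits of agreement at mean ± 1.96 SD. A sketch under those assumptions:

```python
# Bland-Altman agreement plot: mean difference and 95% limits of agreement.
import numpy as np
import matplotlib.pyplot as plt

def bland_altman(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    pair_means = (a + b) / 2
    diffs = a - b
    md, sd = diffs.mean(), diffs.std(ddof=1)
    loa = (md - 1.96 * sd, md + 1.96 * sd)        # 95% limits of agreement
    plt.scatter(pair_means, diffs)
    for y in (md, *loa):
        plt.axhline(y, linestyle="--")
    plt.xlabel("Mean of MCQ and SCT score (%)")
    plt.ylabel("MCQ - SCT difference (%)")
    return md, loa

# Usage: md, loa = bland_altman(mcq_scores, sct_scores); plt.show()
```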

We also utilized the non-parametric Spearman rank-order correlation coefficient to examine the relationship between MCQ and SCT scores, enabling comparison of our findings with previous studies. 26,27 The sign test and the χ2 test were used for further data analysis, as appropriate. We considered P values less than 0.05 to be significant.
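All 3 tests named here are available directly in SciPy; the sketch below uses placeholder arrays and expresses the sign test as a binomial test on the non-zero paired differences:

```python
# Spearman correlation, sign test, and chi-squared test on placeholder data.
import numpy as np
from scipy import stats

mcq = np.array([80.0, 75.5, 90.0, 68.0, 85.0, 77.0])   # placeholder scores
sct = np.array([55.0, 60.5, 70.0, 52.0, 66.0, 58.0])

rs, p_rs = stats.spearmanr(mcq, sct)                    # rank-order correlation
diffs = mcq - sct
n_pos, n_nonzero = int((diffs > 0).sum()), int((diffs != 0).sum())
p_sign = stats.binomtest(n_pos, n_nonzero, 0.5).pvalue  # sign test
table = np.array([[12, 15], [9, 11]])                   # illustrative 2x2 counts
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
print(f"Rs={rs:.2f} (P={p_rs:.3f}); sign test P={p_sign:.3f}; chi2 P={p_chi:.3f}")
```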

Reliability analysis was conducted using intraclass correlation coefficients (ICCs) to compare SCT data from Cohort I and Cohort II students. Per Koo and Li, 28 an ICC close to 0 indicates no agreement, whereas an ICC close to 1 demonstrates agreement. The significance of this agreement was also calculated, with P < 0.05 set as the threshold.
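One way to obtain an ICC with a significance test in Python is pingouin's intraclass_corr. Treating the 2 cohorts as "raters" scoring the shared SCT items is our assumption about how the comparison maps onto an ICC design, not code from the study:

```python
# ICC across cohorts, with cohorts as "raters" of shared SCT items (assumed design).
import pandas as pd
import pingouin as pg

df = pd.DataFrame({
    "item":   [1, 1, 2, 2, 3, 3, 4, 4],
    "cohort": ["I", "II"] * 4,
    "score":  [0.80, 0.75, 0.60, 0.65, 0.90, 0.85, 0.70, 0.72],  # placeholders
})
icc = pg.intraclass_corr(data=df, targets="item", raters="cohort", ratings="score")
print(icc[["Type", "ICC", "pval"]])  # ICC near 1 with P < 0.05 indicates agreement
```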

To align with other studies investigating SCT reliability, we used Cronbach’s α coefficient to assess SCT reliability. The coefficient of variation was also calculated.
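Both measures are single calls once item-level responses are tabulated; the item matrix below is randomly generated filler, so its α is meaningless except as a demonstration of the mechanics:

```python
# Cronbach's alpha (internal consistency) and coefficient of variation (SD/mean).
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 4, size=(47, 10)))  # 47 examinees x 10 items
alpha, ci = pg.cronbach_alpha(data=items)

totals = items.sum(axis=1)
cv = totals.std(ddof=1) / totals.mean() * 100            # CV as a percentage
print(f"alpha = {alpha:.2f} (95% CI {ci}); CV = {cv:.1f}%")
```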

In order to assess the acceptability of the SCT method, we analyzed the responses from the online questionnaire, focusing on questions related to course assessment, and generated descriptive statistics.

Results

Sample Characteristics

From an initial pool of 60 physicians, comprehensive data on SCTs, MCQ exam results, and post-training evaluation questionnaires were successfully gathered from 47 physicians, comprising Cohort I (n = 27) and Cohort II (n = 20). These 47 data sets were included in the final analysis. Figure 3 illustrates the recruitment process and provides the final count of participants in each cohort included in the analysis.

Figure 3. Recruitment process and participant count.

Table 1 provides a synopsis of the cohort characteristics. As evident in Table 1, there were no substantial differences between the cohorts. The χ2 test was employed to compare the individual characteristics of the 2 cohorts, affirming the similarity between the groups under study.

Table 1. Characteristics of the participants

* M, mean; SD, standard deviation

Data Normality

Before proceeding with the analysis, we checked the assumption of normality using the Shapiro–Wilk test, which is appropriate since neither cohort exceeded 100 participants. 24 Both MCQ and SCT data deviated significantly from the normal distribution (P < 0.0001); therefore, we undertook a non-parametric analysis, reporting medians and interquartile ranges.

Concurrent Validity

To investigate the relationship between the 2 assessment points during the specialty training course (the novel SCT and the MCQ scores for Cohorts I and II), we calculated the non-parametric Spearman rank-order correlation coefficient. The results indicated no statistically significant correlation between the SCT and MCQ results (Rs = 0.3; P = 0.8).

We employed Bland–Altman plots to further examine the relationship between the novel formative assessment, SCT, and the summative MCQ scores for Cohorts I and II. Figure 4 presents the plotted results of the MCQ and SCT.

Figure 4. Bland–Altman scatter plot presenting the difference between SCT and MCQ results.

*MCQ: Multiple Choice Questions, SCT: Script Concordance Test

The analysis suggests an approximate mean difference of 24% between the outcomes of the MCQ and SCT methods, indicating that the classical MCQ examination results were, on average, 24% higher than the SCT results (see Table 2). The plot also shows broad limits of agreement (LOA ± 1.96 SD: −13.4% to 41.7%), and the points are visibly scattered, suggesting no substantial evidence of concurrent agreement between the students’ SCT and MCQ scores.

Table 2. Coefficient of variation statistics for the MCQ and SCT results

*CV, coefficient of variation; M, mean; SD, standard deviation

Reliability Analysis

We also calculated the coefficient of variation to inspect the reliability of the introduced assessment points: the formative SCT and the summative MCQ. The novel SCT method shows a notably higher coefficient of variation, whereas the MCQ exhibits only minor variability in exam results. Details are provided in Table 2.

We also employed Cronbach’s α coefficient to assess the internal consistency, or reliability, of the SCT. Cronbach’s α resulted in a value of 0.67, indicating a satisfactory level of internal consistency for the SCT implemented. 26,27,29

Acceptability

We collected data from the online questionnaire from all participants in Cohorts I and II at the end of the course; the medians (Me) and interquartile ranges (IQR) are reported below.

The study subjects responded to 8 statements in the questionnaire relating to the SCT on a scale from 0 (strongly disagree) to 5 (strongly agree). Our analysis focused on the overall feedback on the ST course and the open responses. The general assessment of the course content was positive, with the median scores for all questions reaching 5 (see Table 3).

Table 3. The general assessment of the course content by its participants

*M, Mean; SD, standard deviation; Me, median; Min–Max, minimum and maximum; Q1–Q3, upper and lower quartile

The majority of participants provided open comments about the ST online course, and most confirmed it was an acceptable format:

Q1: Considering the challenging times of the course (epidemic period—online course) and the practical nature of the subjects addressed, I am impressed by how well this course turned out. Thank you.

Q2: A good, factual course; I am glad that despite being online, the training was also conducted successfully.

Q3: The course format was very accessible. Interaction with the lecturers was possible, and the content was presented engagingly—one of the better courses I participated in during specialization training.

Comparison to Traditional Training

To assess the efficacy of our online training method, we contrasted our cohort’s performance with that of a historical group from 2019 that received conventional face-to-face training. The MCQ scores from the historical cohort were somewhat lower than those of our current online group. Feedback from our present cohort was largely positive toward the online platform, with a significant majority finding it user-friendly and efficient. Technical complications were rare, with only a small fraction of participants encountering occasional connectivity problems.

Discussion

During the COVID-19 pandemic, emergency medicine health care providers became a linchpin of the health care system. As the first line of contact for patients with distressing symptoms, they were instrumental in initial assessments, diagnosis, and immediate care. 30 This accentuated the need for expedient dissemination of pandemic-specific protocols, leading to the rapid transformation of the traditional CME program into an online format. 9

Script Concordance Testing, first described in the medical education literature in 2000, offers a novel approach to assessing and fostering clinical reasoning skills. Its methodology revolves around clinical scenarios developed by expert panels, providing a robust testing platform. 14 The urgency of decision making in emergency health care settings underscores the importance of sound clinical reasoning, a skill that SCT effectively measures. 31 The efficacy of this tool across various postgraduate training programs has been well-documented. 29,32-35 Coupled with its growing role in online learning, SCT appears to be a valuable instrument for enhancing participant engagement, promoting the acquisition and application of knowledge, and facilitating progression in Miller’s pyramid from “knows” to “knows how.” 36 In our study, the theoretical virtual lectures in our CME program were complemented by SCT cases, sparking productive discussions among participants moderated by a tutor.

Digital transformation of CME programs, as necessitated by the pandemic, unlocked unique advantages. These include scheduling flexibility, cost and time efficiencies, and expanded participant reach. 37 Furthermore, such platforms facilitate global collaboration, offering diverse learning experiences. The SCT, with its versatile clinical reasoning assessment, could be further enhanced by integrating technologies like Artificial Intelligence for real-time feedback and personalized learning. 38 However, these benefits hinge on continuous enhancement of digital literacy among health care professionals.

Incorporating SCT into online CME programs presents an adaptable model for future medical education, particularly when traditional in-person training is not viable. 39 Given the positive response and flexibility, its use could be expanded to various specializations. Such a transformation necessitates careful planning and design, with collaboration among education experts, health care professionals, and technologists to ensure the relevance, engagement, and efficacy of course content. 40 It underscores the importance of continuous professional development programs in enhancing digital skills for effective engagement with these platforms.

Deschênes et al. reported a similarly successful implementation of SCT in an online learning context; their study found that participants employed both cognitive and metacognitive learning strategies when addressing SCT tasks. 41 The SCT in our study was aimed at initiating a self-regulatory process concerning the knowledge acquired during lectures. 42 A noticeable discrepancy was observed between the formative SCT results and the summative MCQ results, with MCQ scores, on average, 24% higher than the SCT scores. We attribute this difference primarily to the novelty of the SCT methodology, as this was our participants’ first encounter with this type of test. 43

The strategic integration of the SCT into our online program leveraged the virtual platform’s strengths. Theoretical lectures were followed by SCT-based case discussions, allowing participants to immediately apply their theoretical knowledge. This structure provided the dual benefit of knowledge application and a more engaging learning experience, mimicking some advantages of face-to-face training. Over 85% of participants agreed that these discussion sessions enhanced their understanding and provided a practical perspective often missed in traditional lectures. The feedback underscores the potential of online training, especially when integrated with tools like SCT, to rival, if not surpass, the efficacy of traditional approaches.

The difficulty of first-time SCT usage is corroborated by Bursztejn et al. 48 Their findings guided our decision to adopt a formative approach with the SCT, aligning with the recommendations of Lubarsky et al. 14

Our investigation into the acceptability of the course revealed overall positive attitudes, signifying the successful reception of the online CME format. This echoes the sentiments expressed by Kanneganti et al., 9 emphasizing the transformative potential of technology in medical education. The online format not only kept health care professionals abreast of the evolving pandemic dynamics but also served as a platform for sharing experiences. 40 Particularly appreciated were the hands-on virtual sessions that allowed participants to lead emergency teams (ALS, EPLS, and ITLS), reflecting the methodological basis proposed by Torres et al. 16

While online CME programs offer the advantage of reaching a wider audience, the potential effects of the absence of physical interaction on learning outcomes and learner satisfaction need to be studied in detail. 44 In particular, the impact on the development of practical skills, traditionally learned through hands-on practice, is a vital area for future exploration. 45,46 It would be interesting to investigate whether a blended learning approach, combining online theoretical sessions with in-person practical sessions, could provide a more optimal training experience. 47 Such blended learning is already implemented in post-pandemic ERC and AHA courses. 48 Moreover, given the novelty and complex nature of SCT, supplemental resources or preparatory sessions to familiarize learners with the SCT format may enhance its effectiveness as an assessment tool.

Consistent with our approach, Yang et al. 50 used existing medical simulation centers (faculty, staff, and resources) to deliver simulation training via Zoom, limited to pediatric emergencies. The authors received positive comments from the participants, confirming acceptance of this form of training. Although the majority of our course participants expressed satisfaction with the online course format, the open feedback provided valuable insights for future improvements. Some participants noted challenges related to delivering the practical components of the course online, given the limitations imposed by the pandemic. Therefore, in post-pandemic times, when it is safe to return to in-person sessions, a hybrid model for the CME program that blends online theoretical instruction with in-person practical skills training could be considered. This approach would leverage the convenience and reach of online instruction while still allowing participants to gain valuable hands-on experience in a controlled environment.

As we navigate unprecedented challenges in medical education, adopting novel teaching and evaluation methods like SCT in online platforms could pave the way for more flexible, adaptable, and effective training programs.

Limitations

Our study, conceived as a pilot assessment, elucidated the complexities surrounding the transition to online CME programs in the specialized field of emergency medicine. While the results are enlightening, it is important to recognize certain limitations.

The sample was predominantly drawn from 2 iterations of emergency medicine training at the Medical University of Lublin. This could potentially restrict the generalizability of findings to wider contexts, as diverse institutions maintain distinct teaching methodologies.

Transitioning to an online environment amidst the COVID-19 pandemic was a significant challenge. This sudden shift may have placed certain participants, especially those less familiar with digital platforms, at a disadvantage due to varied technical or adaptive challenges.

Moreover, variables such as participants’ background knowledge in emergency medicine, tech-fluency, or individual circumstances that might have influenced online learning engagement were not explored in depth.

The use of SCT, albeit innovative, was unfamiliar to participants, which might have influenced their performance and perception.

Our research methods favored quantitative data, thereby sidelining rich qualitative insights that might have been garnered from open-ended questions or interviews.

Lastly, our sampling method makes it challenging to establish whether the sample truly mirrors the broader physician community at MUL.

To mitigate these limitations, we recommend future research to:

  • Engage a broader and more diverse participant pool to enhance generalizability.

  • Offer orientation sessions for participants to familiarize themselves with digital platforms and assessment tools like SCT.

  • Incorporate both quantitative and qualitative data collection methods for a holistic understanding.

  • Continuously adapt based on real-time feedback from participants.

Conclusions

The COVID-19 pandemic undeniably reshaped the landscape of medical education. As institutions globally were compelled to adapt, this study delved into the nuances of online CME programs, with a spotlight on the Script Concordance Test.

A resonating takeaway is the general receptivity toward online adaptations of emergency medicine training, attesting to both the resilience of the medical community and the potential of online platforms. The differential outcomes between SCT and MCQ highlight the learning curve associated with novel assessment tools like SCT. However, the consistent internal metrics of SCT underscore its viability as a measure of clinical reasoning.

Participants’ feedback illuminates the value of interactivity in e-learning, accentuating the need for dynamic modules to bolster engagement.

Our results underscore the potential and challenges of online CMEs, serving as an initial guidepost. Future research endeavors should expand their reach, both in terms of sample size and demographic diversity. Addressing challenges head-on, adapting methodologies based on feedback, and anticipating future shifts in the educational landscape will be pivotal.

In conclusion, beyond the realm of pedagogy, the ethical considerations broached here advocate for a holistic approach in medical education, underlining the intertwined nature of psychological well-being, effective learning, and preparedness for potential crises.

Data availability statement

The data sets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Acknowledgments

The authors would like to extend their appreciation to King Saud University for funding this work through the Researchers Supporting Project number (RSPD2023R649), King Saud University, Riyadh, Saudi Arabia.

Author contribution

Conceptualization, KN; methodology, KN, TK; validation, KN; formal analysis, KN, TK, PM-O, and MC-W; investigation, KN; data curation, KN; writing—original draft preparation, KN, TK, PM-O, MC-W, AMA-W, and KG; writing—review and editing, AMA-W and KG; visualization, KN; supervision, KN and KG; project administration, KN; all authors have read and agreed to the published version of the manuscript.

Funding statement

This research received no external funding.

Competing interests

The authors declare no conflicts of interest.

Appendix A. Emergency medicine course schedule, 18.01-22.01.2021

References

1. World Health Organization. Key Planning Recommendations for Mass Gatherings in the Context of COVID-19: Interim Guidance (No. WHO/2019-nCoV/POE Mass Gathering/2020.2). Published 2020. Accessed August 20, 2023. https://www.who.int/publications-detail/key-planning-recommendations-for-mass-gatherings-in-the-context-of-the-current-covid-19-outbreak
2. Naylor K, Torres K. Approaches to stimulate clinical reasoning in continuing medical education during the COVID-19 pandemic. Kardiol Pol. 2020;78(7-8):770-772. https://doi.org/10.33963/KP.15419
3. Schulte TL, Gröning T, Ramsauer B, et al. Impact of COVID-19 on continuing medical education—results of an online survey among users of a non-profit multi-specialty live online education platform. Front Med. 2021;8:773806. https://www.frontiersin.org/articles/10.3389/fmed.2021.773806/full
4. European Resuscitation Council. COVID-19 Guidelines. Published 2020. Accessed June 6, 2020. https://www.erc.edu/assets/documents/ERC_covid19_spreads.pdf
5. Edelson DP, Sasson C, Chan PS, et al. Interim guidance for Basic and Advanced Life Support in adults, children, and neonates with suspected or confirmed COVID-19. Circulation. 2020;141(25). https://www.ahajournals.org/doi/10.1161/CIRCULATIONAHA.120.047463
6. Centers for Disease Control and Prevention. COVID-19 Videos. Published 2021. Accessed June 20, 2021. https://www.cdc.gov/coronavirus/2019-ncov/communication/videos.html?Sort=Date%3A%3Adesc
7. Nolan JP, Monsieurs KG, Bossaert L, et al. European Resuscitation Council COVID-19 guidelines executive summary. Resuscitation. Published 2020. Accessed June 8, 2020. https://linkinghub.elsevier.com/retrieve/pii/S030095722030232X
8. Scquizzato T, Olasveengen TM, Ristagno G, Semeraro F. The other side of novel coronavirus outbreak: fear of performing cardiopulmonary resuscitation. Resuscitation. 2020;150:92-93. https://www.sciencedirect.com/science/article/pii/S0300957220301337
9. Kanneganti A, Sia C-H, Ashokka B, Ooi SBS. Continuing medical education during a pandemic: an academic institution’s experience. Postgrad Med J. 2020;96(1137):384-386. http://www.ncbi.nlm.nih.gov/pubmed/32404498
10. Daniel M, Gordon M, Patricio M, et al. An update on developments in medical education in response to the COVID-19 pandemic: a BEME scoping review: BEME Guide No. 64. Med Teach. 2021;43(3):253-271. https://www.tandfonline.com/doi/full/10.1080/0142159X.2020.1864310
11. Chandra S, Laoteppitaks C, Mingioni N, Papanagnou D. Zooming-out COVID-19: virtual clinical experiences in an emergency medicine clerkship. Med Educ. 2020;54(12):1182-1183. http://www.ncbi.nlm.nih.gov/pubmed/32502282
12. Dennis B, Highet A, Kendrick D, et al. Knowing your team: rapid assessment of residents and fellows for effective horizontal care delivery in emergency events. J Grad Med Educ. 2020;12(3):272-279. http://www.ncbi.nlm.nih.gov/pubmed/32595843
13. Ministry of Health. The bill concerning the specialities for medical doctors and dentistry doctors. Warsaw: The Polish Ministry of Health; 2013:1-10.
14. Lubarsky S, Dory V, Duggan P, et al. Script concordance testing: from theory to practice: AMEE Guide No. 75. Med Teach. 2013;35(3):184-193. http://www.ncbi.nlm.nih.gov/pubmed/23360487
15. Fournier JP, Demeester A, Charlin B. Script concordance tests: guidelines for construction. BMC Med Inform Decis Mak. 2008;8:18. http://www.ncbi.nlm.nih.gov/pubmed/18460199
16. Torres A, Domańska-Glonek E, Dzikowski W, et al. Transition to online is possible: solution for simulation-based teaching during the COVID-19 pandemic. Med Educ. 2020;54(9):858-859. https://onlinelibrary.wiley.com/doi/10.1111/medu.14245
17. Fernandez Castelao E, Boos M, Ringer C, et al. Effect of CRM team leader training on team performance and leadership behavior in simulated cardiac arrest scenarios: a prospective, randomized, controlled study. BMC Med Educ. 2015;15(1):116. https://bmcmededuc.biomedcentral.com/articles/10.1186/s12909-015-0389-z
18. Saiboon IM, Apoo FN, Jamal SM, et al. Improving the position of resuscitation team leader with simulation (IMPORTS); a pilot cross-sectional randomized intervention study. Medicine (Baltimore). 2019;98(49):e18201. http://www.ncbi.nlm.nih.gov/pubmed/31804343
19. Bennett CE. Not who, but rather how: the ideal resuscitation team leader. Mayo Clin Proc Innov Qual Outcomes. 2021;5(5):817-819. http://www.ncbi.nlm.nih.gov/pubmed/34458679
20. Kinney B, Slama R. Rapid outdoor non-compression intubation (RONCI) of cardiac arrests to mitigate COVID-19 exposure to emergency department staff. Am J Emerg Med. 2020;38(12):2760.e1-2760.e3.
21. Ministry of Health. [Bill on the specialities for medical doctors and dentistry doctors]. Warsaw: The Polish Ministry of Health; 2021.
22. Campbell J. International trauma life support for emergency care providers. 9th ed. Pearson; 2021. https://www.itrauma.org/product/itls-for-emergency-care-providers-9th-edition/
23. Buchanan EA, Hvizdak EE. Online survey tools: ethical and methodological concerns of human research ethics committees. J Empir Res Hum Res Ethics. 2009;4(2):37-48. http://www.ncbi.nlm.nih.gov/pubmed/19480590
24. Ghasemi A, Zahediasl S. Normality tests for statistical analysis: a guide for non-statisticians. Int J Endocrinol Metab. 2012;10(2):486-489. http://www.ncbi.nlm.nih.gov/pubmed/23843808
25. Bland JM, Altman DG. Statistical methods for assessing agreement between two methods of clinical measurement. Int J Nurs Stud. 2010;47(8):931-936. https://www.sciencedirect.com/science/article/abs/pii/S0020748909003204
26. Goos M, Schubach F, Seifert G, Boeker M. Validation of undergraduate medical student script concordance test (SCT) scores on the clinical assessment of the acute abdomen. BMC Surg. 2016;16(1):57. http://www.ncbi.nlm.nih.gov/pubmed/27535826
27. Duggan P, Charlin B. Summative assessment of 5th year medical students’ clinical reasoning by script concordance test: requirements and challenges. BMC Med Educ. 2012;12(1):29. https://bmcmededuc.biomedcentral.com/articles/10.1186/1472-6920-12-29
28. Koo TK, Li MY. A guideline of selecting and reporting intraclass correlation coefficients for reliability research. J Chiropr Med. 2016;15(2):155-163. http://www.ncbi.nlm.nih.gov/pubmed/27330520
29. Boulouffe C, Doucet B, Muschart X, et al. Assessing clinical reasoning using a script concordance test with electrocardiogram in an emergency medicine clerkship rotation. Emerg Med J. 2014;31(4):313-316. http://www.ncbi.nlm.nih.gov/pubmed/23539495
30. Christopher TA, Christopher AN. Emergency medicine and COVID-19: now and next year. Emerg Crit Care Med. 2021;1(1):14-19. https://journals.lww.com/10.1097/EC9.0000000000000010
31. Andersson U, Maurin Söderholm H, Wireklint Sundström B, et al. Clinical reasoning in the emergency medical services: an integrative review. Scand J Trauma Resusc Emerg Med. 2019;27(1):76. https://sjtrem.biomedcentral.com/articles/10.1186/s13049-019-0646-y
32. Subra J, Chicoulaa B, Stillmunkès A, et al. Reliability and validity of the script concordance test for postgraduate students of general practice. Eur J Gen Pract. 2017;23(1):209-214. https://www.tandfonline.com/doi/full/10.1080/13814788.2017.1358709
33. Drolet P. Assessing clinical reasoning in anesthesiology: making the case for the Script Concordance Test. Anaesth Crit Care Pain Med. 2015;34(1):5-7. https://linkinghub.elsevier.com/retrieve/pii/S2352556815000090
34. Lubarsky S, Chalk C, Kazitani D, et al. The Script Concordance Test: a new tool assessing clinical judgement in neurology. Can J Neurol Sci. 2009;36(3):326-331. http://www.ncbi.nlm.nih.gov/pubmed/19534333
35. Tan K, Tan NCK, Kandiah N, et al. Validating a script concordance test for assessing neurological localization and emergencies. Eur J Neurol. 2014;21(11):1419-1422. http://www.ncbi.nlm.nih.gov/pubmed/24484361
36. Deschênes M-F, Charlin B, Phan V, et al. Educators and practitioners’ perspectives in the development of a learning by concordance tool for medical clerkship in the context of the COVID pandemic. Can Med Educ J. 2021;12(6):43-54. http://www.ncbi.nlm.nih.gov/pubmed/35003430
37. George PP, Papachristou N, Belisario JM, et al. Online eLearning for undergraduates in health professions: a systematic review of the impact on knowledge, skills, attitudes and satisfaction. J Glob Health. 2014;4(1):010406. doi:10.7189/jogh.04.010406
38. Roslan NS, Yusoff MS. Script concordance test. In: Written Assessment in Medical Education. Springer International Publishing; 2023:101-109.
39. Brentnall J, Thackray D, Judd B. Evaluating the clinical reasoning of student health professionals in placement and simulation settings: a systematic review. Int J Environ Res Public Health. 2022;19(2):936.
40. van der Vleuten CP, Driessen EW. What would happen to education if we take education evidence seriously? Perspect Med Educ. 2014;3:222-232.
41. Van Alten DC, Phielix C, Janssen J, Kester L. Effects of flipping the classroom on learning outcomes and satisfaction: a meta-analysis. Educ Res Rev. 2019;28:100281.
42. Thorne CJ, Kimani PK, Hampshire S, et al. The nationwide impact of COVID-19 on life support courses. A retrospective evaluation by Resuscitation Council UK. Resusc Plus. 2023;13:100366. https://doi.org/10.1016/j.resplu.2023.100366
43. McCutcheon K, Lohan M, Traynor M, Martin D. A systematic review evaluating the impact of online or blended learning vs. face-to-face learning of clinical skills in undergraduate nurse education. J Adv Nurs. 2015;71(2):255-270.
44. Deschênes M-F, Goudreau J, Fernandez N. Learning strategies used by undergraduate nursing students in the context of a digital educational strategy based on script concordance: a descriptive study. Nurse Educ Today. 2020;95:104607. http://www.ncbi.nlm.nih.gov/pubmed/33045676
45. Goniewicz K, Burkle FM. Redefining global disaster management strategies: lessons from COVID-19 and the call for united action. Disaster Med Public Health Prep. 2023;17:e450. doi:10.1017/dmp.2023.111
46. Goniewicz M, Khorram-Manesh A, Włoszczak-Szubzda A, et al. Influence of experience, tenure, and organisational preparedness on nurses’ readiness in responding to disasters: an exploration during the COVID-19 pandemic. J Glob Health. 2023;13:06034. doi:10.7189/jogh.13.06034
47. Wan S. Using the script concordance test to assess clinical reasoning skills in undergraduate and postgraduate medicine. Hong Kong Med J. Published online August 28, 2015. http://www.hkmj.org/earlyrelease/hkmj154572.htm
48. Bursztejn A-C, Cuny J-F, Adam J-L, et al. Usefulness of the script concordance test in dermatology. J Eur Acad Dermatol Venereol. 2011;25:1471-1475. https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1468-3083.2011.04008.x
49. Rambaldini G, Wilson K, Rath D, et al. The impact of severe acute respiratory syndrome on medical house staff: a qualitative study. J Gen Intern Med. 2005;20(5):381-385. http://www.ncbi.nlm.nih.gov/pubmed/15963157
50. Yang T, Buck S, Evans L, Auerbach M. A telesimulation elective to provide medical students with pediatric patient care experiences during the COVID pandemic. Pediatr Emerg Care. 2021;37(2):119-122. http://www.ncbi.nlm.nih.gov/pubmed/33181792