Terrorist attacks, floods, bushfires, and even pandemics have occurred with increasing frequency over the past century. Reference Castoldi, Greco and Carlucci1,Reference Bird, Braunold and Dryburgh-Jones2 In health care, these events can be grouped under the term major incident (MI). An MI can be defined as “an incident or event where the location, number, severity or type of live casualties, requires extraordinary resources” beyond the normal “resources of the emergency and health care services’ ability to manage.” Reference Bird, Braunold and Dryburgh-Jones2–4
Over the past decades, preparation for MIs has become a focus of concern for health-care systems. During such events, hospitals must “adapt to exceptional situations, and all activities must be coordinated to cope with the unavoidable chaos… Everything is different from routine, and responders need to be coordinated by people accustomed to these dynamics.” Reference Castoldi, Greco and Carlucci1
These events must be analyzed from a systems perspective to appreciate the complexity involved. A system can be defined as “a group of interacting, interrelated and interdependent components that form a complex and unified whole.” 5 Systems thinking provides a set of tools to describe and analyze these networks. It is particularly useful in addressing complex problems that cannot be solved by any 1 stakeholder. It focuses on organizational learning and adaptive management, and is a vital tool in addressing complex public health issues, such as MIs. 5
To reduce the chaos of these complex events, most Western health-care systems have developed a Major Incident Response Plan (MIRP). 3,6 Generally, MIRPs are rarely “stress tested” and are often not known by most staff. Reference Tallach, Schyma and Robinson7,Reference Mawhinney, Roscoe and Stannard8 Practically and ethically, it is only possible to test MIRPs by means of simulation. Thus, the methods available to create high-level scientific evidence are very limited. Reference Nilsson, Vikström and Rüter9 In an MI simulation, the participating system “simulates the influx of a large number of patients” and the system responds to this stress. Reference Verheul, Dückers and Visser10 Simulations vary in fidelity and scale. Reference Klima, Seiler and Peterson11,Reference Tochkin, Tan and Nolan12 Ideally, simulations should be evaluated, and learnings fed back into the involved system in a Plan-Do-Study-Act cycle. Reference Verheul, Dückers and Visser10,Reference Tochkin, Tan and Nolan12
Anecdotally, MI simulations are thought to help improve health-care system preparedness, Reference Tobert, von Keudell and Rodriguez13,Reference Albert and Training14 although this is difficult to evaluate objectively. Reference Legemaate, Burkle and Bierens15 The majority of MI simulation research focuses on emergency department (ED) triage or prehospital care. Reference McGlynn, Claudius and Kaji16–Reference Imamedjian, Maghraby and Homier21 However, analyzing MI response from the perspective of a single department does not reflect the impact of these events on the hospital system as a whole. For example, after the 2005 London Bombings, the Royal London Hospital stood down from the formal declaration of an MI 5 h after the bombings started and reopened for normal services. However, at the time of reopening, “theatres were operating to full capacity and the intensive care unit had not received the patients it had already accepted from the MI.” Reference Johnson and Cosgrove22 Published expert opinion after this event identified that:
“Such actions have the potential to further overload pressured systems. Thus, the ongoing care of the patients admitted from the incident should form part of a major incident plan as the impact of their admission and treatment is beyond a period of a few hours.” Reference Johnson and Cosgrove22
Thus, to determine how an MI may impact the hospital health-care system, wider whole hospital simulations must be performed. Locally, there are few published Australian data on hospital disaster preparedness. Reference Corrigan and Samrasinghe23 Therefore, the aim of this scoping review of the international literature was to determine if whole hospital-based simulation improves hospital response capability to prepare for and manage major incidents, from an Australian health-care system perspective. A systems perspective was used in the analysis.
Methods
Search Strategy
A systematic-style scoping review was undertaken in August 2022, according to the recommendations of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and the Joanna Briggs Institute (JBI) methodology, with the aim of tabulating all relevant literature. 24,25 The initial research question was reviewed against the population/problem, concept, and context (PCC) and FINER frameworks. 25,Reference Fandino26 This research aimed to determine if whole hospital-based simulation improves hospital response capability to prepare for and manage major incidents, from an Australian perspective. A systematic search was then undertaken, using 2 databases: PubMed and CINAHL. An attempt was made to include the ERIC database; however, no appropriate results were returned despite numerous searches.
As per the JBI methodology, an initial limited search was undertaken in each database to identify appropriate key and index terms. A second formal search was then performed, and these results were included. Slightly different search terms were used between databases, due to the different tools offered by each.
The search terms are provided in Table 1, and the eligibility criteria can be observed in Table 2.
Note: Refer to Appendix 1 for full Boolean search string.
The inclusion and exclusion criteria were predefined before beginning the scoping review (Table 2). Included articles must have evaluated the implications of the simulation on the health-care system (ie, not a pure feasibility study), and evaluation data must have been included. A broad scope of publication dates was included as these events are rare, and contemporary data were assumed to be minimal. Articles limited to single departments were excluded.
The initial PubMed search returned 171 articles, and the CINAHL search returned 122 articles. These articles were combined, and titles and abstracts were screened against the eligibility criteria (refer to the PRISMA flow diagram, Figure 1). After title and abstract screening, 54 relevant articles were identified. Following this, 15 duplicates were identified and removed; thus, 39 articles proceeded to full-text screening.
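The record flow described above can be cross-checked with simple arithmetic. The following sketch uses only the counts reported in this review; it is illustrative and not part of the review protocol:

```python
# Screening counts as reported in this review's PRISMA flow.
pubmed_hits = 171
cinahl_hits = 122
combined = pubmed_hits + cinahl_hits  # records pooled before screening

after_title_abstract = 54  # retained after title/abstract screening
duplicates_removed = 15

# Articles proceeding to full-text screening.
full_text = after_title_abstract - duplicates_removed
print(combined, full_text)  # 293 39
```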
Prehospital-only or emergency department-only simulations accounted for a significant proportion of the articles returned by the search. However, these were not sufficient to answer the research question and were excluded. Simulations based purely on mathematical and computational modeling were also excluded. This is justified by a 2008 study, which demonstrated marked differences in patient benchmarks between computer simulation and live exercises. Reference Franc-Law, Bullard and Corte28 Three papers were excluded as they were set in Saudi Arabia, which was assessed as too dissimilar to the Australian population and health-care system. Reference Bajow, Alkhalil and Maghraby29–Reference Bin Shalhoub, Khan and Alaska31 A further 3 studies were excluded as English translations were not available. Reference Kippnich, Kippnich and Markus32–Reference Wolf, Partenheimer and Voigt34 Thus, after full-text screening, 11 relevant articles were retained. Reference lists from the included articles were snowballed to identify further relevant papers; however, no new articles were identified.
Quality Assessment
All included articles were assessed for quality against the appropriate CASP checklist. 35 Of note, most articles were found to be of low evidence strength, likely due to ethical and procedural difficulties in this topic.
However, 2 studies were excluded owing to further quality concerns. A 2014 United States article was excluded due to a very significant risk of selection bias and low strength of evidence: self-reported perception of knowledge improvement was assessed only by a postcourse questionnaire, which only 20 participants completed, despite a whole hospital simulation being conducted at 3 Los Angeles hospitals with staff from all 3 hospitals participating. Reference Burke, Kim and Bachman36 A 2018 Dutch study was excluded because its primary outcome was not considered valid by the authors of this review, and there were significant sources of bias. The original Dutch authors retrospectively evaluated 32 MI simulation reports from Dutch hospitals and identified the difference in the number of items of improvement identified in different reports. Measuring the number of areas of improvement identified, with no actual evaluation of these areas, is not a valid outcome measure; the study was thus excluded as it lacks internal validity. Reference Verheul, Dückers and Visser10 Please refer to Appendix 2 for further details.
Data Extraction and Synthesis
Author 1 of this review independently reviewed the relevant articles identified from the search strategy, described above. As per the JBI protocol for scoping reviews, 25 data were extracted from each article under key characteristics and main conceptual categories.
Results
Study Characteristics
After the scoping systematic literature search, and the application of the inclusion and exclusion criteria listed in Table 2, a total of 11 relevant articles were identified, as can be seen in Table 3. Although the date range for inclusion was set as the past 20 y, the majority of articles (n = 10; 91%) were published in the past 12 y. Only 1 included article was based in Australia. Of the other articles, 4 were based in Sweden, 3 in the United States of America (USA), 2 in Italy, and 1 in England. The type and size of simulation used in the articles varied greatly, from tabletop exercises to multijurisdictional simulations. Where described, all simulations appeared to have involved a man-made MI.
Of the included articles, 8 used a prospective observational design and 2 used a quasi-experimental design with pre- and postsimulation evaluation. All 11 articles examined mixed populations, including both adult and pediatric patients. Assessed against the National Health and Medical Research Council (NHMRC) Evidence Hierarchy, 10 articles were found to be level 4 evidence with a high chance of bias. Reference Paul, Shekelle and Maglione37 One study, a prospective cohort design that examined a purely pediatric cohort, was found to be level 3 evidence. Reference Bird, Braunold and Dryburgh-Jones2 Overall, there was a significant paucity of high-level data.
Across the 11 included articles, there was a significant amount of heterogeneity in study designs, outcome measures, and evaluation techniques. No 2 articles used the same evaluation technique or outcome measures, making direct comparison difficult. In addition, a mixture of qualitative and quantitative measures were used across articles.
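The study-characteristic tallies above (countries, study designs, and NHMRC evidence levels) each account for all 11 included articles. A minimal cross-check, with the figures copied from the text (no new data):

```python
from collections import Counter

# Figures as reported in this review.
countries = Counter({"Sweden": 4, "USA": 3, "Italy": 2,
                     "Australia": 1, "England": 1})
designs = Counter({"prospective observational": 8,
                   "quasi-experimental": 2,
                   "prospective cohort": 1})
evidence = Counter({"NHMRC level 4": 10, "NHMRC level 3": 1})

# Each tally sums to the 11 included articles.
for tally in (countries, designs, evidence):
    assert sum(tally.values()) == 11
```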
Common Themes Identified
The aim of this scoping review was to determine if whole hospital-based simulation improved hospital response capability to prepare for and manage MIs, from an Australian health-care system perspective. From a single-site outlook, the 2020 Italian article provides the best example. Reference Castoldi, Greco and Carlucci1 Over a 2-y period, 7 whole hospital simulations were held using a preestablished course to train staff on the implementation of the hospital's MI plan. Overall, the authors found it to be an efficient way to train hospital staff in MI management, although the article was assessed as representing a low level of evidence.
This is supported by the other articles. In general, participants in the simulations self-reported improvement or increased understanding. Reference Castoldi, Greco and Carlucci1,Reference Tallach, Schyma and Robinson7,Reference Bartley, Stella and Walsh38 Of interest, in a 2022 English article describing a whole hospital simulation with more than 700 staff participants, further exercises were requested by the participants, who found that “the simulations mimicked real responses and that exercising as a whole system was beneficial.” Reference Tallach, Schyma and Robinson7 The only Australian study located in the literature that used a whole hospital simulation found that participation in MI simulations improved factual knowledge among participants. Reference Bartley, Stella and Walsh38 Benefits of MI simulation reported in the included articles have been summarized in Box 1.
- Improved understanding of roles in an MI Reference Castoldi, Greco and Carlucci1,Reference Tallach, Schyma and Robinson7,Reference Harris, Bell and Rollor39
- Improved understanding of MIRP Reference Castoldi, Greco and Carlucci1,Reference Tallach, Schyma and Robinson7
- Familiarity with paradigm shift of managing resources to maximize survival Reference Castoldi, Greco and Carlucci1,Reference Tallach, Schyma and Robinson7,Reference Harris, Bell and Rollor39,Reference Davidson, Magalini and Brattekås40
- Identification of latent errors and systems safety issues Reference Tallach, Schyma and Robinson7,Reference Klima, Seiler and Peterson11,Reference Davidson, Magalini and Brattekås40,Reference Khorram-Manesh, Lönroth and Rotter41
- Identification of areas of improvement Reference Tallach, Schyma and Robinson7,Reference Klima, Seiler and Peterson11,Reference Davidson, Magalini and Brattekås40–Reference Davids, Case and Hornung42
- Testing surge capacity from a resource perspective Reference Klima, Seiler and Peterson11,Reference Davidson, Magalini and Brattekås40–Reference Davids, Case and Hornung42
- Testing clinical tools for MI Reference Bird, Braunold and Dryburgh-Jones2
Some articles evaluated an entire region's response to an MI by means of simulation. For example, the 2012 USA prospective observational study completed a full-scale regional exercise that included 17 participating hospitals. All 17 hospitals considered the simulation exercise outcomes across the whole hospital. This massive exercise was used to evaluate the region's response and identified key areas that required improvement. Similar areas of improvement were identified across the other included articles; these have been summarized in Box 2.
- Communication Reference Tallach, Schyma and Robinson7,Reference Klima, Seiler and Peterson11,Reference Davidson, Magalini and Brattekås40,Reference Khorram-Manesh, Lönroth and Rotter41
- Lack of working knowledge of MIRP Reference Tallach, Schyma and Robinson7,Reference Klima, Seiler and Peterson11,Reference Harris, Bell and Rollor39–Reference Khorram-Manesh, Lönroth and Rotter41
- Staffing and medical resources Reference Tallach, Schyma and Robinson7,Reference Klima, Seiler and Peterson11,Reference Davidson, Magalini and Brattekås40,Reference Khorram-Manesh, Lönroth and Rotter41
- Command structure Reference Klima, Seiler and Peterson11
- Lack of compatibility between prehospital and hospital teams, or between departments Reference Khorram-Manesh, Lönroth and Rotter41
- Improved security during events Reference Harris, Bell and Rollor39
- Engagement with community partners and first responders Reference Harris, Bell and Rollor39,Reference Grant and Secreti43
- Documentation Reference Tallach, Schyma and Robinson7
- Media strategy Reference Nilsson, Vikström and Rüter9
Some articles identified unique points, through more novel study designs. Refer to Appendix 3 for further information.
Discussion
Improving MI preparedness and management is a topic of significant public health concern. However, there are few published data evaluating management in real-world events. Some recommendations have been published after specific events, but these are examples of expert opinion only. Reference Tobert, von Keudell and Rodriguez13,Reference Albert and Training14,Reference Yanagawa, Ishikawa and Takeuchi45–47
Simulation has long been thought to be an effective tool to assist this preparation, although this is difficult to evaluate objectively. Reference Tobert, von Keudell and Rodriguez13–Reference Legemaate, Burkle and Bierens15 Unfortunately, similar to previous publications, Reference Hsu, Jenckes and Catlett48 this scoping review has also demonstrated a paucity of strong data. Studies were generally of either quasi-experimental or prospective observational design. Although they contribute preliminary insights, these designs lack randomization, offer limited control of confounding variables, and have no control group. This weakens the scientific strength of the evidence, and it must all be interpreted with caution.
In general, retrospective self-evaluation demonstrated improved MI management following simulation and increased understanding of the MIRP. Reference Castoldi, Greco and Carlucci1,Reference Tallach, Schyma and Robinson7,Reference Bartley, Stella and Walsh38 Participants in a 2022 study stated that “the simulations mimicked real responses and that exercising as a whole system was beneficial.” Reference Tallach, Schyma and Robinson7 Thus simulations seem to improve staff confidence, which is important and beneficial. While confidence is not a substitute for capacity, “individual, leader, and team confidence play essential roles in achieving success and the absence of confidence has been connected with failure.” Reference Owens and Keller49 Simulations appeared to be useful tools for identifying areas of improvement, as can be seen in Box 2. While these studies were highly heterogeneous, similar themes of improvement were found, suggesting potential generalizability.
Simulations of varying fidelity were performed. Due to common deficiencies across the region, the 2012 USA study found that “tabletop exercises are inadequate to expose operational and logistic gaps in disaster response. Full scale regional exercises should routinely be performed to adequately prepare for catastrophic events.” Reference Klima, Seiler and Peterson11 From a systems perspective, it would be ideal to regularly run large-scale exercises to truly stress the networks involved. However, in practice these exercises are expensive and consume significant time and resources. Reference Tochkin, Tan and Nolan12 Other studies used lower-fidelity techniques as they believed “the resource investment and expense of high-fidelity simulation was not justified.” Reference Tallach, Schyma and Robinson7 At this stage, there is not enough evidence to support 1 approach over the other. However, regardless of fidelity level, all included studies found some benefit or identified areas of improvement.
As identified in the 2010 Swedish study, “monitoring health-care quality may be difficult without the use of clinical indicators.” Reference Nilsson, Vikström and Rüter9 This is further emphasized by the existing literature on MIs and simulation, which has found it difficult to demonstrate the effectiveness of such exercises. Reference Verheul, Dückers and Visser10,Reference Tochkin, Tan and Nolan12 In this review, every study evaluated its simulation differently. In the future, to accurately evaluate the effectiveness of these activities, clinical indicators must be developed. The indicators proposed in the 2010 Swedish study are 1 possibility, but they must be externally validated.
Review Strengths and Limitations
This is the first known scoping review on MI simulations in hospital-based health care that considers a whole hospital or regional response to MIs. It provides preliminary insights into the areas of benefit and possible improvements that could be made to MI simulation. To ensure rigor in our process, this scoping review followed the JBI manual, carried out pilot searches to refine search terms, and predefined inclusion and exclusion criteria before screening.
However, the generalizability of these scoping review findings to different international health-care systems is a limitation of concern. Only 1 study identified in this review was performed in Australia. Four studies were performed in Sweden, and 1 in the United Kingdom. Arguably, these countries have comparable health-care systems. 50 However, this review also included 3 American studies; the United States has a vastly different health-care system, which limits the generalizability of the American study findings. 50 Thus, conclusions from these articles must be interpreted with caution when considered within the context of different health-care systems. This concern is reinforced by the systems thinking framework, which emphasizes that outcomes depend on the specific components and interconnections of each health-care system.
There were other limitations to this scoping review. The database search was performed by a single author, which may introduce bias regarding the “relevant” articles included. Additionally, the author was unable to include or analyze 3 articles published in languages other than English. Reference Kippnich, Kippnich and Markus32–Reference Wolf, Partenheimer and Voigt34 Another limitation that should be acknowledged is the small number of included articles; however, this may reflect the current literature deficit in this field.
To support the value of simulation in MI preparation and management, further research must be performed. Specifically, clinical indicators of MI management should be validated, which would allow more scientific and objective evaluation of MI simulation in the future.
Conclusions
This scoping review of the international literature aimed to determine if whole hospital-based simulation improves hospital response capability around MIs. Definitive conclusions were unable to be made, due to the low number of relevant articles identified, the lack of data, and the general paucity of strong scientific evidence. In general, all articles reached positive conclusions with respect to the use of MI simulations. Several benefits were identified, and areas for future improvement were highlighted. Overall, however, there was a lack of validated evaluation and little evidence to definitively conclude that simulations improved preparation for, or management of, real-world MIs. Further research is required to optimize future responses to MI events.
Author information
Dr Sacha Wynter, Emergency Department Registrar, Australian College of Emergency Medicine Trainee. Qualifications: Doctor of Medicine, Bachelor of Science
Dr Rosie Nash, School of Medicine, College of Health and Medicine, University of Tasmania Senior Lecturer, Public Health, Tasmanian School of Medicine. Qualifications: Bachelor of Pharmacology (Hons), Master of Professional Studies, PhD, Graduate Certificate in Research
ORCID: 0000-0003-3695-0887
Ms Nicola Gadd, Lecturer, Public Health, Tasmanian School of Medicine. Qualifications: Master of Nutrition and Dietetic Practice
ORCID: 0000-0002-3014-2929
Acknowledgments
None.
Competing interests
The authors declare there are no conflicts of interest.
Appendix 1 Search Strings
PubMed search:
((simulation training[MeSH Terms]) OR (simulation)) AND ((disaster planning[MeSH Terms]) OR (disaster medicine[MeSH Terms])) AND ((major incident) OR (mass casualty incident[MeSH Terms]))
CINAHL search:
((major incident) or (mass casualty incident) or (mass casualty event) or (major critical incident) or (disaster)) AND ((disaster medicine) OR (disaster preparedness) OR (disaster planning)) AND ((simulation) OR (simulation learning))
ERIC search
((major incident) or (mass casualty incident) or (mass casualty event) or (major critical incident) or (disaster)) AND ((disaster medicine) OR (disaster preparedness) OR (disaster planning)) AND ((simulation) OR (simulation learning))
Nil results
((major incident) or (mass casualty incident) or (mass casualty event)) AND ((simulation) OR (simulation learning))
Nil results
((disaster medicine) OR (disaster preparedness) OR (disaster planning)) AND ((simulation) OR (simulation learning))
Nil results
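The search strings above share a common shape: terms within a concept group are OR'd, and the concept groups are AND'd together. A minimal sketch of how such a string can be assembled programmatically; the helper name `boolean_and_of_ors` is illustrative and not part of the review's method:

```python
def boolean_and_of_ors(groups):
    """Parenthesize each term, OR terms within a group, AND the groups,
    mirroring the structure of the search strings above."""
    clauses = []
    for terms in groups:
        clauses.append("(" + " OR ".join(f"({t})" for t in terms) + ")")
    return " AND ".join(clauses)

# Concept groups copied from the ERIC search above.
groups = [
    ["major incident", "mass casualty incident", "mass casualty event",
     "major critical incident", "disaster"],
    ["disaster medicine", "disaster preparedness", "disaster planning"],
    ["simulation", "simulation learning"],
]
print(boolean_and_of_ors(groups))
```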
Appendix 2 Excluded Articles after Quality Assessment
Paper 1: Burke RV, Kim TY, Bachman SL, et al. Using mixed methods to assess pediatric disaster preparedness in the hospital setting. Prehosp Disaster Med. 2014;29(6):569-575. Reference Burke, Kim and Bachman36
This article was excluded due to a very significant risk of selection bias and low strength of evidence. In this study, a whole hospital simulation was conducted at 3 Los Angeles hospitals, with staff from all 3 hospitals participating. Self-reported perception of knowledge improvement was assessed only by a postcourse questionnaire, which only 20 participants completed. The authors did not disclose how many individuals participated in the simulation; however, given that the simulation occurred across 3 hospitals and involved each whole site, it is likely to have been a significant number. Given the weak study design and undisclosed survey completion rates, this study's quality was found to be very low, and it was thus excluded from this review.
Paper 2: Verheul ML, Dückers M, Visser BB, et al. Disaster exercises to prepare hospitals for mass-casualty incidents: does it contribute to preparedness or is it ritualism? Prehosp Disaster Med. 2018;33(4):387-393. Reference Verheul, Dückers and Visser10
This paper was excluded because the primary outcome recorded is not valid, and there were significant sources of bias. Reference Verheul, Dückers and Visser10 The authors retrospectively evaluated 32 MI simulation reports from Dutch hospitals, with each hospital supplying 2 reports (with a mean time of 26.1 mo between reports). The authors identified the number of items of improvement suggested in the initial report and compared this with the number suggested in the later report. The data had several limitations: they were collected retrospectively from heterogeneous evaluation formats, and they were limited by the initial evaluators, among whom the original authors themselves identified no clear selection criteria or training. Most significantly, it is doubtful that the primary outcome of interest, the number of areas of improvement identified, accurately reflects improvement in MI management. There was no actual evaluation of improvement in the areas identified, just the number identified. Given that the data were collected by evaluators with no standardization, there are numerous possible explanations for this difference, for example, improved engagement with the simulation, self-reflection from previous simulations, and differences between evaluators. Measuring the number of areas of improvement identified, with no actual evaluation of these areas, is not a valid outcome measure. The study was thus excluded as it lacks internal validity.
Appendix 3 Unique Points Identified
The 2020 English pediatric study focused on a unique aspect of MI preparation: improving pediatric discharges. The authors developed a discharge criterion that could be applied to hospital inpatients at the start of an MI to identify appropriate early discharges, thus increasing the hospital's surge capacity. Reference Bird, Braunold and Dryburgh-Jones2 This is a unique tool with clinical implications, which was appropriately evaluated by means of simulation in a Plan-Do-Study-Act evaluation model. Not only does this article provide evidence to support this technique being implemented at other sites, but it also provides an excellent example of how to implement and evaluate new clinical tools from a systems perspective in major incidents.
The 2020 Swedish study also had a unique perspective, demonstrating by means of tabletop simulations that there was a correlation between proactive decision-making skills and staff procedural skills. Reference Murphy, Kurland and Rådestad44 While this study had a narrow focus, it did provide a clinically relevant outcome. This study provides evidence to support clinical, procedural staff being more highly involved in the command structure of MIs (where proactive decisions are required).