
Enhancing Reporting of After Action Reviews of Public Health Emergencies to Strengthen Preparedness: A Literature Review and Methodology Appraisal

Published online by Cambridge University Press: 17 September 2018

Robert Davies
Affiliation:
Public Health Directorate, St Helens and Knowsley Teaching Hospitals NHS Trust, West Midlands, England
Elly Vaughan
Affiliation:
Bazian Ltd, An Economist Intelligence Unit Business, London, England
Graham Fraser
Affiliation:
Country Preparedness Section, European Centre for Disease Prevention and Control, Stockholm, Sweden
Robert Cook
Affiliation:
Bazian Ltd, An Economist Intelligence Unit Business, London, England
Massimo Ciotti
Affiliation:
Country Preparedness Section, European Centre for Disease Prevention and Control, Stockholm, Sweden
Jonathan E. Suk*
Affiliation:
Country Preparedness Section, European Centre for Disease Prevention and Control, Stockholm, Sweden
*
Correspondence and reprint requests to Dr Jonathan E. Suk, European Centre for Disease Prevention and Control (ECDC), Gustav III:s Boulevard 40, 16973 Solna, Sweden (e-mail: [email protected]).

Abstract

Objective

This literature review aimed to identify the range of methods used in after action reviews (AARs) of public health emergencies and to develop appraisal tools to compare methodological reporting and validity standards.

Methods

A review of biomedical and gray literature identified key approaches from AAR methodological research, real-world AARs, and AAR reporting templates. We developed a 50-item tool to systematically document AAR methodological reporting and a linked 11-item summary tool to document validity. Both tools were used sequentially to appraise the literature included in this study.

Results

This review included 24 highly diverse papers, reflecting the lack of a standardized approach. We observed significant divergence between the standards described in AAR and qualitative research literature, and real-world AAR practice. The lack of reporting of basic methods to ensure validity increases doubt about the methodological basis of an individual AAR and the validity of its conclusions.

Conclusions

The main limitations in current AAR methodology and reporting standards may be addressed through our 11 validity-enhancing recommendations. A minimum reporting standard for AARs could help ensure that findings are valid and clear for others to learn from. A registry of AARs, based on a common reporting structure, may further facilitate shared learning. (Disaster Med Public Health Preparedness. 2019;13:618-625)

Type
Systematic Review
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © 2018 Society for Disaster Medicine and Public Health, Inc.

Public health emergencies, such as infectious disease outbreaks, floods, and terrorist attacks, affect societies severely but are relatively rare for any individual country. This very rarity provides an impetus to learn systematically from emergencies when they do occur, so as to strengthen public health emergency preparedness and response planning.1

One such learning approach is to conduct an after action review (AAR) and produce a "lessons learned" document. These documents are completed after a public health emergency has occurred and draw on quantitative and qualitative methods to identify strengths and weaknesses in the public health emergency preparedness system. By addressing the weaknesses identified, they aim to improve preparedness, response, and recovery capacities and capabilities, ultimately lessening the impact of future incidents.2,3

Typically, documentation review and other quantitative fact-finding methods help establish a skeleton timeline of events, whereas different forms of qualitative investigation, such as personal testimony, provide richer insights into how and why events unfolded. Combined, these approaches aim to establish the root causes of the event and to identify what lessons can be learned for the future.2-9

Despite the crucial role of AARs in linking the past with the present and future, there is no widely used or standardized approach to conducting AARs of public health emergencies. In particular, there is often no indication of whether the insights gained are valid or based on robust methodologies.1,9

This literature review aimed to identify the range of methods used to produce AARs to improve emergency preparedness planning and to develop appraisal tools to compare their methodological reporting and validity standards, with a focus on qualitative methods.

METHODS

Literature Search

We searched biomedical databases (Medline, Embase, Scopus) and gray literature sources (Google Advanced, Google Scholar) for AARs that described an enacted response to an emergency (theoretical or “table-top” exercises were excluded), were within the geographic scope of the literature review (the European Union, Australia, Canada, New Zealand, and the United States), and were published in English from January 2000 to August 2015.

Search strategies were structured around 2 major concepts: AARs and emergency preparedness. Searches combined free-text and thesaurus terms (where available), including synonyms such as "post-event analysis" and "critical incident review" and techniques used within AARs such as "facilitated look back" and "root-cause analysis" (Supplemental Information [SI] 1). Additional search terms and synonyms were identified by scanning the abstracts of articles found through a scoping search. Further AARs were identified by searching the EndNote library of a previous review undertaken for the European Centre for Disease Prevention and Control (ECDC), looking for evaluations of emergency response.10,11
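
To illustrate how such a strategy is assembled (the searches actually run, including thesaurus terms, are reported in SI-1), the sketch below combines the 2 concepts into a free-text boolean query. This is illustrative only: the term lists mix synonyms quoted above with hypothetical additions and are far from the full search.

```python
# Illustrative sketch only: combining the 2 search concepts (AARs and
# emergency preparedness) into a free-text boolean query. The strategies
# actually used, including thesaurus terms, are reported in SI-1.

aar_terms = [
    "after action review",
    "post-event analysis",
    "critical incident review",
    "facilitated look back",
    "root-cause analysis",
]
# Hypothetical preparedness-concept terms, for illustration.
preparedness_terms = [
    "public health emergency",
    "emergency preparedness",
    "emergency response",
]

def or_block(terms):
    """Join quoted phrases with OR and wrap them in parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# The 2 concept blocks are then combined with AND.
query = or_block(aar_terms) + " AND " + or_block(preparedness_terms)
print(query)
```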

Records were sifted for relevance, first on title and abstract and then on full text (Figure 1, PRISMA diagram). Studies excluded at the full-text stage are listed in SI-2.

Figure 1 PRISMA diagram

Development of Appraisal Tools

We developed 2 appraisal tools to systematically document the methods used in AARs, to compare methodological reporting and validity between diverse AARs, and to act as a benchmark of theoretical best practice.

We adapted the approach of Woloshynowych et al,12 which related to the analysis of critical incidents in health care, to an emergency public health context by triangulating it with 9 contemporary AAR templates.5,13-20 The templates were identified through targeted scoping searches in Google, using synonyms for AARs and templates. These templates were multi-sectoral, coming from after action reports, a significant event analysis, and peer assessments in the fields of US national defense,14 US state government,13 UK medicolegal practice,17 Canadian health care insurance,20 international emergency public health,5,16 a UK hospital,15 and patient safety agencies (see SI-2).18,19 Further modifications were made in consultation with an expert advisor to increase the tool's relevance to emergency public health. This resulted in a 50-item appraisal tool (SI-3).

Adapting the approach of Piltch-Loeb et al,5 we developed an additional 11-point summary tool of factors that boosted methodological rigor in case study and qualitative data collection and analysis.

The original Piltch-Loeb 10-point tool remained intact, with minor revisions to definitions to better reflect the context of AARs in emergency public health. We added an 11th factor to capture whether the AAR had ultimately achieved its aim of uncovering the root causes of preparedness, response, and recovery activities, rather than more superficial causes. Definitions of the 11 points are included in SI-4.

Appraising the After Action Reviews

The 50-item appraisal tool (SI-3) and 11-item summary measure (SI-4) were applied sequentially to each AAR. First, the 50-item tool was used to systematically document the methods undertaken by each AAR, before being summarized in the 11-item measure, allowing for a simpler comparison of methodology and validity across diverse reviews.

AARs were reviewed against each item on the summary validity tool and assigned 1 of 3 codes: fully met (++), where the criterion was fully, and often comprehensively, met, leaving little doubt; partially met (+), where the criterion was met in some regards but significant doubt remained about comprehensiveness, or clear elements were missing, preventing a higher rating; and not met (-), where the criterion was not met or not reported.
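
As a minimal sketch (not the instrument itself), the 3-level coding and the overall validity score defined in the footnote to Table 2, where (++) = 2, (+) = 1, and (-) = 0, can be expressed as follows. The item names are hypothetical abbreviations of the 11 summary criteria defined in SI-4.

```python
# Minimal sketch of the 3-level validity coding and the overall score
# described in the footnote to Table 2: (++) = 2, (+) = 1, (-) = 0.
# Item names are illustrative abbreviations, not the SI-4 wording.

RATING_POINTS = {"++": 2, "+": 1, "-": 0}

def overall_validity_score(ratings):
    """Sum the points awarded across the 11 summary validity items."""
    return sum(RATING_POINTS[code] for code in ratings.values())

example_aar = {
    "multiple_data_sources": "++",   # fully met
    "diverse_sampling": "+",         # partially met
    "respondent_validation": "-",    # not met or not reported
}
print(overall_validity_score(example_aar))  # -> 3
```

On this scale, an AAR fully meeting all 11 criteria would score 22.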

A sample of 3 AARs was independently coded by a second reviewer to test the reliability of the coding instrument and to clarify initial rating definitions. The second rater was blind to the first rater’s scores and rationales. Given the size of the sample, inter-coder agreement was not calculated. Differences between the 2 raters were discussed and changes agreed by consensus. This led to revisions in the wording of some criteria and scoring guidance to improve clarity and therefore scoring consistency. Definitions of the criteria and additional notes used to guide rating decisions are described in SI-3.
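
As a purely hypothetical illustration of this double-coding step (the study resolved differences by discussion, not software), item-level disagreements between 2 blinded raters could be surfaced for consensus review as follows; the item names are again illustrative.

```python
# Hypothetical sketch: surface item-level disagreements between 2 blinded
# raters so they can be resolved by consensus discussion.

def disagreements(rater_a, rater_b):
    """Return (item, rating_a, rating_b) for every item rated differently."""
    return [
        (item, rater_a[item], rater_b[item])
        for item in rater_a
        if rater_a[item] != rater_b[item]
    ]

a = {"multiple_data_sources": "++", "respondent_validation": "-"}
b = {"multiple_data_sources": "+",  "respondent_validation": "-"}
print(disagreements(a, b))  # -> [('multiple_data_sources', '++', '+')]
```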

RESULTS

Overview

Our search identified 24 published AAR documents, relating to 22 distinct AARs (Table 1).

Table 1 Summary of 22 Included AARs

* A risk evaluation method that can be used to analyze and demonstrate causal relationships in high-risk scenarios.

The reviews covered national and international responses to the 2009 A(H1N1) influenza pandemic (n = 8),21-28 terrorist bombing incidents (n = 5),29-33 industrial explosions (n = 6),34-39 hurricanes (n = 2),40,41 chemical contamination of drinking water (n = 1),42 a heat wave (n = 1),43 and large-scale flooding (n = 1)44 (see Table 1).

Appraisal of After Action Reviews

There was great diversity in the structure, scope, and level of methodological reporting in the 24 reviews identified, potentially reflecting the lack of a standardized approach (Table 2).21-44 The majority drew heavily on qualitative methods, but the use of established techniques to ensure rigor was routinely missing from the published reports.

Table 2 Summary Validity Measures Reporting for 22 AARs (Including 2 Annexes Appraised Alongside the AAR)21-44

* Overall validity score based on the following scoring: (++) = 2; (+) = 1; (-) = 0.

Validity-boosting measures most frequently reported in the 24 reviews included spending adequate time observing the setting, people, and incident documentation; sampling a diverse range of views; using multiple sources of data collection; and utilizing multiple perspectives during the analysis.21-44 However, these techniques were generally reported only briefly, and few reviews fully met all 4 of these basic validity dimensions.

The criteria most commonly unmet in these reports were acknowledging a theoretical basis for the review methodology; describing how the reviewers handled discordant evidence; having an external peer-review process; and ensuring that respondents had an opportunity to validate that their views were accurately reflected in the final analysis and report (see Table 2).

The majority of AARs showing depth and insight (9 fully met this validity measure) also clearly reported using multiple data sources (7 of 9) and sustained engagement (5 of 9). Other AARs demonstrated depth and insight without reporting clear methods (see Table 2).29,34,35,44

Suggestions

Based on the systematic assessment of methods and validity measures in 24 AARs, we suggest 11 measures to improve the reporting and validity of reviews more widely (Table 3).

Table 3 Eleven Validity-Enhancing Considerations for Improving Review and Reporting of Public Health Emergency Events

PHEP = public health emergency preparedness.

* The development of an evidence-based minimum reporting standard for after action reviews, similar to the Consolidated Standards of Reporting Trials (CONSORT) statement for randomized controlled trials, may facilitate this process and comparisons between AARs. See http://www.consort-statement.org/.

DISCUSSION

To our knowledge, this is the first review to systematically document methods used in public health emergency preparedness AARs across a range of hazards and to formulate suggestions to improve future practice based on principles of qualitative research best practice.

The strengths of this review include our inclusive definition of an AAR, our inclusion of after action reports and reporting templates from outside health care, and the development of tools rooted in after action methodological research. These tools were applied to a variety of real-world AARs in the field of emergency preparedness spanning multiple hazard types.

The most common data collection methods used by the 24 AARs were document review (typically comparing preparedness plans and protocols against the response as executed), focus groups, formal public consultations, in-depth interviews, public discussion forums, questionnaires, site visits, and workshops.

Most reviews (17 of 24) did not report a theoretical framework to guide the investigation; of those that did, all reported a comparative or case study methodology. This represents a small fraction of the diverse range of approaches available to after action investigators, including the critical incident technique4,8; critical incident analysis7,45; root-cause analysis46-48; facilitated look-backs49; the peer assessment approach6; realist evaluation5,9; bow-tie analysis39; and serious case reviews.50

Underlying methodologies were frequently unreported, leaving report validity ambiguous. Although a lack of reporting of basic methods to safeguard validity does not necessarily imply that they were not considered or followed, it does significantly increase doubt about the methodological basis of the review and the validity of its conclusions.

Limitations

Our review searched for reports from a diverse range of after actions, but the analyzed sample was small (n = 24), subject to reporting and selection bias, and may not represent the full spectrum of incident reports available. For example, we excluded 16 studies whose methods were insufficiently reported for analysis (see SI-2: Excluded Studies) and all reviews not published in English.

Three of the 24 included reviews were used to test and develop early versions of both appraisal tools before their final application to the remaining 21 reports, further reducing the number of independent reviews appraised.

Most AAR reports were not clear about how reviewers derived generalizable insights from their data analysis or how discordant information was handled.22,28,29 As such, it was not clear to what extent certain views or data had been explored or discounted, for example, where they did not fit the emerging researcher consensus. This risked introducing perception bias into the analysis and the conclusions drawn.

CONCLUSIONS

We suggest that the lack of methodological reporting provides a strong case for the development of an evidence-based minimum reporting standard for AARs, akin to the CONSORT statement for randomized controlled trials. Such a standard could benefit after action reports in 2 ways: first, by ensuring that a wider range of robust methods is considered before and during the review, and, second, by ensuring that methods are more clearly reported in the final report itself, allowing an external assessment of validity. The 11-point summary tool presented here allows a simple validity comparison to be made across a range of diverse AARs and could be further developed and refined in the future.

It is noteworthy that critical incident registries have been adopted in the transport, health care, and workplace safety industries, but not in emergency preparedness.5 We thus advocate an AAR registry in Europe (similar in nature to the US government's Lessons Learned Information Sharing program) to facilitate cross-border learning that will further strengthen emergency preparedness.51 The 11-point summary validity tool presented here could contribute to such an initiative by promoting AAR designs that are as robust and credible as possible.
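
As a sketch of what a common reporting structure for such a registry might look like (a suggestion under our own assumptions, not an agreed standard, which would need to emerge from a CONSORT-like consensus process), the record below captures some of the methodological fields emphasized by the 11-point tool. All field names are hypothetical.

```python
# Hypothetical sketch of a registry record based on a common AAR
# reporting structure. Field names are illustrative, not a standard.

from dataclasses import dataclass, field
from typing import List

@dataclass
class AARRecord:
    event_type: str                        # e.g., "influenza pandemic"
    country: str
    methodology: str                       # e.g., "case study"
    data_sources: List[str] = field(default_factory=list)
    discordant_evidence_handling: str = "not reported"
    respondent_validation: bool = False
    external_peer_review: bool = False

record = AARRecord(
    event_type="large-scale flooding",
    country="United Kingdom",
    methodology="case study",
    data_sources=["document review", "public consultation"],
)
print(record)
```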

Supplementary material

To view supplementary material for this article, please visit https://doi.org/10.1017/dmp.2018.82

Acknowledgment and Author Contributions

This publication is based upon a report produced by Bazian Ltd and commissioned by the ECDC under Direct Service Contract ECD.5860. Robert Davies provided input into project design, performed data extraction, performed data synthesis, and coauthored this manuscript; Elly Vaughan managed the project at Bazian, provided input into project design, designed and ran literature searches, performed data extraction, contributed to the synthesis, and coauthored this manuscript; Dr Robert Cook provided input into project design, reviewed draft reports, and provided project oversight; Dr Graham Fraser, Dr Massimo Ciotti, and Dr Jonathan Suk initiated the study and commissioned the work, provided technical guidance throughout the study, and coauthored this manuscript; Dr Katie Geary provided expert advice throughout the project design and execution, including refining the appraisal tools for a public health emergency context.

REFERENCES

1. ECDC. Meeting report: ad hoc advisory meeting on preparedness, Stockholm, 15-16 May 2014. Stockholm: European Centre for Disease Prevention and Control; 2014. http://ecdc.europa.eu/en/publications/Publications/preparedness-meeting-2014.pdf. Accessed May 25, 2018.
2. Stoto MA. Measuring and assessing public health emergency preparedness. J Public Health Manag Pract. 2013;19:S16-S21.
3. Geary K (International SOS). Personal communication to Rob Davies (Bazian Ltd). 2015.
4. Flanagan JC. The critical incident technique. Psychol Bull. 1954;51(4):327-358.
5. Piltch-Loeb RN, Nelson C, Kraemer J, et al. Peer assessment of public health emergency response toolkit. Cambridge (MA): Harvard School of Public Health; 2014. https://blogs.commons.georgetown.edu/lamps/files/2013/10/Peer-Assessment-toolkit-Updated-1-17-14.pdf. Accessed May 25, 2018.
6. Piltch-Loeb RN, Nelson CD, Kraemer JD, et al. A peer assessment approach for learning from public health emergencies. Public Health Rep. 2014;129(Suppl 4):28-34.
7. Schwester RW. Handbook of Critical Incident Analysis. New York (NY): Routledge (Taylor & Francis Group); 2014.
8. Serrat O. The critical incident technique. Washington, DC: Asian Development Bank; 2010.
9. Stoto MA. Getting from what to why: using qualitative methods in public health systems research. Orlando (FL): PHSR IG Methods Panel; 2012. http://www.academyhealth.org/files/phsr/Qual-Stoto.pdf. Accessed May 26, 2018.
10. ECDC. Best practices in ranking emerging infectious disease threats: a literature review. Stockholm: European Centre for Disease Prevention and Control; 2015. http://ecdc.europa.eu/en/publications/Publications/emerging-infectious-disease-threats-best-practices-ranking.pdf. Accessed May 26, 2018.
11. O'Brien EC, Taft R, Geary K, et al. Best practices in ranking communicable disease threats: a literature review, 2015. Euro Surveill. 2016;21(17):pii=30212.
12. Woloshynowych M, Rogers S, Taylor-Adams S, et al. The investigation and analysis of critical incidents and adverse events in healthcare. Health Technol Assess. 2005;9(19):1-143, iii.
13. CGOES. Standardized emergency management system: after action report. California: California Governor's Office of Emergency Services; 2013. http://www.caloes.ca.gov/PlanningPreparednessSite/Documents/Local%20Government%20After%20Action%20Report%20Template.doc. Accessed May 26, 2018.
14. HSEEP. After-action report/improvement plan. Washington (DC): Homeland Security Exercise and Evaluation Program; 2013. http://www.in.gov/dhs/files/AAR-IP_Template_Apr-13-2_Clean.docx. Accessed May 26, 2018.
15. Taylor-Adams S, Vincent C. Systems analysis of clinical incidents: the London protocol. Imperial College London; 2004. https://www1.imperial.ac.uk/resources/C85B6574-7E28-4BE6-BE61-E94C3F6243CE/londonprotocol_e.pdf. Accessed May 27, 2018.
16. ISOS. Significant event analysis reporting form. London: International SOS; 2015.
17. MDU. Medico-legal guide to serious event analysis. London: Medical Defence Union; 2014. http://www.themdu.com/~/media/files/mdu/publications/guides/significant%20event%20analysis/medico_-_legal_guide_to_significant_event_analysis.pdf. Accessed May 27, 2018.
18. NPSA. A quick guide to conducting a significant event audit. London: National Patient Safety Agency; 2008. http://www.nrls.npsa.nhs.uk/EasySiteWeb/getresource.axd?AssetID=61502&type=full&servicetype=Attachment. Accessed May 27, 2018.
19. NPSA. Significant event audit: guidance for primary care teams. London: National Patient Safety Agency; 2008. http://www.nrls.npsa.nhs.uk/EasySiteWeb/getresource.axd?AssetID=61501. Accessed May 27, 2018.
20. HIROC. Critical incidents and multi-patient events: risk resource guide. Toronto (ON): Healthcare Insurance Reciprocal of Canada; 2015. https://www.hiroc.com/getmedia/c110b394-c5a6-4e93-80b3-73da14436dbd/HIROC-Management-of-Critical-Incidents-April-2015.pdf.aspx?ext=.pdf. Accessed May 27, 2018.
21. Masotti P, Green ME, Birtwhistle R, et al. PH1N1: a comparative analysis of public health responses in Ontario to the influenza outbreak, public health and primary care: lessons learned and policy suggestions. BMC Public Health. 2013;13(1):687.
22. DSB. New influenza A virus (H1N1): a summary of a study on the national response in Norway. Tonsberg; 2011. http://www.dsb.no/Global/Publikasjoner/2011/Rapport/summary_new%20influenza_virus_H1N1.pdf. Accessed May 25, 2018.
23. Socialstyrelsen. A(H1N1) 2009: an evaluation of Sweden's preparations for and management of the pandemic. Stockholm; 2011. http://www.socialstyrelsen.se/Lists/Artikelkatalog/Attachments/18398/2011-8-4.pdf. Accessed May 25, 2018.
24. Hine D. The 2009 influenza pandemic: an independent review of the UK response to the 2009 influenza pandemic. London; 2010. https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/61252/the2009influenzapandemic-review.pdf. Accessed May 26, 2018.
25. WHO. Global survey on national vaccine deployment and vaccination plans for pandemic A(H1N1) 2009 vaccine - 2010: report of findings. Geneva; 2013. http://www.who.int/influenza_vaccines_plan/resources/2010_H1N1_NVDP_WHO_Survey.pdf. Accessed May 26, 2018.
26. EC. Assessment report on EU-wide pandemic vaccine strategies. Brussels; 2010. http://ec.europa.eu/health/communicable_diseases/docs/assessment_vaccine_en.pdf. Accessed May 26, 2018.
27. HPA. Assessment report on the EU-wide response to pandemic (H1N1) 2009. London; 2010. http://ec.europa.eu/health/communicable_diseases/docs/assessment_response_en.pdf. Accessed May 26, 2018.
28. WHO. Recommendations for good practice in pandemic preparedness: identified through evaluation of the response to pandemic (H1N1) 2009. Copenhagen; 2010. http://www.euro.who.int/__data/assets/pdf_file/0017/128060/e94534.pdf. Accessed May 28, 2018.
29. MEMA. After action report for the response to the 2013 Boston marathon bombings. Framingham (MA); 2014. http://www.mass.gov/eopss/docs/mema/after-action-report-for-the-response-to-the-2013-boston-marathon-bombings.pdf. Accessed May 28, 2018.
30. Goralnick E, Halpern P, Loo S, et al. Leadership during the Boston marathon bombings: a qualitative after-action review. Disaster Med Public Health Prep. 2015;9(10):6.
31. Little M, Cooper J, Gope M, et al. "Lessons learned": a comparative case study analysis of an emergency department response to two burns disasters. Emerg Med Australas. 2012;24(4):420-429.
32. Aylwin CJ, König TC, Brennan NW, et al. Reduction in critical mortality in urban mass casualty incidents: analysis of triage, surge, and resource use after the London bombings on July 7, 2005. Lancet. 2006;368(9554):2219-2225.
33. OEM. After action report: Alfred P. Murrah Federal Building bombing 19 April 1995 in Oklahoma City, Oklahoma. Oklahoma (OK); 1995. https://www.ok.gov/OEM/documents/Bombing%20After%20Action%20Report.pdf. Accessed May 28, 2018.
34. HSE. The Buncefield incident 11 December 2005: the final report of the major incident investigation board, volume 2. Liverpool; 2008. http://www.hse.gov.uk/comah/buncefield/miib-final-volume2a.pdf.
35. HSE. The Buncefield incident 11 December 2005: the final report of the major incident investigation board, volume 1. Liverpool; 2008. http://www.hse.gov.uk/comah/buncefield/miib-final-volume1.pdf.
38. Tapster C. Buncefield: multi-agency debrief report and recommendations. Hertford; 2007. http://www.hertsdirect.org/infobase/docs/pdfstore/bunrepdebrief.pdf.
39. Paltrinieri N, Dechy N, Salzano E, et al. Lessons learned from Toulouse and Buncefield disasters: from risk analysis failures to the identification of atypical scenarios through a better knowledge management. Risk Anal. 2012;32(8):1404-1419.
40. Knox CC. Analyzing after-action reports from Hurricanes Andrew and Katrina: repeated, modified, and newly created recommendations. J Emerg Manage. 2013;11(2):160-168.
41. Brevard SB, Weintraub SL, Aiken JB, et al. Analysis of disaster response plans and the aftermath of Hurricane Katrina: lessons learned from a level I trauma center. J Trauma Injury Infect Crit Care. 2008;65(5):1126-1132.
42. Terenzini C. The report of the Blue Ribbon Committee on the water emergency of April 25-27, 2007 in Spencer, Massachusetts. Spencer (MA); 2007. http://www.spencerfire.org/WaterEmergency.pdf.
43. Adrot A. Crisis response, organizational improvisation and the dispassionate communicative genre during the 2003 French heat wave. Paris; 2011. https://basepub.dauphine.fr/bitstream/handle/123456789/7969/Adrot_aim2011.PDF?sequence=1.
45. SIESWE. Evaluation of an innovative method of assessment: critical incident analysis. Glasgow: Scottish Institute for Excellence in Social Work Education; 2005. https://pure.strath.ac.uk/portal/files/11177868/sieswe_nam_evaluation_critical_incident_analysis_2005_02.pdf.
46. Berry K, Krizek B. Root cause analysis in response to a "near miss." J Healthc Qual. 2000;22(2):16-18.
47. Iedema R, Jorm C, Braithwaite J. Managing the scope and impact of root cause analysis recommendations. J Health Organ Manag. 2008;22(6):569-585.
48. Singleton CM, Debastiani S, Rose D, et al. An analysis of root cause identification and continuous quality improvement in public health H1N1 after-action reports. J Public Health Manag Pract. 2014;20(2):197-204.
49. Aledort JE, Lurie N, Ricci KA, et al. Facilitated look-backs: a new quality improvement tool for management of routine annual and pandemic influenza. RAND Corporation; 2006. http://www.rand.org/pubs/technical_reports/TR320.html.
50. OFSTED. Learning lessons from serious case reviews 2009-2010: Ofsted's evaluation of serious case reviews from 1 April 2009 to 31 March 2010. The Office for Standards in Education, Children's Services and Skills; 2010. https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/381110/Learning_20lessons_20from_20serious_20case_20reviews_202009-2010.pdf.
51. Savoia E, Agboola F, Biddinger PD. Use of after action reports (AARs) to promote organizational and systems learning in emergency preparedness. Int J Environ Res Public Health. 2012;9:14.