
Quality improvement primer part 3: Evaluating and sustaining a quality improvement project in the emergency department

Published online by Cambridge University Press:  21 June 2018

Lucas B. Chartier*
Affiliation:
Department of Medicine, Division of Emergency Medicine, University of Toronto, Toronto, ON Emergency Department, University Health Network, Toronto, ON
Samuel Vaillancourt
Affiliation:
Department of Medicine, Division of Emergency Medicine, University of Toronto, Toronto, ON St. Michael’s Hospital, Emergency Department, Toronto, ON
Amy H. Y. Cheng
Affiliation:
Department of Medicine, Division of Emergency Medicine, University of Toronto, Toronto, ON St. Michael’s Hospital, Emergency Department, Toronto, ON
Antonia S. Stang
Affiliation:
Departments of Pediatrics and Community Health Sciences, Division of Emergency Medicine, University of Calgary, Calgary, AB Section of Emergency Medicine, Alberta Children’s Hospital, Calgary, AB
*
Correspondence to: Dr. Lucas B. Chartier, 200 Elizabeth St., RFE-GS-480, Toronto, ON M5G 2C4, Canada; Email: [email protected]

Abstract

Quality improvement (QI) and patient safety are two areas that have grown into important operational and academic fields in recent years in health care, including in emergency medicine (EM). This is the third and final article in a series designed as a QI primer for EM clinicians. In the first two articles we used a fictional case study of a team trying to decrease the time to antibiotic therapy for patients with sepsis who were admitted through their emergency department. We introduced concepts of strategic planning, including stakeholder engagement and root cause analysis tools, and presented the Model for Improvement and Plan-Do-Study-Act (PDSA) cycles as the backbone of the execution of a QI project. This article will focus on the measurement and evaluation of QI projects, including run charts, as well as methods that can be used to ensure the sustainability of change management projects.

Résumé

L’amélioration de la qualité (AQ) et la sécurité des patients sont deux domaines qui ont fini par devenir des champs opérationnel et universitaire importants au cours des dernières années en soins de santé, y compris en médecine d’urgence. Il s’agit du dernier article d’une série de trois, conçue comme une introduction à l’AQ à l’intention des cliniciens qui travaillent au service des urgences. Dans les deux premiers, il a été question d’une étude de cas fictive dans laquelle une équipe tentait de réduire le temps écoulé avant l’administration d’antibiotiques chez des patients atteints d’une sepsie et admis par le service des urgences. Ont aussi été présentés des concepts de planification stratégique, y compris des outils de participation active d’intervenants et d’analyse de causes profondes, ainsi que le Model for Improvement et les cycles « Planifier – Exécuter – Étudier – Agir » considérés comme la base de réalisation des projets d’AQ. Le troisième portera sur l’évaluation des projets d’AQ et les mesures utilisées, dont des organigrammes d’exploitation, ainsi que sur des méthodes susceptibles d’assurer la pérennité des entreprises de gestion de changements.

Type
Original Research
Copyright
Copyright © Canadian Association of Emergency Physicians 2018 

CLINICIAN’S CAPSULE

What is known about the topic?

The first two articles of this quality improvement (QI) Primer Series for emergency medicine (EM) clinicians reviewed foundational steps to prepare for and execute a QI project.

What did this study ask?

This article focused on the measurement, evaluation, and sustainability of QI projects.

What did this study find?

Run charts visually represent the temporal relationship between interventions and the measures of interest. The sustainability of projects can be achieved through the use of specific frameworks, tools, and cultural change.

Why does this study matter to clinicians?

QI has grown into an important operational and academic field in EM in recent years, and a better understanding of its methodology can lead to greater improvements in patient care.

INTRODUCTION

Emergency departments (EDs) are a crucial point of access to care for millions of Canadians each year, but the fast-paced and complex nature of ED care can pose threats to patient safety.1,2 Improvements in the quality and safety of ED care have the potential to affect patient outcomes meaningfully. Over the past two decades, which included the publication of two important Institute of Medicine reports on quality improvement (QI) and patient safety (PS), the number of QI and PS projects has grown substantially in all medical disciplines.3-5

This is the third and final article in a series intended as a QI primer for emergency medicine (EM) clinicians; it builds on the example of a project that aims to decrease the time from triage to antibiotic therapy for patients admitted with sepsis. In the first article, we introduced the concept of strategic planning, including stakeholder analysis and engagement; the establishment of a core change team; and three tools for root cause analysis: Ishikawa (fishbone) diagrams, Pareto charts, and process mapping.6 In the second article, we presented the four steps of a Plan-Do-Study-Act (PDSA) cycle and the Model for Improvement (MFI), a rapid-cycle testing method popularized by the Institute for Healthcare Improvement (IHI) that includes the determination of the aim, measures, and change ideas for a project.7 This article introduces the tools used in measuring and evaluating QI projects, such as monitoring the impact of interventions during the PDSA cycles and evaluating the sustainability of outcomes and new practices.

Run charts

Specific tools have been developed in the QI field to evaluate the impact of interventions.8 Health care providers may be more familiar with analysis methods such as t-tests and chi-square tests, which are used at the end of a study to compare various populations or interventions. However, QI projects require more dynamic monitoring tools that can help inform the project in real time and detect signs of change early so interventions can be refined and retested.

Run charts are a visual way to represent data and demonstrate temporal relationships between various interventions and the measures of interest.9 They are easy to construct without statistical programs or mathematical complexity. They are usually used to identify signals in the data and demonstrate the change (or lack thereof) in a selected quality measure before, during, and after a QI project.9 A more rigorous and resource-intensive tool for measuring the impact of an intervention, beyond the scope of this article, is the statistical process control (SPC) chart.10 SPC charts are used to detect variability in a process, with a focus on non-random (or “special cause”) variation.11

The core change team of your sepsis project decides to build a run chart to monitor the previously selected process measure, time from triage to antibiotic therapy, over a six-month time frame (Figure 1). On the x-axis, you mark the time intervals at which you will collect your data. In the spirit of rapid-cycle testing, this interval should be as small as is feasible given local resources and the logistics of data collection (e.g., weekly). It is useful to collect and display a number of values for the period before the QI project started, to obtain an appreciation of baseline performance. On the y-axis, you mark the quality measure of interest: time to antibiotics. By convention, run charts always display a horizontal “centreline” at the level of the median, that is, the value on the y-axis at which one-half of the data points are above and one-half are below the line. You may also add another line to signify the target or aim of the project. Over the course of your project, your team fills the run chart with weekly measurements. You should also annotate the chart with the timing of the various interventions, to show which interventions were associated with which effects over time.

Figure 1 Run chart of your sepsis project. The x-axis represents the weeks before (negative numbers) and during (positive numbers) your QI project; the y-axis represents the time from triage to antibiotics (in hours). The annotations represent the times at which the various change interventions were introduced and then iteratively tested by your team. The continuous horizontal line (i.e., the centreline) represents the median of the entire data set (4.5 hours) and the dashed line represents the project’s target time (three hours). IT = Information technology.
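As a concrete illustration of the construction described above, the centreline is simply the median of all plotted values. The brief Python sketch below uses made-up weekly values, not the study's actual data; the variable names and the three-hour target are this example's own:

```python
from statistics import median

# Hypothetical weekly time-to-antibiotics values (hours); the negative week
# numbers represent the baseline period before the QI project started.
weeks = list(range(-4, 6))
times = [5.1, 4.8, 5.3, 4.9, 4.7, 4.6, 4.2, 4.0, 3.8, 3.5]

centreline = median(times)  # half the points lie above this line, half below
target = 3.0                # the project's aim, drawn as a second line

print(f"centreline = {centreline:.2f} h; target = {target} h")
# → centreline = 4.65 h; target = 3.0 h
```

In practice, the same values would be plotted with annotation arrows at the weeks when each intervention was introduced, as in Figure 1.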

To assist in identifying signals of success or failure in run charts, certain rules derived from statistical probability calculations are useful.12 These rules help prevent the natural tendency to overreact to a single, recent data point.13 For the rules to be applicable, collating at least 10 measurements is usually necessary. We present here four rules for the interpretation of run charts; interested readers can consult the article by Perla et al. to further understand the nuances of these rules.9

  • Shift rule:

    ◦ At least six consecutive data points fall above or below the median line (points on the median are skipped). Given that the mathematical likelihood of being on either side of the median is one in two in a random sample (by definition, 50% are above and 50% are below the line), the likelihood of having six consecutive points (an arbitrarily chosen but agreed-upon number) is 0.5^6 = 0.016, well below the statistical significance level of p < 0.05. Data points #16 (week 7) to #23 (week 14), shown as triangles in Figure 1, exemplify this rule. Data points #27 (week 18) to #35 (week 26) also represent a shift (unmarked).

  • Trend rule:

    ◦ At least five consecutive data points increase or decrease in value (numerically equivalent points are skipped; data points can cross the median line). Similar rules of probability can be used to derive the statistical significance of this rule.14 Data points #25 (week 16) to #31 (week 22), shown as squares in Figure 1, exemplify this rule. Data points #19 (week 10) to #25 (week 16) also represent a trend (unmarked).

  • Run rule:

    ◦ A “run” is a series of consecutive points that are all on the same side of the median; the number of runs in a run chart is determined by counting the number of times the data line fully crosses the median and adding one. In Figure 1, the data line crosses the median three times (indicated by arrows), so there are four runs. A non-random pattern is signalled by too few or too many runs relative to the total number of data points, according to established rules based on a 5% risk of failing the run test if the data were truly random.9,10,15 See Appendix 1 for a table of the expected number of runs for run charts with 10 to 60 total data points. For example, too few runs may mean that a successful intervention has increased or decreased the measure of interest to the point where it prevents the data line from regressing toward and across the median, as would occur in a random sample. In the case of your sepsis project, four runs are fewer than the 12 expected in a run chart with 35 total data points if the data were truly random; this signals a non-random change in the system.

  • Astronomical point rule:

    ◦ One or more data points are, subjectively and visually, quite different from the rest. This is not a statistical rule but rather a gestalt rule, and it should prompt questions as to whether the result is accurate, meaningful, or even worthy of consideration. Data point #31 (week 22), shown as a diamond in Figure 1, is an astronomical point, which could represent an issue with data quality.
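The shift, trend, and run rules above are mechanical enough to sketch in code. The Python functions below are an illustrative implementation only (the function names, thresholds, and data are this sketch's own, not the authors'); the astronomical point rule, being a visual gestalt judgment, is deliberately not encoded:

```python
from statistics import median

def shift(points, centre, n=6):
    """Shift rule: at least n consecutive points strictly above or strictly
    below the centreline (points exactly on the median are skipped)."""
    sides = [1 if p > centre else -1 for p in points if p != centre]
    best = streak = 0
    last = None
    for s in sides:
        streak = streak + 1 if s == last else 1
        last = s
        best = max(best, streak)
    return best >= n

def trend(points, n=5):
    """Trend rule: at least n consecutive points all increasing or all
    decreasing (consecutive equal values are skipped)."""
    vals = []
    for p in points:
        if not vals or p != vals[-1]:
            vals.append(p)
    up = down = best = 1
    for prev, cur in zip(vals, vals[1:]):
        if cur > prev:
            up, down = up + 1, 1
        else:
            down, up = down + 1, 1
        best = max(best, up, down)
    return best >= n

def count_runs(points, centre):
    """A run is a series of consecutive points on one side of the median;
    the number of runs equals the number of median crossings plus one."""
    sides = [1 if p > centre else -1 for p in points if p != centre]
    crossings = sum(1 for a, b in zip(sides, sides[1:]) if a != b)
    return crossings + 1

# Illustrative data only: six points above the median followed by six below,
# i.e., a clear downward shift and only two runs.
data = [5.2, 5.0, 5.1, 4.9, 5.3, 5.0, 3.9, 3.8, 3.6, 3.7, 3.5, 3.4]
centre = median(data)
```

Applied to `data`, `shift` fires and `count_runs` returns 2, far fewer than expected for 12 random points, which is exactly the signal a successful intervention would leave.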

YOUR SEPSIS PROJECT IN ACTION

As shown in Figure 1, your core change team tested multiple interventions to try to decrease the time to antibiotics for patients with sepsis. Your first PDSA cycle involved an educational intervention, given that you had identified a lack of provider knowledge as an issue through clinician surveys. Unfortunately, as is often the case with this type of intervention, your team’s educational intervention was not successful in driving down the outcome of interest, possibly because of the multiple competing priorities faced by your ED colleagues and systemic barriers preventing best practices.16,17 Your education sessions did, however, inform many colleagues about a quality issue that was not previously known to them. Your second cluster of PDSA cycles involved improving the flagging of stat orders, as described in the second article of this series.7 Your run chart indicates that this intervention was associated with a demonstrable improvement. Your third PDSA cycle involved the creation of a new policy facilitating communication between nurses and physicians about sicker patients. Although this policy was well received and likely improved the overall care of patients in your ED, it failed to target patients with sepsis specifically and led to a worsening in time to antibiotics. Finally, your fourth PDSA cycle involved the accelerated placement of patients with severe sepsis in stretchers for assessment. This was done in conjunction with the information technology (IT) department, which enabled a computerized function on the electronic patient tracking board. This last intervention, which ranks higher on the hierarchy of effectiveness, appeared to improve both the time to assessment and the time to antibiotics for patients with sepsis.18

Sustainability

Sustainability has been defined as “when new ways of working and improved outcomes become the norm.”19 In other words, the implemented changes that led to improved performance are now ingrained in the workflow and do not require ongoing support to continue. Although sustainability is discussed last in this QI series, it may be one of the most important considerations in a QI project. Many QI experts believe that as hard as improving care is, sustaining the improvements is even harder.20 Indeed, a Harvard Business Review article reported that up to 70% of organizational change is not sustained in the corporate world, and the National Health Service (NHS) in the United Kingdom found that one-third of QI projects were not sustained one year after completion.19,21 Reasons identified for these shortcomings include: the waning enthusiasm or turnover of front-line providers as newer, more exciting projects are rolled out; the competing personal or professional interests of managers; the shifting priorities of leaders who support the project with their time and resources; and the tendency of QI teams to declare victory too soon, shifting focus away from an improvement that may not be as stable and ingrained as was thought.22

One of the most important tasks of a QI project leader is to know when to transition from the more active project phase to the longer-term sustainability phase. There are a few factors your core change team should consider when determining the readiness of the project and system for sustainability:23

  1. Your evaluation (e.g., on a run chart) demonstrates an improved level of performance that has been maintained for a reasonable period of time (weeks or months, depending on the project and measurement intervals).

  2. The changes have been tested with various combinations of staff, as well as at different times and/or locations (if applicable).

  3. There is infrastructure in place (e.g., equipment, supplies, and personnel) to support the project in the long run. This does not mean that significant resources must be dedicated to the project, but rather that the intensity of changes must match the ability of the system to support and maintain them.

  4. There are mechanisms and people in place to continue monitoring system performance (if it is feasible and is felt to encourage compliance with new processes).

A few factors have been shown to increase the likelihood of sustainability of change projects. Your core change team should consider these while designing the various interventions and change ideas, as they are associated with front-line workers’ uptake and long-term commitment. The Highly Adoptable Improvement model suggests that the success of a health care improvement project depends on the balance between the providers’ perceived value of the initiative and the resulting change in workload.24 Additional workload, if not met with added capacity (e.g., resources or support), may lead to increased burden, workarounds, errors, and resistance.24 Although workers may be willing to alter their standard work processes in the short run if they perceive a benefit, such a change is unlikely to be sustained unless it makes their workload lighter or the value gained is otherwise substantial and visible.24 A few other questions to ask yourself and your team about the changes tested and implemented include whether the changes:25

  1. Offer a clear advantage compared with the previous work processes

  2. Are compatible with the system in place and providers’ values

  3. Are simple and easy to use

  4. Have a demonstrable and observable impact on the front-line workers

One useful model for assessing the probability that a project will be sustained is the NHS sustainability model, developed by QI experts and front-line workers.19 It offers practical advice that helps teams identify opportunities, in both the planning and testing stages, to increase the likelihood that changes will be sustained.19 Table 1 shows the three sections and 10 factors included in the model, as well as a selection of the model’s questions that may help you improve your interventions.

Table 1 The sustainability model

PDSA=Plan-Do-Study-Act.

Modified with permission from Maher et al.19

Culture

Batalden and Davidoff, two pioneers of QI science, acknowledged the importance of culture when they stated that measured performance improvement results from the application of generalizable scientific evidence to a particular context.26 In other words, the settings, habits, and traditions in which a project operates may be as important to its success and sustainability as the change ideas themselves, or even more so.27

There are many ways to understand the environment in which a project operates. One simple, yet effective, method is to break the system down into its micro, meso, and macro levels.28 There are various ways to conceptualize these levels, but for the purpose of a local QI project, the micro level is your small core change team and the clinical unit in which the changes are introduced. Ensuring that the team is multidisciplinary and that front-line workers are receptive to change are important considerations at the micro level. Planning for and advertising small wins early in the project is also a good way to generate enthusiasm and gain momentum.29 The meso level is the constellation of departments and people who interact with your QI project. For your sepsis project, this would likely involve the entire ED (including medical and nursing leadership), the laboratory, the microbiology and infectious disease departments, and the health IT department. Partnering with key players in each of these departments would likely help your project succeed. The macro level refers to the organization in which your project takes place, including the senior leadership of your hospital; it can also include the health system in which your hospital operates. Aligning your QI project with hospital-level or external forces may increase the likelihood of long-term success.30 For example, framing the aim of your project, decreasing the time to antibiotics, as a contributor to reduced morbidity and mortality could align with your organizational goal of being a high-reliability organization.

QI methods to sustain improvements

Once your project has reached a steady state, you will want to build safeguards to ensure its sustainability. There are many different methods available to sustain change in health care.31 One useful method is visual management. A practical application for your sepsis project would be to create a performance board, as shown in Figure 2, that displays an outcome of interest (e.g., time to antibiotics) over relevant time periods. Colours are often used to demonstrate the successes and shortcomings of the project conspicuously.

Figure 2 Performance board for your sepsis project. Green background = better than objective; yellow background = less than 10% worse than objective; red background = more than 10% worse than objective.
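The colour coding in the Figure 2 legend amounts to a simple comparison against the objective. As a hypothetical sketch (the function name, and the assumption that lower values are better, as with time to antibiotics, are this example's own; the treatment of a value exactly 10% worse is a judgment call the legend leaves open):

```python
def board_colour(value, objective):
    """Classify one performance-board cell, assuming lower values are better.
    Thresholds follow the Figure 2 legend."""
    if value <= objective:
        return "green"             # better than objective
    if value <= objective * 1.10:
        return "yellow"            # less than 10% worse than objective
    return "red"                   # more than 10% worse than objective

# With a three-hour time-to-antibiotics objective:
print(board_colour(2.8, 3.0))  # green
print(board_colour(3.2, 3.0))  # yellow
print(board_colour(3.5, 3.0))  # red
```

Encoding the thresholds once, rather than judging each cell by eye, keeps the board consistent from week to week regardless of who updates it.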

Another QI method that may help sustain gains is creating a standard work process: a simple written or visual description of best practices with respect to the relevant process of care. Although front-line workers sometimes frown upon such one-size-fits-all approaches, decreasing the variability in evidence-based care processes has been associated with improved outcomes.32,33 The form of the standard work can vary, including posters, training, audits, performance reviews, order sets, and IT constraints, but the specific tool must be tailored to the local environment.

A final QI method that can be used to improve sustainability is the improvement huddle. Huddles are regular, short meetings involving all members of a clinical unit that serve as reminders of ongoing projects.32 They can be used to review past, current, and expected levels of performance; discuss reasons for high and low performance; brainstorm change ideas for future PDSA cycles; and assign responsibilities for new ideas and projects.

For your sepsis project, your team elects to continue with the process of stat orders and to strengthen the partnership with the IT department, as these are felt to be the two most sustainable changes in your ED. Given that your ED has a strong culture of peer accountability, you also partner with the ED nurse manager to convene all staff for a five-minute team huddle every day, to review the department’s performance board and to notify the team of next steps (with regard to this sepsis project and others).

DISSEMINATION

QI projects are generally local endeavours aimed at improving the care of patients in a specific institution. As a result of this local focus, few project leads think of sharing the lessons they have learned. However, there is tremendous learning to be gained from reading about what has worked, and what has not, in similar settings. There is no reason to reinvent the wheel every time a team wants to tackle sepsis in their ED, as dozens of similar institutions worldwide have already implemented successful initiatives. We suggest that, at the outset of a project, teams identify prior work that could inform their own through a Google Scholar search or the Turning Research Into Practice (TRIP) medical database website. Teams should also explore scholarly options for disseminating their lessons learned, which may require obtaining research ethics board approval or exemption from their local institution at the outset of the project. The A pRoject Ethics Community Consensus Initiative (ARECCI) Ethics Screening Tool is useful for determining the types of ethical risks involved and the appropriate type of ethics review required for a QI project.34 Teams should strongly consider adhering to the Standards for Quality Improvement Reporting Excellence (SQUIRE) guidelines (http://squire-statement.org). There are also respected peer-reviewed journals that focus on the publication of QI projects and studies, such as BMJ Quality & Safety and BMJ Open Quality. Many QI projects, while possibly not suitable for peer-reviewed publication, can still be disseminated through abstracts, posters, or presentations at local rounds or at medical specialty or QI conferences (e.g., the Annual Scientific Assembly of the Canadian Association of Emergency Physicians and the Health Quality Transformation conference).

CONCLUSION

This article concludes our three-part QI primer for EM clinicians. In the first article, we discussed the work required to prepare for a QI project, including stakeholder engagement and the use of tools to understand the current state of the system. In the second article, we introduced the Model for Improvement to define an effective project and systematically test interventions through rapid-cycle testing. In this final article, we presented methods to evaluate and sustain a QI project, including run charts and their associated rules, the sustainability model, and various QI sustainability tools such as visual management, standard work processes, and huddles. Now that your sepsis team has successfully implemented useful changes in your department, it may be time to turn your attention to longer-term sustainability and consider starting another project to build on the momentum gained.

Acknowledgements

The authors would like to acknowledge Dr. Eddy Lang for his mentorship and support in the development of this series, and Ms. Carol Hilton for her review and improvement of the manuscript.

Competing interests

None declared.

SUPPLEMENTARY MATERIAL

To view supplementary material for this article, please visit https://doi.org/10.1017/cem.2018.380

REFERENCES

1. Calder LA, Forster A, Nelson M, Leclair J, Perry J, Vaillancourt C, et al. Adverse events among patients registered in high-acuity areas of the emergency department: a prospective cohort study. CJEM 2010;12(5):421-30. doi:10.1017/S1481803500012574
2. Stang AS, Wingert AS, Hartling L, Plint AC. Adverse events related to emergency department care: a systematic review. PLoS One 2013;8(9):e74214.
3. Kohn LT, Corrigan J, Donaldson MS. To err is human: building a safer health system. Washington, DC: National Academy Press; 2000.
4. Institute of Medicine (U.S.), Committee on Quality of Health Care in America. Crossing the quality chasm: a new health system for the 21st century. Washington, DC: National Academy Press; 2001.
5. Stelfox HT, Palmisani S, Scurlock C, Orav EJ, Bates DW. The “To Err is Human” report and the patient safety literature. Qual Saf Health Care 2006;15(3):174-8. doi:10.1136/qshc.2006.017947
6. Chartier LB, Cheng AHY, Stang AS, Vaillancourt S. Quality improvement primer part 1: preparing for a quality improvement project in the emergency department. CJEM 2018;20(1):104-11. doi:10.1017/cem.2017.361
7. Chartier LB, Stang AS, Vaillancourt S, Cheng AHY. Quality improvement primer part 2: executing a quality improvement project in the emergency department. CJEM 2017; epub ahead of print. doi:10.1017/cem.2017.393
8. Langley GJ. The improvement guide: a practical approach to enhancing organizational performance. 2nd ed. San Francisco: Jossey-Bass; 2009.
9. Perla RJ, Provost LP, Murray SK. The run chart: a simple analytical tool for learning from variation in healthcare processes. BMJ Qual Saf 2011;20(1):46-51. doi:10.1136/bmjqs.2009.037895
10. Provost LP, Murray SK. The health care data guide: learning from data for improvement. 1st ed. San Francisco: Jossey-Bass; 2011.
11. Benneyan JC, Lloyd RC, Plsek PE. Statistical process control as a tool for research and healthcare improvement. Qual Saf Health Care 2003;12(6):458-64. doi:10.1136/qhc.12.6.458
12. Anhøj J, Olesen AV. Run charts revisited: a simulation study of run chart rules for detection of non-random variation in health care processes. PLoS One 2014;9(11):e113825. doi:10.1371/journal.pone.0113825
13. Deming W. The new economics: for industry, government, education. Cambridge, MA: MIT Press; 1994.
14. Olmstead P. Distribution of sample arrangements for runs up and down. Ann Math Stat 1945;17:24-33.
15. Swed FS, Eisenhart C. Tables for testing randomness of grouping in a sequence of alternatives. Ann Math Stat 1943;14:66-87. doi:10.1214/aoms/1177731494
16. Cafazzo JA, St-Cyr O. From discovery to design: the evolution of human factors in healthcare. Healthc Q 2012;15(Spec No):24-9. doi:10.12927/hcq.2012.22845
17. Grimshaw JM, Eccles MP, Lavis JN, Hill SJ, Squires JE. Knowledge translation of research findings. Implement Sci 2012;7(1):50. doi:10.1186/1748-5908-7-50
18. Institute for Safe Medical Practices. Medication error prevention “toolbox”; 1999. Available at: https://www.ismp.org/newsletters/acutecare/articles/19990602.asp (accessed April 3, 2017).
19. Maher L, Gustafson D, Evans A. Sustainability model and guide; 2011. Available at: http://www.qihub.scot.nhs.uk/media/162236/sustainability_model.pdf (accessed April 3, 2017).
20. Ham C, Kipping R, McLeod H. Redesigning work processes in health care: lessons from the National Health Service. Milbank Q 2003;81(3):415-39. doi:10.1111/1468-0009.t01-3-00062
21. Beer M, Nohria N. Cracking the code of change. Harv Bus Rev 2000;78(3):133-41.
22. Buchanan D, Fitzgerald L, Ketley D, et al. No going back: a review of the literature on sustaining organizational change. Int J Manag Rev 2005;7(3):189-205. doi:10.1111/j.1468-2370.2005.00111.x
23. Institute for Healthcare Improvement (IHI). 5 Million Lives Campaign. Getting started kit: sustainability and spread. Cambridge, MA: IHI; 2008.
24. Hayes C. Highly Adoptable Improvement Model; 2015. Available at: http://www.highlyadoptableqi.com/index.html (accessed April 3, 2017).
25. National Health Service Scotland Quality Improvement Hub. The spread and sustainability of quality improvement in healthcare; 2015. Available at: http://www.qihub.scot.nhs.uk/media/835521/spread%20and%20sustainability%20study%20review%20(web).pdf (accessed April 3, 2017).
26. Batalden PB, Davidoff F. What is “quality improvement” and how can it transform healthcare? Qual Saf Health Care 2007;16(1):2-3. doi:10.1136/qshc.2006.022046
27. Kaplan HC, Brady PW, Dritz MC, et al. The influence of context on quality improvement success in health care: a systematic review of the literature. Milbank Q 2010;88(4):500-59. doi:10.1111/j.1468-0009.2010.00611.x
28. Batalden P, Splaine M. What will it take to lead the continual improvement and innovation of health care in the twenty-first century? Qual Manag Health Care 2002;11(1):45-54. doi:10.1097/00019514-200211010-00008
29. Kotter J. Leading change: why transformation efforts fail. Harv Bus Rev 1995;(March/April):57-68.
30. Kaplan HC, Provost LP, Froehle CM, Margolis PA. The Model for Understanding Success in Quality (MUSIQ): building a theory of context in healthcare quality improvement. BMJ Qual Saf 2012;21(1):13-20. doi:10.1136/bmjqs-2011-000010
31. Ogrinc GS, Headrick L; Joint Commission Resources Inc. Fundamentals of health care improvement: a guide to improving your patients’ care. Oak Brook Terrace: Joint Commission Resources; 2008.
32. Toussaint J. A management, leadership, and board road map to transforming care for patients. Front Health Serv Manage 2013;29(3):3-15. doi:10.1097/01974520-201301000-00002
33. Ballard DJ, Ogola G, Fleming NS, et al. The impact of standardized order sets on quality and financial outcomes. In: Henriksen K, Battles JB, Keyes MA, et al., eds. Advances in patient safety: new directions and alternative approaches (Vol. 2: Culture and redesign). Rockville: Agency for Healthcare Research and Quality; 2008, 257-69.
34. Alberta Innovates – Health Solutions. ARECCI Ethics Screening Tool; 2010. Available at: https://albertainnovates.ca/our-health-innovation-focus/a-project-ethics-community-consensus-initiative/arecci-ethics-guideline-and-screening-tools/ (accessed September 12, 2017).