
Comparing Bloodstream Infection Rates: The Effect of Indicator Specifications in the Evaluation of Processes and Indicators in Infection Control (EPIC) Study

Published online by Cambridge University Press: 21 June 2016

Barbara I. Braun* (Division of Research, Joint Commission on Accreditation of Healthcare Organizations, Oakbrook Terrace, Illinois)
Stephen B. Kritchevsky (J. Paul Sticht Center on Aging, Wake Forest University School of Medicine, Winston-Salem, North Carolina)
Linda Kusek (Division of Research, Joint Commission on Accreditation of Healthcare Organizations, Oakbrook Terrace, Illinois)
Edward S. Wong (Infectious Diseases Section, McGuire Veterans Affairs Medical Center and Medical College of Virginia, Richmond, Virginia)
Steven L. Solomon (Division of Healthcare Quality Promotion, Centers for Disease Control and Prevention, Atlanta, Georgia)
Lynn Steele (Division of Healthcare Quality Promotion, Centers for Disease Control and Prevention, Atlanta, Georgia)
Cheryl L. Richards (Division of Research, Joint Commission on Accreditation of Healthcare Organizations, Oakbrook Terrace, Illinois)
Robert P. Gaynes (Division of Healthcare Quality Promotion, Centers for Disease Control and Prevention, Atlanta, Georgia)
Bryan Simmons (Quality Management, Methodist Health Systems, Memphis, Tennessee)

*Corresponding author: Division of Research, Joint Commission on Accreditation of Healthcare Organizations, One Renaissance Boulevard, Oakbrook Terrace, IL 60181 ([email protected])

Abstract

Objective.

Bloodstream infection (BSI) rates are used as comparative clinical performance indicators; however, variations in definitions and data-collection approaches make it difficult to compare and interpret rates. To determine the extent to which variation in indicator specifications affected infection rates and hospital performance rankings, we compared absolute rates and relative rankings of hospitals across 5 BSI indicators.

Design.

Multicenter observational study. BSI rate specifications varied by data source (clinical data, administrative data, or both), scope (hospital-wide or intensive care unit-specific), and inclusion/exclusion criteria. Hospital-specific infection rates and rankings were calculated by processing each site's data according to 2 to 5 different specifications, as appropriate. A minimal sketch of this design appears below.
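To make the design concrete, here is a minimal sketch (with hypothetical field names and specifications, not the study's actual algorithm) of how the same set of BSI cases can yield different hospital rates depending on which specification filters the numerator and which denominator applies:

```python
from dataclasses import dataclass

@dataclass
class BsiCase:
    hospital: str
    icu_acquired: bool     # scope: intensive care unit vs. hospital-wide
    central_line: bool     # inclusion criterion for central-line indicators
    from_admin_data: bool  # data source: administrative vs. clinical surveillance

def rates_under_spec(cases, denominators, include):
    """Count only the cases a given specification includes, then divide
    each hospital's count by that specification's denominator."""
    counts = {h: 0 for h in denominators}
    for c in cases:
        if include(c):
            counts[c.hospital] += 1
    return {h: counts[h] / denominators[h] for h in denominators}

# Two hypothetical specifications applied to identical data:
icu_surveillance = lambda c: c.icu_acquired and c.central_line and not c.from_admin_data
hospital_wide_admin = lambda c: c.from_admin_data

cases = [BsiCase("A", True, True, False), BsiCase("A", False, False, True),
         BsiCase("B", True, True, False)]
line_days = {"A": 500, "B": 800}  # central-line days (rate expressed per 1000)
patients = {"A": 120, "B": 90}    # patients (rate expressed per 100)

icu_rates = {h: r * 1000 for h, r in rates_under_spec(cases, line_days, icu_surveillance).items()}
admin_rates = {h: r * 100 for h, r in rates_under_spec(cases, patients, hospital_wide_admin).items()}
print(icu_rates, admin_rates)  # same hospitals, different rates and rankings
```

The point of the sketch is that both the numerator filter and the denominator change with the specification, so the resulting rates are not directly comparable across indicators.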

Setting.

A total of 28 hospitals participating in the EPIC study.

Participants.

Hospitals submitted deidentified information about all patients with BSIs from January through September 1999.

Results.

Median BSI rates for 2 indicators based on intensive care unit surveillance data ranged from 2.23 to 2.91 BSIs per 1000 central-line days. In contrast, median rates for indicators based on administrative data varied from 0.046 to 7.03 BSIs per 100 patients. Hospital-specific rates and rankings varied substantially as different specifications were applied; depending on the specification, the rates of 8 of 10 hospitals fell both above and below the mean. Correlations of hospital rankings between indicator pairs were generally low (r_s = 0 to 0.45), except when both indicators were based on intensive care unit surveillance data (r_s = 0.83).
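For orientation on the ranking comparison, a hedged sketch using scipy's spearmanr (the rate values below are invented for illustration, not study data): each indicator ranks the hospitals, and the ranks are then correlated.

```python
from scipy.stats import spearmanr

# Hypothetical rates for 5 hospitals under two different specifications.
icu_rates   = [2.1, 2.9, 1.8, 3.4, 2.5]  # BSIs per 1000 central-line days
admin_rates = [0.3, 5.1, 0.9, 2.2, 7.0]  # BSIs per 100 patients

rho, p_value = spearmanr(icu_rates, admin_rates)
# r_s near 1 means the two specifications rank hospitals similarly;
# r_s near 0 means they produce different relative performance judgments.
print(f"Spearman r_s = {rho:.2f}")
```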

Conclusions.

Although BSI rates seem to be a logical indicator of clinical performance, the use of various indicator specifications can produce remarkably different judgments of absolute and relative performance for a given hospital. Recent national initiatives continue to mix methods for specifying BSI rates; this practice is likely to limit the usefulness of such information for comparing and improving performance.

Type: Original Articles
Copyright © The Society for Healthcare Epidemiology of America 2006

