
Preliminary Assessment of an Automated Surveillance System for Infection Control

Published online by Cambridge University Press: 02 January 2015

Marc-Oliver Wright*
Affiliation:
Infection Control and Hospital Epidemiology, University of Maryland Medical Center
Eli N. Perencevich
Affiliation:
Department of Epidemiology and Preventive Medicine, University of Maryland, and the Veterans Affairs Maryland Health Care System, Baltimore, Maryland
Christopher Novak
Affiliation:
Cereplex Inc., Gaithersburg, Maryland
Joan N. Hebden
Affiliation:
Infection Control and Hospital Epidemiology, University of Maryland Medical Center
Harold C. Standiford
Affiliation:
Infection Control and Hospital Epidemiology, University of Maryland Medical Center
Anthony D. Harris
Affiliation:
Department of Epidemiology and Preventive Medicine, University of Maryland, and the Veterans Affairs Maryland Health Care System, Baltimore, Maryland
*Corresponding author: Department of Infection Control and Hospital Epidemiology, University of Maryland Medical Center, 29 South Greene Street, Suite 400, Baltimore, MD 21201

Abstract

Background and Objective:

Rapid identification and investigation of potential outbreaks are key to limiting transmission in the healthcare setting. Manual review of laboratory results remains a cumbersome, time-consuming task for infection control practitioners (ICPs). Computer-automated techniques have shown promise for improving the efficiency and accuracy of surveillance. We examined the use of automated control charts, provided by an automated surveillance system, for the detection of potential outbreaks.

Setting:

A 656-bed academic medical center.

Methods:

We retrospectively reviewed 13 months (November 2001 through November 2002) of patient laboratory data, comparing an automated surveillance application with standard infection control practices. We evaluated positive predictive value, sensitivity, and the time required to investigate the alerts. An ICP created 75 control charts. A standardized case investigation form was developed to evaluate each alert for the likelihood of nosocomial transmission based on temporal and spatial overlap and culture results.
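The abstract does not describe the internals of the commercial application used, so the following is only a minimal sketch of the general technique it names: a count-based (c-chart) control chart that alerts when a period's isolate count exceeds the 3-sigma upper control limit. The chart type, function name, and data are illustrative assumptions, not the product's implementation.

```python
# Minimal sketch of 3-sigma control-chart alerting (illustrative only; the
# study used a commercial application whose internals are not described here).
from math import sqrt
from statistics import mean

def three_sigma_alerts(counts):
    """Return (period, count) pairs exceeding a c-chart's 3-sigma upper
    control limit, UCL = c_bar + 3 * sqrt(c_bar), where c_bar is the mean
    count per period (Poisson assumption)."""
    c_bar = mean(counts)
    ucl = c_bar + 3 * sqrt(c_bar)
    return [(period, c) for period, c in enumerate(counts) if c > ucl]

# Hypothetical weekly isolate counts for one organism on one unit:
# week 5's spike (9 isolates) fires above the limit of about 7.7.
print(three_sigma_alerts([2, 1, 3, 2, 2, 9, 1, 2]))  # -> [(5, 9)]
```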

Results:

The 75 control charts were created in 75 minutes, and 18 alerts fired above the 3-sigma level. These were independently reviewed by an ICP and an associate hospital epidemiologist. The review process required an average of 20 minutes per alert, and the kappa score between the reviewers was 0.82. Eleven of the 18 alerts were judged to be potential outbreaks, yielding a positive predictive value of 0.61. Routine surveillance identified only 5 of these 11 clusters during the same period.
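A short worked check of these figures follows. The positive predictive value comes directly from the abstract (11 potential outbreaks among 18 alerts), as does the 5-of-11 detection rate of routine surveillance; the two-reviewer agreement table is hypothetical, since only the summary kappa of 0.82 is reported, and is included solely to show how Cohen's kappa is computed.

```python
# Worked check of the reported figures. The 11-of-18 PPV and the 5-of-11
# routine-surveillance detection rate are from the abstract; the reviewer-
# agreement table below is hypothetical (only kappa = 0.82 is reported) and
# is used only to illustrate the kappa calculation.

def cohens_kappa(yes_yes, yes_no, no_yes, no_no):
    """Cohen's kappa for two raters making binary judgments."""
    n = yes_yes + yes_no + no_yes + no_no
    observed = (yes_yes + no_no) / n  # raw agreement
    # Chance agreement from each rater's marginal proportions
    expected = ((yes_yes + yes_no) * (yes_yes + no_yes)
                + (no_yes + no_no) * (yes_no + no_no)) / n**2
    return (observed - expected) / (1 - expected)

print(f"PPV = {11 / 18:.2f}")                  # -> PPV = 0.61
print(f"routine detection = {5 / 11:.2f}")     # -> routine detection = 0.45
# Hypothetical table: 11 agreed-positive, 6 agreed-negative, 1 disagreement
print(f"kappa = {cohens_kappa(11, 1, 0, 6):.2f}")  # -> kappa = 0.88 (study: 0.82)
```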

Conclusion:

Automated surveillance with user-definable control charts for cluster identification was more sensitive than routine methods and operated with high specificity and positive predictive value in a time-efficient manner.

Type: Original Articles
Copyright: © The Society for Healthcare Epidemiology of America 2004

