Using Samples to Estimate the Sensitivity and Specificity of a Surveillance Process

Published online by Cambridge University Press:  02 January 2015

Emma S. McBryde*
Affiliation:
Victorian Infectious Diseases Service, Melbourne, Australia; Hospital Acquired Infection Surveillance System Coordinating Centre, Victoria, Australia; School of Mathematical Sciences, Queensland University of Technology, Brisbane, Australia
Heath Kelly
Affiliation:
Victorian Infectious Diseases Reference Laboratory, Melbourne, Australia
Caroline Marshall
Affiliation:
Victorian Infectious Diseases Service, Melbourne, Australia; Royal Melbourne Hospital, VICNISS Hospital Acquired Infection Surveillance System Coordinating Centre, and Centre for Clinical Research Excellence in Infectious Diseases, University of Melbourne, Melbourne, Australia
Philip L. Russo
Affiliation:
Hospital Acquired Infection Surveillance System Coordinating Centre, Victoria, Australia
D. L. Sean McElwain
Affiliation:
School of Mathematical Sciences, Queensland University of Technology, Brisbane, Australia
Anthony N. Pettitt
Affiliation:
School of Mathematical Sciences, Queensland University of Technology, Brisbane, Australia
*Corresponding author: Victorian Infectious Diseases Service, Royal Melbourne Hospital, Grattan St., Parkville, Melbourne, VIC, Australia ([email protected])

Abstract

Determining the sensitivity and specificity of a postoperative infection surveillance process is a difficult undertaking. Because postoperative infections are rare, there are vast numbers of negative results, and it is often not practical to assess them all. This study gives a methodological framework for estimating sensitivity and specificity by comparing the findings of only a small sample of the patients who test negative against the reference or "gold standard," rather than comparing the findings of all patients. It also provides a formula for deriving confidence intervals for these estimates and a guide to minimum sampling requirements.
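To make the sampling idea concrete, the sketch below (Python, not taken from the paper) works through one common way to handle a design in which all surveillance-positive patients are verified against the gold standard but only a random sample of surveillance-negative patients is verified: the sampled counts are scaled up by the sampling fraction before sensitivity and specificity are computed. The function names and worked counts are hypothetical, and the confidence intervals use a simple percentile bootstrap rather than the formula given in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)


def estimate_sens_spec(tp, fp, n_neg_total, fn_sampled, tn_sampled):
    """Point estimates of sensitivity and specificity.

    tp, fp       -- gold-standard results for all surveillance-positive patients
    n_neg_total  -- total number of surveillance-negative patients
    fn_sampled,
    tn_sampled   -- gold-standard results within the verified sample of negatives
    """
    f = (fn_sampled + tn_sampled) / n_neg_total   # sampling fraction
    fn_hat = fn_sampled / f                       # scale sampled counts up to
    tn_hat = tn_sampled / f                       # the full negative population
    sensitivity = tp / (tp + fn_hat)
    specificity = tn_hat / (fp + tn_hat)
    return sensitivity, specificity


def bootstrap_ci(tp, fp, n_neg_total, fn_sampled, tn_sampled,
                 n_boot=10_000, alpha=0.05):
    """Percentile-bootstrap confidence intervals for both estimates."""
    n_pos = tp + fp
    n_samp = fn_sampled + tn_sampled
    sens, spec = [], []
    for _ in range(n_boot):
        tp_b = rng.binomial(n_pos, tp / n_pos)            # resample verified positives
        fn_b = rng.binomial(n_samp, fn_sampled / n_samp)  # resample verified negatives
        s, p = estimate_sens_spec(tp_b, n_pos - tp_b, n_neg_total,
                                  fn_b, n_samp - fn_b)
        sens.append(s)
        spec.append(p)
    q = [100 * alpha / 2, 100 * (1 - alpha / 2)]
    return np.percentile(sens, q), np.percentile(spec, q)


# Hypothetical example: 18 true and 2 false positives among the verified
# positives; 500 of 5,000 surveillance-negative patients verified,
# yielding 3 false negatives and 497 true negatives.
print(estimate_sens_spec(18, 2, 5000, 3, 497))
print(bootstrap_ci(18, 2, 5000, 3, 497))
```

Scaling the sampled false- and true-negative counts by the sampling fraction is valid only if the verified negatives are a random sample of all surveillance-negative patients; the bootstrap here simply resamples the verified positives and verified negatives to reflect the uncertainty introduced by that sampling.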

Type: Concise Communications
Copyright © The Society for Healthcare Epidemiology of America 2008

