
Identifying Opportunities to Improve Accuracy of NHSN Reporting: Lessons Learned From State Health Department Validations.

Published online by Cambridge University Press:  02 November 2020

Suparna Bagchi, Centers for Disease Control and Prevention
Bonnie Norrick, Centers for Disease Control and Prevention
Nigel Lewis, CACI

Abstract


Background: State health departments (SHDs) have systematically studied the validity of healthcare-associated infection (HAI) surveillance data submitted by healthcare facilities in their jurisdictions to the Centers for Disease Control and Prevention’s (CDC’s) National Healthcare Safety Network (NHSN) for central-line–associated bloodstream infections (CLABSIs), catheter-associated urinary tract infections (CAUTIs), surgical site infections following colon and abdominal hysterectomy procedures (COLO and HYST SSIs), and laboratory-identified (LabID) methicillin-resistant Staphylococcus aureus (MRSA) and Clostridioides difficile infection (CDI) events. These studies are a key source of information about data quality and completeness, serving as an impetus and a guide for improving the caliber of NHSN’s HAI data.

Methods: We contacted SHD HAI coordinators in all states for an inventory of HAI validation studies. We used data from these studies to calculate the pooled mean sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of HAI case determinations. HAI case-reporting “error rates” were computed as the proportion of mismatches (underreports and overreports) among the medical records reviewed by SHDs, and reasons for misclassification were categorized.

Results: SHD validation studies varied by HAI type (range, 4 studies for MRSA LabID to 23 for CLABSI). Pooled mean sensitivity of HAI reporting ranged from 73.1% (COLO SSI) to 92.7% (CDI LabID). Pooled mean specificity and PPV exceeded 90% for all HAIs. LabID event validations demonstrated the lowest NPVs (58.8% for MRSA and 55.1% for CDI). Error rates of HAI reporting to NHSN ranged from 2.5% (HYST SSI) to 13.6% (MRSA LabID). Common errors identified during CLABSI and CAUTI validations were incorrect application of general NHSN definitions and of CLABSI- and CAUTI-specific definitions; incorrect secondary BSI attribution was the reason most frequently identified by CLABSI SHD validations (64.7%). Among operative procedure-associated misclassifications, inconsistent surveillance practices (66.6%), incorrect NHSN operative procedure category assignment (55.5%), and misapplication of general organ/space and/or site-specific infection criteria (44.4%) were the most common shortcomings. Among MRSA and CDI LabID validations, missed case finding due to failure to review candidate events and gaps in understanding the 14-day reporting rule of the LabID protocol were the predominant reasons for inaccurate reporting.

Conclusions: SHD HAI data validations identified specific targets for additional surveillance training, especially CLABSI determinations and application of the protocol rules for MDRO/CDI LabID case determinations. Further work is also needed to ensure that data sources beyond wound cultures are used for SSI determinations and that postdischarge SSI surveillance is more vigorous and comprehensive.
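The validation metrics named in the Methods can be sketched as follows. This is a minimal illustration using hypothetical counts from a single SHD medical-record review; the function name and all numbers are assumptions for demonstration, not data from the studies reviewed.

```python
def validation_metrics(tp, fp, fn, tn):
    """Compute validation metrics from a 2x2 comparison of facility-reported
    HAI status (test) against SHD reviewer determination (reference).

    tp: facility reported an HAI and the reviewer confirmed it
    fp: facility reported an HAI the reviewer did not confirm (overreport)
    fn: reviewer identified an HAI the facility did not report (underreport)
    tn: no HAI reported and none found on review
    """
    reviewed = tp + fp + fn + tn
    return {
        "sensitivity": tp / (tp + fn),       # reported fraction of true HAIs
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),               # confirmed fraction of reports
        "npv": tn / (tn + fn),
        # error rate: mismatches (over- + underreports) per record reviewed
        "error_rate": (fp + fn) / reviewed,
    }

# Hypothetical review of 200 records
metrics = validation_metrics(tp=38, fp=2, fn=10, tn=150)
print({k: round(v, 3) for k, v in metrics.items()})
```

Pooling across studies would then combine these counts (or study-level estimates) over all validations for a given HAI type.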

Funding: None

Disclosures: None

Type
Oral Presentations
Copyright
© 2020 by The Society for Healthcare Epidemiology of America. All rights reserved.