Background
Infection Prevention and Control (IPC) programs are responsible for surveillance and mitigation strategies to prevent healthcare-associated infections (HAIs), including central line-associated bloodstream infections (CLABSIs), catheter-associated urinary tract infections (CAUTIs), and surgical site infections (SSIs). Surveillance definitions for HAIs are strictly determined by the National Healthcare Safety Network (NHSN). Accurate HAI adjudication is critical to data integrity, given the wide-reaching impact of HAI metrics on public reputation and value-based purchasing (including hospital reimbursement and penalties); meaningful benchmarking against peer hospitals requires that surveillance definitions be applied reliably across institutions.
Though NHSN surveillance definitions are clearly delineated, several studies have demonstrated that application of these definitions varies. For example, 19 state health departments conducted CAUTI validation and found a classification error rate of 2.4% from 2015 to 2020, with 66% of errors representing underreporting and 34% overreporting.1 Similarly, a 2018 publication of the validation efforts of 23 state health departments showed a pooled CLABSI error rate of 4.4%.2 Marked underreporting of colon SSIs has also been documented: in a blinded, retrospective review of 30 Connecticut acute care hospitals in 2012, 34% of cases were not reported to NHSN.3 There are several potential causes of HAI misclassification, including variation in case-finding methods, inaccurate interpretation of surveillance definitions, overreliance on clinical judgment, and denominator inaccuracy (due to coding errors or data integrity issues).1–3
NHSN clearly states that failure to meet clinical definitions for infection should not result in non-reporting of cases that meet surveillance definitions, as “the application of these standardized criteria, and only these criteria, in a consistent manner allows confidence in aggregation and analysis of data.”4 It has been recommended that IPC programs “participate in external validation programs if available within their states and intermittently perform internal validation for agreement among infection preventionists (IPs) within the same program.”5 Surveillance notably comprises 25.4% of IP effort and is thus a key skill for IPs to master.6 However, there are few published data on the optimal structure and outcomes of internal validation programs.
We hypothesized that a systematic internal validation review would increase consistency in the application of NHSN surveillance definitions across the UCHealth metropolitan region.
Methods
The UCHealth metropolitan region comprises four hospitals and three ambulatory surgical centers. Ten full-time IPs review all cases of potential CLABSI, CAUTI, and SSI to adjudicate whether NHSN surveillance definitions for HAI are met. The IPs had a median of 4.4 years of experience working as hospital-based IPs (IQR 2.5 years), and 90% had earned CIC certification.
In October 2021, the UCHealth metropolitan region IP team structure transitioned from a subject matter expert (SME) model (in which each IP reviews all potential cases for a specific HAI) to a unit-based model (in which each IP reviews all HAIs on their assigned units in addition to performing SSI surveillance for one to three surgery types, such as hysterectomy or total knee arthroplasty). The goal of this organizational change was to create redundancy in IP skills. However, given that IPs began conducting surveillance on less-familiar HAIs, the team concurrently instituted weekly “Inter-rater Reliability (IRR)” meetings to review all potential cases of HAI as a group.
From August 17, 2022, through December 22, 2023, the IPC team convened one-hour IRR meetings via teleconferencing. Attendees included all UCHealth metropolitan region IPs, the IPC manager, and the IPC medical directors. During these meetings, each IP presented key details for all cases evaluated for CLABSI and CAUTI on their respective units and for their assigned SSIs. To maximize efficiency, an online form was created; once all required patient details were entered, the case was automatically uploaded to a line list, and a slide was populated for presentation (see Figure 1). During the meeting, IPs received feedback from the team, and a final case determination was reached by team consensus through open dialogue and medical director input. If interpretation remained discordant, a formal inquiry was sent to NHSN, and responses were collated for future reference.
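The study does not specify the tooling behind this automation; the following is a minimal sketch of one way such a form-to-slide pipeline could be implemented, assuming submissions are appended to a CSV line list and rendered with the python-pptx library. All field names and file paths here are hypothetical.

```python
# Sketch of the intake-to-presentation pipeline described above.
# Field names, file paths, and the use of python-pptx are assumptions;
# the study does not specify its tooling.
import csv
from pathlib import Path

from pptx import Presentation  # pip install python-pptx
from pptx.util import Inches, Pt

LINE_LIST = Path("hai_line_list.csv")
FIELDS = ["date", "unit", "metric", "summary", "ip_initials"]

def record_case(case: dict) -> None:
    """Append a submitted case to the shared line list."""
    new_file = not LINE_LIST.exists()
    with LINE_LIST.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(case)

def build_slides(deck_path: str = "irr_meeting.pptx") -> None:
    """Render one slide per line-list entry for the weekly IRR meeting."""
    deck = Presentation()
    layout = deck.slide_layouts[5]  # title-only layout
    with LINE_LIST.open(newline="") as f:
        for case in csv.DictReader(f):
            slide = deck.slides.add_slide(layout)
            slide.shapes.title.text = f"{case['metric']} – {case['unit']} ({case['date']})"
            box = slide.shapes.add_textbox(Inches(0.5), Inches(1.5), Inches(9), Inches(4))
            box.text_frame.text = case["summary"]
            box.text_frame.paragraphs[0].font.size = Pt(18)
    deck.save(deck_path)

record_case({"date": "2023-01-11", "unit": "MICU", "metric": "CLABSI",
             "summary": "Key case details entered via the online form...",
             "ip_initials": "AB"})
build_slides()
```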
The number of cases reviewed, case determinations changed, and formal inquiries to NHSN were tracked.
Results
During the study period, the IP team convened 71 weekly IRR meetings and reviewed 609 potential HAI cases; 56 were evaluated for CAUTI, 165 for CLABSI, and 388 for SSI (see Table 1). Of these, 486 (79.8%) were confirmed as HAIs. Based on collaborative team review, 41 cases (41/609, 6.7%) were changed from reportable to non-reportable—including 19 cases evaluated for CLABSI and 22 evaluated for SSI. Six cases (6/609, 1.0%) were changed from non-reportable to reportable—1 CAUTI case, 1 CLABSI case, and 4 SSI cases. Nineteen reportable cases (19/609, 3.1%) remained reportable but required a change in definition: the depth of infection was changed in 18 SSI cases, and one case of secondary BSI attribution was changed. A total of 29 formal inquiries were sent to NHSN to clarify surveillance definitions.
Abbreviations: HAI, healthcare-associated infection; IRR, inter-rater reliability; CAUTI, catheter-associated urinary tract infection; CLABSI, central line-associated bloodstream infection; MBI, mucosal barrier injury; LCBI, laboratory-confirmed bloodstream infection; BSI, bloodstream infection; SSI, surgical site infection; PATOS, present at time of surgery; NHSN, National Healthcare Safety Network.
* Confirmed Cases = Cases confirmed to represent HAI; includes cases identified as PATOS or meeting alternate NHSN exclusion criteria.
** Near Miss Cases = Cases reviewed that are suspected to be true infections but do not meet NHSN surveillance criteria.
*** Definition Change = Change in SSI depth of infection.
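As a quick check on the rates reported above, a few lines of Python reproduce the Results arithmetic; the counts are taken directly from the text.

```python
# Reproduces the rate arithmetic reported in Results (counts from the text).
total_reviewed = 609
changed_to_nonreportable = 41   # 19 CLABSI + 22 SSI
changed_to_reportable = 6       # 1 CAUTI + 1 CLABSI + 4 SSI
definition_changed = 19         # 18 SSI depth + 1 secondary BSI attribution

for label, n in [
    ("reportable -> non-reportable", changed_to_nonreportable),
    ("non-reportable -> reportable", changed_to_reportable),
    ("definition changed", definition_changed),
]:
    print(f"{label}: {n}/{total_reviewed} = {n / total_reviewed:.1%}")
# reportable -> non-reportable: 41/609 = 6.7%
# non-reportable -> reportable: 6/609 = 1.0%
# definition changed: 19/609 = 3.1%
```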
Discussion
This study demonstrates a novel team-based structure to internally validate HAI reporting across a multi-hospital healthcare system. Our experience suggests that this model is a practical tool with beneficial impacts in three domains:
Assuring data integrity and timeliness
Team-based review of HAI cases improves consistency in application of NHSN case definitions. This was highlighted in the setting of CLABSI, as adjudication was changed in 21 cases (21/165, 12.7%) after IRR. In addition, the dynamic team discussion highlights areas of uncertainty in interpretation of NHSN surveillance definitions; as a result, the team developed a systematic way to query NHSN and collate replies to ensure the knowledge gained is maintained for future use. Finally, the weekly IRR meeting structure creates accountability for IPs in providing real-time reporting of newly identified HAI cases, allowing trends to be quickly identified.
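The program tracked changed determinations rather than a formal agreement statistic. A team wishing to quantify consistency could compare each IP's initial call against the final team consensus using Cohen's kappa; the sketch below uses scikit-learn's cohen_kappa_score with purely hypothetical determinations.

```python
# Hypothetical sketch: quantifying agreement between an IP's initial
# determinations and the team consensus with Cohen's kappa.
# The study reports change counts, not a formal agreement statistic.
from sklearn.metrics import cohen_kappa_score

# 1 = reportable HAI, 0 = not reportable (illustrative values only)
initial_calls  = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
team_consensus = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]

kappa = cohen_kappa_score(initial_calls, team_consensus)
print(f"Cohen's kappa: {kappa:.2f}")  # ~0.47 here; 1.0 = perfect, 0 = chance
```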
Enhancing team performance and awareness
Within a unit-based IP model, individual team members often work independently, functioning as a representative of the IPC team on their respective units. This allows IPs to build strong, effective relationships with unit leadership and front-line staff; it also permits IPs to note rate trends across HAIs, allowing for more expeditious evaluation of root causes. Though each IP functions independently, the IRR meeting structure breaks down these silos, fostering opportunities for collaboration. For example, team members gain in-depth awareness of each metric’s global performance, share best practices, and collectively identify effective interventions.
Accelerating individual professional growth
This model fosters the development of individual IPs. During IRR meetings, IPs learn from each other's questions and clarifications, insights that would be missed if cases were reviewed with IPC medical directors only in ad hoc conversations. In addition, the format offers senior IPs the opportunity to provide peer mentoring to early-career IPs. Finally, the IRR structure strengthens IPs' knowledge of all HAI definitions, making them more well-rounded and better equipped for future leadership positions.
There are important challenges to the external application of our team-based internal validation model and meeting structure. First, this model functions best within a unit-based IPC team structure, as it requires engagement of all team members for dynamic discussion. Second, the IRR meeting requires an additional hour-long meeting per week, with associated preparation time. Because this internal review process represents an investment of time, it necessitates hospital support and an institutional commitment to data integrity. Finally, it is critical to note that imperfect inter-rater reliability is very likely present to an equal or greater extent across institutions. As such, hospital leadership buy-in is of particular importance, given that HAI standardized infection ratios (SIRs) may transiently increase as data integrity improves, which may be perceived as worsening performance when compared to peer hospitals with less accurate reporting. Ultimately, valid comparison and accurate benchmarking will require structures that ensure high IRR across institutions so long as case definitions require clinical judgment (in contrast to LabID event metrics, which require no case review).
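To make this benchmarking concern concrete: the SIR is the number of observed (reported) HAIs divided by the number predicted from NHSN's risk-adjusted national baseline, so improved case ascertainment alone raises the SIR even when true infection rates are unchanged. A brief illustration with hypothetical counts:

```python
# Illustrative only: improved case ascertainment raises the SIR
# (observed / predicted) with no change in true infection rates.
predicted = 10.0        # NHSN risk-adjusted predicted HAIs (hypothetical)
observed_before = 8     # cases reported under imperfect case finding
observed_after = 10     # cases reported after internal validation improves ascertainment

print(f"SIR before: {observed_before / predicted:.2f}")  # 0.80
print(f"SIR after:  {observed_after / predicted:.2f}")   # 1.00
```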
In summary, this model offers a unique, team-based approach to internal validation of HAIs, improving consistency in the application of NHSN case definitions across a multi-hospital system. In addition, we found that this structure offered opportunities to enhance team performance and accelerate individual IP professional growth.
Acknowledgments
We greatly appreciate the time and effort of the Infection Prevention and Control staff at the UCHealth Metropolitan Region hospitals who participated in this intervention and contributed to optimization of this workflow. We also acknowledge the UCHealth Quality & Safety Department for its emphasis on the value of data integrity.
Financial support
No financial support was provided relevant to this article.
Competing interests
All authors report no conflicts of interest relevant to this article.