Antimicrobial susceptibility patterns across US pediatric healthcare institutions are unknown. A national pooled pediatric antibiogram (1) identifies nationwide trends in antimicrobial resistance, (2) allows across-hospital benchmarking, and (3) provides guidance for empirical antimicrobial regimens for institutions unable to generate pediatric antibiograms.
Methods.
In January 2012, a request for submission of pediatric antibiograms between 2005 and 2011 was sent to 233 US hospitals. A summary antibiogram was compiled from participating institutions to generate proportions of antimicrobial susceptibility. Temporal and regional comparisons were evaluated using χ² tests and logistic regression, respectively.
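As a rough illustration only (not the authors' code), the temporal and regional comparisons described above could be sketched as follows, using entirely hypothetical susceptibility counts: a χ² test across pooled yearly counts and a logistic regression of susceptibility on region.

```python
# Illustrative sketch with hypothetical data; the study's actual analysis may differ.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.formula.api as smf

np.random.seed(0)

# Hypothetical yearly counts of susceptible vs. non-susceptible isolates (temporal comparison).
yearly = np.array([
    [120, 280],  # e.g., 2005
    [150, 260],  # e.g., 2008
    [190, 240],  # e.g., 2011
])
chi2, p, dof, _ = chi2_contingency(yearly)
print(f"Temporal comparison: chi2={chi2:.2f}, p={p:.3f}")

# Hypothetical isolate-level data for a regional comparison via logistic regression.
df = pd.DataFrame({
    "susceptible": np.random.binomial(1, 0.6, size=400),
    "region": np.random.choice(["Northeast", "Midwest", "South", "West"], size=400),
})
model = smf.logit("susceptible ~ C(region)", data=df).fit(disp=False)
print(model.summary())
```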
Results.
Of 200 institutions (85%) responding to our survey, 78 (39%) reported generating pediatric antibiograms, and 55 (71%) submitted antibiograms. Carbapenems had the highest activity against the majority of gram-negative organisms tested, but no antibiotic had more than 90% activity against Pseudomonas aeruginosa. Approximately 50% of all Staphylococcus aureus isolates were methicillin resistant. Western hospitals had significantly lower proportions of methicillin-resistant S. aureus than all other regions. Overall, 21% of S. aureus isolates were resistant to clindamycin. Among Enterococcus faecium isolates, the prevalence of susceptibility to ampicillin (25%) and vancomycin (45%) was low but improved over time (P < .01), and 8% of E. faecium isolates were resistant to linezolid. Southern hospitals reported a significantly higher prevalence of E. faecium susceptibility to ampicillin, vancomycin, and linezolid than the other 3 regions (P < .01).
Conclusions.
A pooled, pediatric antibiogram can identify nationwide antimicrobial resistance patterns for common pathogens and might serve as a useful tool for benchmarking resistance and informing national prescribing guidelines for children.
Most US states have enacted or are considering legislation mandating hospitals to publicly report hospital-acquired infection (HAI) rates. We conducted a survey of infection control professionals and found that state-legislated public reporting of HAIs is not associated with perceived improvements in infection prevention program process measures or HAI rates.
Little is known about whether those performing healthcare-associated infection (HAI) surveillance vary in their interpretations of HAI definitions developed by the Centers for Disease Control and Prevention's National Healthcare Safety Network (NHSN). Our primary objective was to characterize variations in these interpretations using clinical vignettes. We also describe predictors of variation in responses.
Design.
Cross-sectional study.
Setting.
United States.
Participants.
A sample of US-based members of the Society for Healthcare Epidemiology of America (SHEA) Research Network.
Methods.
Respondents assessed whether each of 6 clinical vignettes met criteria for an NHSN-defined HAI. Individual- and institutional-level data were also gathered.
Results.
Surveys were distributed to 143 SHEA Research Network members from 126 hospitals. In total, 113 responses were obtained, representing at least 61 unique hospitals (30 respondents did not identify a hospital); 79.2% (84 of 106 nonmissing responses) were infection preventionists, and 79.4% (81 of 102 nonmissing responses) worked at academic hospitals. For individual vignettes, the proportion of respondents who characterized the scenario correctly was as low as 27.3%. Combining all 6 vignettes, the mean percentage of correct responses was 61.1% (95% confidence interval, 57.7%–63.8%). The percentage of correct responses was associated with having a clinical background (ie, a nursing or physician degree) but not with hospital size or with infection prevention and control department characteristics.
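As a simple illustration of how a pooled proportion and its 95% confidence interval could be computed (a minimal binomial sketch assuming 113 respondents each answered all 6 vignettes; the study's reported interval may use a different method, eg, one that accounts for clustering by respondent):

```python
# Minimal sketch with assumed counts derived from the reported 61.1% figure;
# not intended to reproduce the study's exact interval.
from statsmodels.stats.proportion import proportion_confint

n_responses = 113 * 6                    # assumed total vignette responses
n_correct = round(0.611 * n_responses)   # assumed number of correct responses
low, high = proportion_confint(n_correct, n_responses, alpha=0.05, method="wilson")
print(f"Pooled correct rate: {n_correct / n_responses:.1%} "
      f"(95% CI {low:.1%}-{high:.1%})")
```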
Conclusions.
Substantial heterogeneity exists in the application of HAI definitions in this survey of infection preventionists and hospital epidemiologists. Our data suggest a need to better clarify these definitions, especially when comparing HAI rates across institutions.