Healthcare workers (HCWs) have been confronted with several major infectious disease outbreaks in recent years, including severe acute respiratory syndrome (SARS), Ebola, Middle East respiratory syndrome (MERS), and most recently, coronavirus disease 2019 (COVID-19) caused by the novel coronavirus, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2).1 Frontline HCWs are particularly vulnerable to contracting these infectious diseases because of airborne, droplet, and/or direct contact transmission.2–9 The proper use of personal protective equipment (PPE), which is the final defense in the hierarchy of controls, is vital to preventing contamination and the transmission of disease.10
Studies have shown poor adherence to PPE protocols; on average, only ∼50% of HCWs follow the donning and doffing procedures correctly.11,12 In addition to regular training on the appropriate use of PPE, the Centers for Disease Control and Prevention (CDC) recommends the use of a trained observer or buddy to monitor each step of the donning and doffing process to improve adherence.13 The trained observer should visually confirm and document each step and provide immediate feedback if there is any deviation from the protocol.
The use of a trained onsite buddy is an effective method by which an additional staff member shares the responsibility for ensuring correct donning and doffing procedures, potentially improving HCW safety. However, it can be a resource-demanding task, especially during a pandemic, when PPE and staff shortages can be an issue. An onsite buddy must wear PPE while observing the doffing procedure, which must occur in a designated PPE removal area. During the COVID-19 pandemic, many hospital staff were furloughed.14 The loss of staff to furlough or sickness makes it challenging to consistently have staff available onsite to monitor PPE donning and doffing procedures.
In our previous study,15 we explored the idea of having an experienced remote buddy use video to carry out the PPE monitoring task, and we compared this approach with an onsite buddy. In 30 procedural scenarios comprising 195 steps and including 45 errors, the remote buddy had a positive predictive value of 98.3% for detecting errors and a negative predictive value of 100%.
Currently, artificial intelligence (AI) is being used in the fight against COVID-19 by assisting in outbreak detection, contact tracing, screening, triage evaluation, remote monitoring, and temperature measurement.16 New technology is being developed that uses AI to monitor donning and doffing processes via spatial recognition and a programmable decision-support system. AI software called Blue Mirror was recently developed by Fysight (Auckland, NZ) to run on a commercially available tablet with a camera, and it incorporates a fully touchless interaction process. The design of the software allows the tablet to be used as a mirror, with visual and audio guidance for the donning and doffing process. The AI provides real-time feedback on adherence to the PPE donning and doffing process, and the procedure can be viewed concurrently by a remote human buddy, who provides additional support and audio rectification feedback when required.
In this pilot simulation study, we assessed the performance of this human–AI machine collaboration system regarding the accuracy with which it monitored the PPE donning and doffing process, compared with an onsite buddy. Our secondary aim was to determine the degree of AI autonomy at the current stage of the technology's development.
Methods
In this simulation study, we predesigned 15 donning and 15 doffing procedural scenarios with errors embedded in some of the steps. There were 7 steps in the donning procedure and 6 steps in the doffing procedure, for a total of 195 steps. Each scenario contained a different number and type of errors across the steps (Appendices 1 and 2 online). Examples of the embedded errors include the following: hair was exposed after putting on a hat cover, hand hygiene duration was too short, and the face was touched accidentally during removal of the eye protection. One designated investigator performed all the donning and doffing procedures according to the predesigned scenarios, including the intentional errors. Another investigator was present to ensure that each step, including any predetermined error, was strictly followed.
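For illustration only, such a scenario design can be represented programmatically. The minimal Python sketch below uses hypothetical step names and data structures (not the actual study materials in Appendices 1 and 2) to show how scenarios with embedded errors can be encoded and how the 195 steps are tallied.

```python
# Illustrative sketch only: hypothetical representation of predesigned scenarios.
# Step names and structure are assumptions, not the study's actual materials.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Step:
    name: str
    embedded_error: Optional[str] = None  # None when the step is performed correctly

@dataclass
class Scenario:
    procedure: str                        # "donning" (7 steps) or "doffing" (6 steps)
    steps: List[Step] = field(default_factory=list)

# Example of one hypothetical doffing scenario containing a single embedded error
example = Scenario(
    procedure="doffing",
    steps=[
        Step("remove gloves"),
        Step("remove gown"),
        Step("hand hygiene", embedded_error="duration too short"),
        Step("remove eye protection"),
        Step("remove mask"),
        Step("hand hygiene"),
    ],
)

# 15 donning scenarios x 7 steps + 15 doffing scenarios x 6 steps
total_steps = 15 * 7 + 15 * 6
print(total_steps)  # 195
```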
The onsite buddy and the human–AI machine collaboration system monitored the 15 donning and 15 doffing procedures for errors. Each provided immediate verbal and visual feedback to the donning and doffing person on whether they could proceed to the next step or whether an error had been detected that needed to be rectified. To avoid interference between the onsite buddy and the human–AI machine system, the assessments were performed separately and sequentially. The designated donning and doffing person performed the exact same predesigned scenarios twice, first for the onsite buddy and then for the human–AI machine system.
The onsite buddy was in the room with the donning and doffing person. An independent observer was present to record the monitoring accuracy of the onsite buddy against the predesigned scenarios: whether each step was passed correctly, failed incorrectly, failed correctly, or passed incorrectly. The results were recorded directly into a standardized Excel spreadsheet (Microsoft, Redmond, WA).
The AI monitoring was performed via a tablet device (iPad Air, Apple, Cupertino, CA) installed with the Blue Mirror software. The tablet was placed in front of the donning and doffing person so that the full body could be visualized (Fig. 1). The donning and doffing person manually selected the donning or doffing program using the touchscreen on the tablet each time before commencing the procedure. The application allowed the tablet to function like a digital “mirror,” with visual and audio guidance for the donning and doffing process. The AI technology provided instant visual and verbal feedback on whether the PPE was donned or doffed correctly. If an error was detected, the donning and doffing person could not move on to the next step until the error was rectified. A built-in function of the software also allowed a remote human buddy to act as a support person for the AI technology. The remote buddy viewed the procedures from a separate room using a computer running the linked program and provided audio feedback to the donning and doffing person only if the AI technology made a mistake. An independent observer was present to record the monitoring accuracy of the human–AI machine system against the predesigned scenarios: whether the procedural step was passed correctly, whether the AI detected the error autonomously, whether the remote buddy assisted in identifying the error, whether both the AI and the remote buddy missed the error, and whether the AI or the remote buddy created a nonexistent error.
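To make the monitoring workflow described above concrete, the following is a minimal, hypothetical sketch of the step-gating logic in Python. The function names (ai_check_step, remote_buddy_review, give_feedback) are assumptions for illustration only; they are not the published internals of the Blue Mirror software.

```python
# Minimal illustrative sketch of the human-AI monitoring loop described above.
# All function names are hypothetical; Blue Mirror's implementation is not published.

def monitor_procedure(steps, ai_check_step, remote_buddy_review, give_feedback):
    """Gate progression at every step until no error remains."""
    for step in steps:
        while True:
            ai_says_error = ai_check_step(step)  # AI verdict from the camera feed
            # The remote buddy intervenes only when the AI verdict appears wrong;
            # otherwise they return None and the AI verdict stands.
            buddy_override = remote_buddy_review(step, ai_says_error)
            error_present = ai_says_error if buddy_override is None else buddy_override
            if error_present:
                give_feedback(step, "Error detected - please rectify before continuing")
            else:
                give_feedback(step, "Step completed correctly - proceed to the next step")
                break  # move on to the next step
```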
The buddies in this study were senior frontline HCWs who were on the COVID-19 intubation team. They all had significant simulation training and were highly experienced at providing observation feedback.
Statistical analysis
The accuracy of PPE monitoring by the onsite buddy and by the human–AI machine collaboration system was expressed as sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and overall accuracy. The degree of AI autonomy without assistance from the remote buddy was presented in the same way. We used κ statistics to examine the agreement between the onsite buddy and the AI technology. The κ value was interpreted as follows: <0.41 was poor, 0.41–0.60 was moderate, 0.61–0.80 was good, and 0.81–1.00 was very good.17 Data were analyzed using Stata version 13.0 software (StataCorp, College Station, TX). This prospective observational study was approved by the Melbourne Health Human Research Ethics Committee (no. QA 2020104).
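For readers who wish to reproduce these measures, the sketch below shows how they can be derived from the step-level 2×2 counts. It is an illustrative Python implementation of the standard definitions rather than the Stata workflow used in the study, and the mapping of the recorded observation categories onto the 2×2 cells is our assumption.

```python
# Illustrative calculation of the reported performance measures from 2x2 step-level counts.
# Assumed mapping of the recorded categories:
#   failed correctly   -> tp (error present, error detected)
#   passed incorrectly -> fn (error present, no error detected)
#   failed incorrectly -> fp (no error present, error detected)
#   passed correctly   -> tn (no error present, no error detected)

def performance(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    return sensitivity, specificity, ppv, npv, accuracy

def cohens_kappa(both_flag, only_first_flags, only_second_flags, neither_flags):
    """Cohen's kappa for agreement between two observers over the same steps."""
    n = both_flag + only_first_flags + only_second_flags + neither_flags
    p_observed = (both_flag + neither_flags) / n
    p_expected = ((both_flag + only_first_flags) * (both_flag + only_second_flags)
                  + (neither_flags + only_first_flags) * (neither_flags + only_second_flags)) / n**2
    return (p_observed - p_expected) / (1 - p_expected)
```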
Results
In total, 195 steps with 55 embedded errors were observed by the onsite buddy and by the human–AI machine collaboration system. The overall accuracy of the onsite buddy was 99%. The onsite buddy made only 2 mistakes, both missed errors during doffing: (1) the doffing person touched the front of the eye protection during removal, and (2) the doffing person touched the front of the mask during removal. Thus, the sensitivity was 96.4% and the NPV was 98.6% (Table 1).
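As a worked check using the standard definitions, these figures follow from the step-level counts: with 2 of 55 errors missed and no false alarms across the remaining 140 error-free steps, sensitivity = 53/55 = 96.4%, NPV = 140/(140 + 2) = 98.6%, and overall accuracy = 193/195 = 99%.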
Table 1 note. PPE, personal protective equipment; PPV, positive predictive value; NPV, negative predictive value. a Error was present. b No error was present. c Error was detected. d No error was detected.
The human–AI machine collaboration system had 100% overall accuracy in the PPE monitoring procedures, with very good agreement with the onsite buddy (κ coefficient, 0.97). The AI technology performed autonomously and accurately, without the remote human buddy’s rectification, in 173 (89%) of 195 steps. It required support from the remote buddy to identify 18 (32.7%) of the 55 total errors; it correctly identified 37 of 55 errors on its own, for a sensitivity of 67.3% (Table 1). The typical errors missed by the AI technology included hair exposure during donning and touching the contaminated part of the gown during doffing. The AI also generated 4 nonexistent errors, all of which were rectified by the remote buddy; for example, the AI technology indicated that hand hygiene was too short in duration in one step and that the doffing person touched the eye protection during removal in another step.
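As a worked check using the standard definitions, these values are consistent with the step-level counts: the remote buddy intervened on 18 missed errors plus 4 spurious errors, so the AI handled 195 − 22 = 173 steps (88.7%, reported as 89%) autonomously, and its standalone sensitivity was 37/55 = 67.3%.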
Discussion
In this millennium, several major outbreaks have resulted in large numbers of HCWs becoming infected.5 Lessons learned from previous outbreaks identified several systemic factors behind the high infection rate among HCWs, including poor institutional preparation for such events, inadequate HCW infection control education, and poor adherence to PPE protocols. Although HCWs might not be able to modify all of the variables in the hierarchy of controls to reduce their risk of infection,10 they can certainly improve flawed use of PPE, which has previously been implicated in the high death rate of HCWs in certain outbreaks.18 The CDC has stated that HCWs must train in donning and doffing of PPE and must demonstrate their competency through testing and assessment before caring for patients.19 The World Health Organization (WHO) guidelines also state that it is important to observe and check HCW adherence to correct PPE use.20
One issue experienced at many hospitals has been the inconsistent availability of an onsite buddy. Cognitive aid charts and mirrors are usually provided to help facilitate self-assessment of the donning and doffing procedures. Given the poor adherence to these protocols,11,12,21–24 it is not ideal to rely on self-assessment alone. Human factors, including fatigue and burnout among HCWs, may also affect safety performance.25 Providing spoken instructions during donning and doffing and using simulation training have been shown to lead to fewer errors and reduced contamination rates.26 The human–AI machine collaboration system can be used for these 2 tasks independently, without the presence of a remote human buddy.
The surge capacity for supplying PPE during the COVID-19 pandemic has been stretched and, in some places, exceeded. Suggested strategies for conserving PPE include extending use over multiple patient encounters, reusing PPE, or using homemade PPE as a last resort.27 The human–AI machine collaboration system has the potential to reduce PPE usage by freeing the onsite buddy and allowing a remote buddy to provide oversight support, which also reduces the onsite buddy’s risk of exposure and infection. Additionally, staff who have had to self-isolate or who have been furloughed can still be used as part of the workforce to assist in the PPE monitoring task. This approach is potentially useful in communities, countries, or remote locations where PPE supply or human resources are limited; however, access to a tablet device and a wireless network could be challenging in some settings.
Currently, no study in the literature has examined the accuracy of PPE monitoring. In our study, the human–AI machine collaboration system provided 100% accuracy for the PPE monitoring procedures. Its sensitivity was higher than that of the onsite buddy (100% vs 96.4%). Although this difference was not statistically significant, it is clinically relevant because any errors that go unnoticed by an onsite buddy during the donning and doffing sequence could potentially lead to contamination and disease transmission. The current version of the AI software independently and autonomously identified 67.3% of all errors in the donning and doffing process. The drawbacks of implementing the human–AI machine collaboration system include installation and maintenance costs and reliance on a stable internet connection. As the AI software advances, it may be able to function independently and accurately without remote buddy support, after which an internet connection would no longer be required.
This study is the first to investigate the use of AI in monitoring PPE. The major strength of this pilot study was that the outcome was clearly defined, that is, whether the intentional errors were identified; it was not subject to personal judgement. However, our study had several limitations. First, this was a simulation study with only 1 designated person performing the predesigned donning and doffing scenarios; further studies are required to examine its efficacy in clinical settings. Second, our onsite observation buddies were all highly skilled, which might have accounted for the perfect specificity and positive predictive results and the very high sensitivity and negative predictive results. There might also have been a Hawthorne effect from the buddies being hypervigilant. Lastly, we did not examine HCW attitudes toward and acceptance of this technology for PPE monitoring.
In conclusion, this pilot study showed that the human–AI machine collaboration system was accurate in monitoring PPE donning and doffing procedures in a simulated environment. Such a system may be able to serve as a substitute for, or an enhancement of, an onsite buddy. Ongoing advancement and refinement of the AI system is taking place to improve its performance and autonomy. Further studies are required to evaluate the ongoing efficacy and clinical application of this technology, especially to investigate whether it could have a significant impact on HCW infection rates.
Supplementary material
To view supplementary material for this article, please visit https://doi.org/10.1017/ice.2022.169
Acknowledgment
The AI software was supplied by Blue Mirror, Fysight (Auckland, NZ), at no cost during the study period.
Financial support
No financial support was provided relevant to this article.
Conflict of interest
R.S., P.B., D.W., K.L., R.K., P.M., and I.N. were consultants in the codevelopment of the AI PPE buddy and have minority stock “rights or options” in the Blue Mirror joint venture. R.C.d.A.N. is the codeveloper of the AI PPE buddy and has stock options or “rights” in the Blue Mirror joint venture.