Health care service delivery for Canada’s vulnerable older adult population occurs in a number of settings and involves diverse groups of health providers, professions, and services. When the health status and care needs of older persons (≥ 65 years of age) change, they can be transferred from one health care setting to another (e.g., from their residential facility to acute care settings). Care during transitions of older persons can be fragmented, delayed, not evidence informed, and unsafe (Anderson, Allan, & Finucane, Reference Anderson, Allan and Finucane2000; Coleman, Reference Coleman2003; Crilly, Chaboyer, & Wallis, Reference Crilly, Chaboyer and Wallis2006; Reid et al., Reference Reid, Cummings, Cooper, Abel, Bissell and Estabrooks2013; Riaz & Brown, Reference Riaz and Brown2019; Trahan, Spiers, & Cummings, Reference Trahan, Spiers and Cummings2016). Poor quality of care transitions between residential long-term care (LTC) facilities or community care settings and acute care settings is linked to increased length of stay in hospital, increased dissatisfaction among providers and patients, increased risk of adverse patient events, and decreased quality of health care (Callahan et al., Reference Callahan, Arling, Tu, Rosenman, Counsell and Stump2012; Coleman & Berenson, Reference Coleman and Berenson2004; Crilly et al., Reference Crilly, Chaboyer and Wallis2006; McCloskey, Reference McCloskey2011; Riaz & Brown, Reference Riaz and Brown2019; Scott, Reference Scott2010; Tisminetzky et al., Reference Tisminetzky, Gurwitz, Miozzo, Gore, Lessard and Yarzebski2019). Additionally, although there are established quality indicators for care delivery within facility-based care settings (e.g., Resident Assessment Instrument [RAI] indicators), whether these indicators are applicable and used for transitions remains unclear (Hutchinson et al., Reference Hutchinson, Milke, Maisey, Johnson, Squires and Teare2010).
A particular concern is that persons who rely on others during transitions, such as older persons with moderate to severe dementia, receive optimal patient-centered care (Banerjee, Reference Banerjee2007).
Health systems require valid and reliable measures of quality to monitor, improve, and maintain high standards of care delivery for frail older persons during care transitions. Clinicians, health care managers, and policy makers are responsible for ensuring that care delivery for older persons across health care settings is monitored and evaluated based on the best available standards. When quality indicators (QIs) are identified and reported in areas of care delivery with high potential for improvement, they can provide measures for quality of care and improved patient outcomes (Hibbard, Stockard, & Tusler, Reference Hibbard, Stockard and Tusler2005; Kraska, Krummenauer, & Geraedts, Reference Kraska, Krummenauer and Geraedts2016).
This study examined the state of established QIs for vulnerable older adults experiencing transition(s) among multiple care settings, which could be between: (1) continuing care and community settings (LTC/nursing homes; assisted or supportive living facilities that provide accommodation, meals, and personal care for those who are medically and physically stable; and independent living with or without home care support); (2) emergency or non-emergency transport via ambulance, hereafter referred to as emergency medical services (EMS); (3) emergency departments (EDs); and (4) hospital in-patient settings (see Figure 1 for settings included). Our aim was to develop and validate a ranked set of evidence-based QIs for evaluating quality of care provided during care transition, and our objectives were to:
1. Systematically review the current state of QI literature for care transitions experienced by older persons
2. Validate QIs for older persons’ care transitions through a Delphi process
3. Evaluate the feasibility of implementing the full set of QIs across care transitions
4. Translate findings into practice through an integrated knowledge translation approach
Methods
During Phase 1 we conducted a systematic scoping review, informed by Arksey and O’Malley’s framework, in which researchers select the research question, search related studies, select eligible studies, and synthesize and tabulate key information to derive a report of findings (Arksey & O’Malley, Reference Arksey and O’Malley2005). We used the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines to guide reporting of the review (Moher, Liberati, Tetzlaff, Altman, & PRISMA Group, Reference Moher, Liberati, Tetzlaff and Altman2009). A University Health Research Ethics Board (PRO00069167) provided ethics approval for Phase 2: The Delphi Process and steering committee feasibility review.
Inclusion and Exclusion Criteria
We defined QIs as indicators developed through a predetermined systematic process in which primary data collection and/or stakeholder involvement (Delphi process or expert panel) occurred in the identification or review of indicators (De Koning, Reference De Koning2007). We included all literature examining QIs applied in care settings where older persons receive care during transitions: residential care facilities (LTC/nursing homes, assisted living facilities, independent living with home care support), EMS, EDs, and hospital in-patient settings. We included all types of QIs (structure, process, and outcome). We placed no limitation on year of publication. We excluded literature examining QIs focused on (1) provision of care not within or directly leading to care transitions, (2) care delivery of a specific disease or condition not directly related to the transition process, and/or (3) individuals under the age of 65 (e.g., studies on maternal or child health). We included studies published in English only, as that was the only language shared among team members.
Search Strategy
An academic health sciences librarian assisted in developing the search strategy. Search terms included “quality indicator/standard of care/benchmarking/outcome measures”, “quality of health care/process assessment”, and “quality improvement/quality assurance”. Electronic databases searched included Cochrane Database of Systematic Reviews, Elton B. Stephens Company (EBSCO)host Cumulative Index to the Nursing and Allied Health (CINAHL) Plus, Institute for Scientific Information (ISI) Web of Science, Ovid Embase, Ovid MEDLINE®, and Scopus. Records were downloaded into Endnote™ and duplicates were removed. We actively sought grey literature in academic, government, and institutional Web sites that generated reports of QIs, but did not include theoretical articles, commentaries, or practice guidelines that did not include QIs. We used a previously pilot-tested, Microsoft Access electronic form for data screening and extraction (Tate et al., Reference Tate, Hewko, McLane, Baxter, Perry and Armijo-Olivo2019). See Appendix 1 for detailed search strategy.
Screening Procedures
Six research team members (K.T., S.L., R.L., F.C., G.G.C., B.H.R.) met to affirm inclusion and exclusion criteria. Following removal of duplicates, one of four partnered reviewers (K.T., S.L., R.L., F.C.) independently screened every abstract. Partnered reviewers met after review of an initial 200 abstracts to ensure consistent interpretation of the inclusion and exclusion criteria. Discrepancy meetings occurred throughout screening to compare results and ensure clarity of inclusion criteria. When reviewers could not reach consensus through discussion, the senior author (G.G.C.) made the final decision. One of four partnered reviewers (K.T., S.L., R.L., F.C.) independently screened each full text manuscript using similar procedures.
Data Extraction
The following seven data elements were extracted from each study: (1) study characteristics (e.g., year of publication and year[s] of data collection, health care setting, theoretical framework and objectives); (2) study design; (3) identified quality indicators; (4) methods for developing QIs and data source; (5) results; (6) study limitations; and (7) study conclusions. One of four reviewers (K.T., S.L., R.L., F.C.) independently extracted data from each included article, and then each extraction was verified by a second reviewer. We did not appraise study quality, as expert panelists would appraise all candidate QIs during the Delphi process; a study’s methodological quality could differ from the quality of its QIs when the study covered only part of, or more than, QI development.
Delphi Process for Evaluation
Before the Delphi process in Phase 2, team members reviewed and categorized indicators to avoid duplicate entries and clarify indicator parameters. To map indicators to the most relevant quality domain (Institute of Medicine [U.S.], 2001), six reviewers were paired, and then independently coded extracted indicators from each included study according to: care setting (sending continuing care or community setting [residential care facility, home living setting], transport 1, ED, hospital/in-patient, or other continuing care setting, and, if applicable, transport 2, receiving seniors’ facilities/home living setting) as seen in Figure 1; Donabedian framework domain (structure, process, outcome); and Institute of Medicine (IOM) Domains of Quality (safe, effective, patient-centred, timely, efficient, equitable). Discrepancy meetings between partnered reviewers were held after coding was completed to ensure agreement among reviewers.
Integrated Knowledge Approach
We invited experts via e-mail to join our expert panel to review coded QIs across care transitions through a Delphi process using online surveys. We searched for and approached potential expert panelists based on their roles as authors and practice experts from relevant literature, and through suggestions from research team members. The e-mail invitation letter included a link to a Google Form survey to record their willingness to participate. To keep track of both affirmed and declined responses, only the names and e-mail addresses were recorded. No other identifying information was collected. The expert panelist participation record was kept in a password-protected document accessible only by the local research team. We aimed to recruit at least 20 expert panelist members to ensure a diverse panel (Boulkedid, Abdoul, Loustau, Sibony, & Alberti, Reference Boulkedid, Abdoul, Loustau, Sibony and Alberti2011).
The Delphi process methods were adapted from Boulkedid et al. (Reference Boulkedid, Abdoul, Loustau, Sibony and Alberti2011). The adapted method had previously been used by a member of our research team (Schull et al., Reference Schull, Hatcher, Guttmann, Leaver, Vermeulen and Rowe2010). Study data were collected and managed using REDCap electronic data capture tools (Harris et al., Reference Harris, Taylor, Thielke, Payne, Gonzalez and Conde2009). We provided each expert panelist with a unique survey link and participant identifier. An invitation and three subsequent e-mail reminders were sent, approximately a week apart from each other, based on a schedule adapted from Dillman, Smyth, and Christian’s (Dillman, Reference Dillman, Christian and Smyth2014) method.
Round 1
Expert panelists were asked to rate each QI on five domains using five-point Likert scales: scientific soundness, validity, feasibility, relevance, and importance (Boulkedid et al., Reference Boulkedid, Abdoul, Loustau, Sibony and Alberti2011; Schull et al., Reference Schull, Guttman, Leaver, Vermeulen, Hatcher and Rowe2011). We provided information to panelists about candidate indicators from original sources including numerator (number of cases that met the QI criteria) and denominator (total number of cases subject to meeting QI criteria), source(s), applicable care setting, and method of QI development. Identified indicators were organized into five transition care settings (sending continuing care setting [residential care facility or home living setting], transport 1, ED, hospital, transport 2, receiving continuing care setting). We strategically assigned expert panelists across these five settings so that a variety of experts from different specialties (i.e., researchers, clinicians, decision makers, older adults), but with the most expertise in care delivery in that particular setting rated each indicator (i.e., researchers focusing on ED care and geriatricians with experience in ED were assigned to evaluate ED QIs). We used all responses (fully and partially completed) to classify each indicator as retained, borderline, or discarded. Participants added comments and rationales for each indicator rating to allow for qualitative feedback between rounds (Boulkedid et al., Reference Boulkedid, Abdoul, Loustau, Sibony and Alberti2011; Schull et al., Reference Schull, Hatcher, Guttmann, Leaver, Vermeulen and Rowe2010). Four to seven experts rated each indicator, and all responses were weighted equally and combined. Indicators with a median score ≥ 4 on soundness and at least one of the importance or relevance measures were retained. 
Indicators with scores between 3.0 and 3.9 on soundness and at least one of the importance or relevance measures were borderline and kept for repeat assessment (Boulkedid et al., Reference Boulkedid, Abdoul, Loustau, Sibony and Alberti2011; Schull et al., Reference Schull, Hatcher, Guttmann, Leaver, Vermeulen and Rowe2010). Any indicator with a score < 3.0 on soundness was discarded.
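The Round 1 decision rule can be sketched as a small function. This is an illustrative simplification, not the study's analysis code: the input structure, and how cases not explicitly covered by the published thresholds fall to "borderline", are assumptions.

```python
from statistics import median

def classify_round1(ratings):
    """Classify one candidate QI from Round 1 panel ratings.

    `ratings` maps a rated domain ("soundness", "importance", "relevance")
    to the list of 1-5 Likert scores from the 4-7 panelists assigned to it.
    Simplified reading of the published rule:
      retained   - median soundness >= 4 AND median >= 4 on importance or relevance
      discarded  - median soundness < 3.0
      borderline - everything in between (re-assessed in Round 2)
    """
    soundness = median(ratings["soundness"])
    best_other = max(median(ratings["importance"]), median(ratings["relevance"]))
    if soundness >= 4 and best_other >= 4:
        return "retained"
    if soundness < 3.0:
        return "discarded"
    return "borderline"

# Hypothetical example: five panelists rate one indicator.
print(classify_round1({
    "soundness": [4, 5, 4, 4, 5],
    "importance": [4, 4, 5, 3, 4],
    "relevance": [3, 4, 4, 4, 2],
}))  # retained
```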
Round 2
To maintain panelists’ continued engagement, we provided feedback between Rounds 1 and 2. Experts were given median scores and their initial individual score for each borderline indicator from Round 1. Lists of retained and discarded indicators from Round 1, including ID number and QI name, were sent to each panelist. Qualitative feedback from participants in Round 1 was used to clarify parameters of QIs. In Round 2, expert panelists were asked either to keep or discard each borderline indicator using the same information provided in Round 1. Experts were divided into two groups, each comprising a variety of different specialties (i.e., we aimed to have researchers, clinicians, and older adults with lived experiences, as well as representatives from various care settings distributed evenly between groups). Participants reviewed many of the same indicators from Round 1 in Round 2. Borderline indicators that received a vote of keep from at least half of the panelists were retained; remaining indicators were reclassified as discarded. Retained indicators were further assessed for feasibility and accessibility.
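The Round 2 retention rule ("keep" votes from at least half of the panelists) amounts to a majority threshold in which ties favour retention; a minimal sketch, with the vote representation assumed:

```python
def classify_round2(votes):
    """Round 2 rule for a borderline indicator: retained if at least
    half of the responding panelists vote to keep it.

    `votes` is a list of "keep" / "discard" strings, one per panelist.
    """
    keeps = sum(v == "keep" for v in votes)
    return "retained" if keeps >= len(votes) / 2 else "discarded"

# A 2-2 tie meets the "at least half" threshold, so the indicator is kept.
print(classify_round2(["keep", "discard", "keep", "discard"]))  # retained
```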
Feasibility Review
A steering committee completed a feasibility review of the final indicators from the Delphi rounds to determine whether the current Canadian administrative databases captured each indicator, and how easily such data could be retrieved. The Older Persons’ Transitions in Care (OPTIC) steering committee consisted of research team members (academics, data specialists, and health system decision makers) with substantive research, clinical, and administrative data expertise who were representative of various care settings. Prior to the in-person feasibility review, steering committee members searched for available national health systems databases (as well as databases in one Western Canadian province) and extracted data elements that could be used to measure indicators under review. Databases identified and reviewed were Canadian Institute for Health Information’s (CIHI) National Ambulatory Care Reporting System (NACRS) and Discharge Abstract Databases (DAD), Alberta Continuing Care Information System (ACCIS), Canadian Patient Experiences Reporting System (CPERS), Continuing Care Reporting System (CCRS), Pharmaceutical Information Network (PIN), and regional databases in Edmonton and Calgary. From these databases, we identified relevant individual data elements (e.g., reported 30-day readmission rates, new medication “flags” that could be used to identify if persons left hospital with new prescriptions) for each indicator, unit of analysis captured, and whether collection of elements was mandatory, optional, or conditionally mandatory.
The OPTIC research team categorized each indicator as either an (1) established QI currently measured with a data set, (2) indicator for which data elements are collected but not used, (3) indicator for which some applicable databases/elements exist but may or may not be collected, or (4) indicator for which no applicable database/elements are currently captured. These categorizations were independently completed by one team member, verified by another, and sent to the data expert for review of indicator data availability prior to the feasibility review. During the in-person feasibility review, the OPTIC steering committee reviewed and discussed individual indicators when it was unclear if and how current data in Canadian administrative databases could be used to measure them. The steering committee determined whether capture of retained indicators was feasible with existing data, required enhanced data collection, and/or was clinically valuable for improving care during transitions of older persons.
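The four-way categorization can be read as a cascade of data-availability checks. The sketch below is a hypothetical simplification; the Boolean flags stand in for the steering committee's judgments and are not fields from any actual database.

```python
def feasibility_category(measured_now, elements_collected, elements_exist):
    """Map an indicator to one of the four OPTIC feasibility categories:
      1 - established QI currently measured with a data set
      2 - data elements collected but not used
      3 - some applicable databases/elements exist but may not be collected
      4 - no applicable database/elements currently captured
    All three arguments are hypothetical yes/no judgments.
    """
    if measured_now:
        return 1
    if elements_collected:
        return 2
    if elements_exist:
        return 3
    return 4

# Example: elements exist and are collected, but the QI is not yet measured.
print(feasibility_category(measured_now=False, elements_collected=True, elements_exist=True))  # 2
```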
Results
Search Results
Our electronic database search yielded 10,487 unique records. Following abstract/title screening, 1,615 articles were retrieved for full text screening, of which a final 41 articles met inclusion criteria. Twelve other sources from grey literature searches met inclusion criteria, for a total of 53 articles. See Figure 2 for PRISMA Flow diagram of search and screening results.
From the 53 articles, 326 candidate QIs were identified for review through the Delphi process. After coding into applicable domains, the 326 QIs (n = 266 established and n = 60 developing) included 35 (10.7%) structure, 212 (65.0%) process, and 79 (24.2%) outcome indicators. QIs were categorized into timeliness (25%), effectiveness (24%), safety (21%), patient-centredness (19%), efficiency (10%), and equity (<1%). See Figure 3 for a visual display of review results by Donabedian framework, IOM quality domain, and care setting.
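The reported breakdowns are simple proportions of the 326 candidate QIs; for example, the Donabedian counts reproduce the percentages above:

```python
from collections import Counter

# Candidate QI counts by Donabedian domain, as reported in the review.
donabedian = Counter({"structure": 35, "process": 212, "outcome": 79})
total = sum(donabedian.values())  # 326 candidate QIs

for domain, n in donabedian.items():
    print(f"{domain}: {n} ({100 * n / total:.1f}%)")
# structure: 35 (10.7%)
# process: 212 (65.0%)
# outcome: 79 (24.2%)
```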
Delphi Process Results
Round 1
Thirty-three of 39 invited experts initially agreed to participate. Participants included researchers on transitional or geriatric care or gerontology, clinicians and decision makers with experience in quality management and/or related research, and older adults with experience as an informal caregiver or recipient of care during a care transition. Twenty-two experts completed the survey for Round 1, three partially completed the survey, five did not complete the survey, and three had to withdraw prior to Round 1 completion because of time constraints.
Of the 326 indicators included in Round 1, 80 were classified as “retained”, 92 were classified as “discarded” and 154 were classified as “borderline”. The 154 borderline indicators were included in the Round 2 survey, while the 80 retained indicators were moved forward for feasibility review by the steering committee. Although no clear patterns of response emerged based on expert specialty, the majority of indicators from Round 1 were discarded based on lack of “clinical importance or relevance”. Specifically, participants felt that some quality indicators were not relevant in Canadian contexts, supported by a Delphi panelist stating, “This is only relevant to UK or Australia ED contexts” in reference to the QI, “Proportion of patients re-attending the ED seen by a more senior member of the ED medical staff (Middle Grade or Consultant)”, and another Delphi participant commenting on the QI “Availability of ED observation beds”:
[Availability of ED observation beds] is very different in different health care systems – in the United States observation beds are often a means to address billing for ED services
Other experts felt that some indicators were not clinically meaningful (e.g., length of stay [LOS] in acute care services).
LOS – hard to determine what is appropriate since this is determined by complexity of the patient. To get a better understanding of transitions and quality of care and patient flow, it is critical to look at unnecessary LOS in acute care (aka Alternate level of care – ALC: patients who are in an acute care bed who no longer need the intensity of care provided by that unit).
Or they may have felt that some indicators were no longer important based on more current best practices (e.g., proportion of LTC residents who experienced an unintentional discontinuation of their statins upon returning to their LTC residence after an acute-care admission).
[Statins] are often not indicated or no longer effective. Not sure why we would pick Statins to gauge “unintentional discontinuation”.
Round 2
Of 22 experts who completed Round 1 surveys, 19 participated in Round 2. A total of 154 borderline indicators were split into two different surveys of 77 indicators each to ensure survey completion. Of the borderline group of indicators, 100 additional quality indicators were retained.
After both rounds, a total of 180 indicators was retained, while 146 were discarded. Retained indicators generally covered a similar range of transition settings, Donabedian framework types, IOM domains of quality, and care settings compared with the initial identified indicators. However, notable changes among retained indicators included fewer indicators that spanned multiple settings, and fewer indicators specific to transitions and palliative care. Qualitative feedback was not solicited for this round, as the intent was to provide feedback and clarify QI parameters (if possible) for Delphi panelists between rounds (Boulkedid et al., Reference Boulkedid, Abdoul, Loustau, Sibony and Alberti2011; Schull et al., Reference Schull, Hatcher, Guttmann, Leaver, Vermeulen and Rowe2010). See Figure 4 for Delphi process classification results.
Feasibility Review
Following the OPTIC steering committee’s review of the 180 retained QIs for feasibility, 7 indicators were feasible based on current use by the CIHI, 31 additional indicators were judged feasible upon review and retained, and 142 indicators were deemed not feasible. Indicators were not feasible if (1) individual chart review was required to ensure data availability (n = 46), (2) procedures described in the indicator were not currently being performed (n = 6), (3) the indicator was not known to be documented (n = 17), (4) further indicator clarification was required in order to reasonably capture the indicator within current data platforms (n = 8), and/or (5) the indicator lacked clinical value or relevance based on current Canadian information systems (e.g., for the indicator “time from first contact with emergency and urgent care systems [EUCS] service to definitive care”, more targeted measures for specific conditions could be tracked and used more meaningfully than a general indicator) (n = 7).
The final set of 38 feasible indicators in 21 articles (see Appendix 2) included the following transition care settings: ED (n = 18), seniors’ facilities (n = 4), transport (n = 1), hospital (n = 5), palliative care (n = 7), and multiple settings (n = 3). See Figure 5 for results of the feasibility review by Donabedian framework, IOM quality domain, and care setting, and Table 1 for characteristics of included and feasible QIs.
Note. Sources for included Quality Indicators can be seen in Appendix 2. IOM = Institute of Medicine; LTC = long-term care; A&E = accident & emergency department; CTAS = Canadian Triage Acuity Scale.
The steering committee identified knowledge gaps during their deliberations for the feasibility review. These include lack of standardized QI development applied in practice, no feasible indicators related to equity (e.g., age, sex/gender, race), a paucity of appropriate assessments (or documentation of assessments) of older persons across settings, and little to no screening done for baseline function, delirium, dementia, or cognitive impairment. Many proposed indicators require individual chart review.
Discussion
Using a robust mixed-method design and an integrated knowledge translation approach, this study identified 326 QIs cited in the literature and explored the feasibility of their reporting using standard administrative health databases. After an expert panel review, only 38 QIs were feasible to capture with existing databases and documentation practices within the Canadian context. Most feasible indicators related to acute care settings, were outcome or process indicators, and aligned with the IOM quality domain of effectiveness. Few available and feasible indicators were identified from EMS transport and seniors’ residential care settings, structure indicators, or IOM domains of patient-centredness and equity.
Of the QIs identified in this review, many can be used to monitor and improve transitions to and from EDs and in-patient settings, particularly pertaining to timeliness and safety in the process of care delivery. Target wait times from ED arrival to disposition for older adults are often not met and when older adults are hospitalized, they are at high risk of experiencing adverse events such as medication-related errors and in-hospital death (Cummings et al., Reference Cummings, McLane, Reid, Tate, Cooper and Rowe2020; Riaz & Brown, Reference Riaz and Brown2019; Tisminetzky et al., Reference Tisminetzky, Gurwitz, Miozzo, Gore, Lessard and Yarzebski2019). Although many older patients are discharged back to the community, they experience high rates of repeat ED visits and unplanned hospitalizations largely attributed to unresolved problems and limited discharge planning (Ahn, Hussein, Mahmood, & Smith, Reference Ahn, Hussein, Mahmood and Smith2020; Brennan, Chan, Killeen, & Castillo, Reference Brennan, Chan, Killeen and Castillo2015; Doupe et al., Reference Doupe, Palatnick, Day, Chateau, Soodeen and Burchill2012). Identified QIs, although not comprehensive, offer an initial framework to build a suite of QIs for various transitions for older adults. QIs discarded during feasibility review could be re-evaluated as electronic health records evolve, to determine if their capture could be feasible by adding or mandating data elements. Further, QIs discarded based on relevance to Canadian contexts could be reviewed to determine whether they could be clinically important if modified and tested here.
The lack of feasible indicators outside of acute care settings is concerning. Issues that occur during the onset of transfer, such as incomplete or missing data on resident condition and goals of care, can negatively influence care throughout the transition process (Griffiths, Morphet, Innes, Crawford, & Williams, Reference Griffiths, Morphet, Innes, Crawford and Williams2014). Although data are available for care delivery within continuing care settings (such as RAI-Minimum Data Set [MDS] 2.0 nursing home data), (Estabrooks, Knopp-Sihota, & Norton, Reference Estabrooks, Knopp-Sihota and Norton2013) we found a lack of rigorously developed indicators for processes leading up to a decision to transfer and for the initial patient transfer process from continuing care settings. Despite existing research regarding trigger events leading to transfer to acute care services for older persons, only one feasible QI related to a trigger event (falls) was identified and it was only captured as an element of LTC admission, not of transfer from continuing care to acute care services (Cummings et al., Reference Cummings, McLane, Reid, Tate, Cooper and Rowe2020; Dwyer, Stoelwinder, Gabbe, & Lowthian, Reference Dwyer, Stoelwinder, Gabbe and Lowthian2015). Other QI reviews on care delivery for older adult populations report that most indicators focus on examinations and treatment for a specific disease, although limited measures are available to monitor safety and quality concerns where care services intersect (Joling et al., Reference Joling, van Eenoo, Vetrano, Smaardijk, Declercq and Onder2018; Laugaland, Aase, & Barach, Reference Laugaland, Aase and Barach2011). Our results confirm the scarcity of available, feasible indicators related to transition onset. 
These types of indicators are integral in elucidating early concerns in transitions, determining a reference point of patient condition and context influencing perceived quality of the transition, and identifying and evaluating potentially avoidable transitions.
Our review highlights that despite guidelines being available for standardized QI development, validation and prioritization of many QIs do not meet standards of rigor (Kötter, Blozik, & Scherer, Reference Kötter, Blozik and Scherer2012). Many QIs were validated through consensus and lacked reported empirical testing; therefore, they still require better reporting on their development methods, pilot testing, operationalization with properly developed numerators and denominators (where applicable), and evaluation through more robust quantitative and mixed-methods designs (Kötter et al., Reference Kötter, Blozik and Scherer2012; Terrell et al., Reference Terrell, Hustey, Hwang, Gerson, Wenger and Miller2009; Wakai et al., Reference Wakai, O’Sullivan, Staunton, Walsh, Hickey and Plunkett2013). Unfortunately, QIs have been used in applied research or practice without the preceding research necessary to ensure validity and utility of these measures after their initial identification (Mansoor & Al-Kindi, Reference Mansoor and Al-Kindi2017; Saver et al., Reference Saver, Martin, Adler, Candib, Deligiannidis and Golding2015). Moreover, some QIs (e.g., thresholds for certain types of screening related to cancer, diabetes, and dementia, as well as QIs for prescribing practices for diabetes) that are currently being used in hospital settings and are tied to financial incentives, are selected because of their measurement ease and availability rather than because of their evidence base or representation as true markers of care quality (Saver et al., Reference Saver, Martin, Adler, Candib, Deligiannidis and Golding2015). 
Even among QI sets considered to be of high quality (interRAI-Home Care QIs, Agency for Healthcare Research and Quality prevention QI sets, and Assessing Care of Vulnerable Elders [ACOVE]-3 indicator sets), only ACOVE-3 indicators have scored high enough for methodological quality based on “scientific evidence” (Burkett, Martin-Khan, & Gray, Reference Burkett, Martin-Khan and Gray2017; De Koning, Reference De Koning2007; Joling et al., Reference Joling, van Eenoo, Vetrano, Smaardijk, Declercq and Onder2018; Wenger et al., Reference Wenger, Roth, Shekelle, Amin, Bedsine and Blazer2007). Further study will ensure that QIs for older persons’ care transitions meet established standards of development, and will determine resources required to capture data to measure QIs (van Teijlingen & Hundley, Reference van Teijlingen and Hundley2002).
Our findings suggest that little to no systematic screening for baseline function, delirium, dementia, or cognitive impairment is occurring and feasibly captured as older persons transition through acute care settings (Cummings et al., Reference Cummings, McLane, Reid, Tate, Cooper and Rowe2020). Some care activities may be performed, but are not documented, some are documented but are not easy to capture, and some may not be performed at all in current care settings.
Tracking of current available indicators relies primarily on chart review, potentially from multiple care settings. Having standardized documentation that prompts certain assessments or activities to be completed (vs. solely free-text charting) offers a robust opportunity to improve both care provided and continuity in care (Hustey & Palmer, Reference Hustey and Palmer2010; Terrell et al., Reference Terrell, Brizendine, Bean, Giles, Davidson and Evers2005; Zafirau, Snyder, Hazelett, Bansal, & McMahon, Reference Zafirau, Snyder, Hazelett, Bansal and McMahon2012). Antiquated and fragmented electronic tracking systems need to be consolidated and advanced to allow health care decision makers to better evaluate and improve older persons’ care during transitions, in recognition of their distinct care needs (Allen, Hutchinson, Brown, & Livingston, Reference Allen, Hutchinson, Brown and Livingston2014). Standardized electronic documentation (e.g., drop-down menus, checklists) (McLane et al., Reference McLane, Tate, Reid, Rowe, Estabrooks and Cummings2022) also needs to be completed across care settings to maximize benefits of using large clinical and administrative databases efficiently. Standardized electronic documentation allows for reliable, feasible tracking, and enhances the quality and completeness of the data tracked (Vuokko, Mäkelä-Bengs, Hyppönen, Lindqvist, & Doupi, Reference Vuokko, Mäkelä-Bengs, Hyppönen, Lindqvist and Doupi2017). Provincial policies, clinical guidelines, and practice standards should provide direction and governance related to data specifications and documentation practices that will allow for effective data integration across care settings and regions.
The electronic capture of valid and reliable data can be used for secondary purposes, such as creation of QI dashboards for audit-feedback targeted at improving care for older persons (Lloyd, Reference Lloyd2017; Vuokko et al., Reference Vuokko, Mäkelä-Bengs, Hyppönen, Lindqvist and Doupi2017). With a standardized electronic data platform, related QIs can be captured together and thereby support display of QI information with statistical interpretations for knowledge users (Schall et al., Reference Schall, Cullen, Matthews, Pennathur, Chen and Burrell2017). This is a necessary step to incorporate concepts of statistical process control (using statistics to monitor and improve quality), health informatics, and meaningful use of indicators in health care systems to consider context and missing data to drive change (Lloyd, Reference Lloyd2017; Office of the National Coordinator for Health Information Technology, 2015; Spath, Reference Spath2013; Tashobya et al., Reference Tashobya, Dubourg, Ssengooba, Speybroeck, Macq and Criel2016). Ensuring data completeness has the potential to reduce the amount of superfluous data being captured and thereby reduce the resources needed to retrieve such data (Arthofer & Girardi, Reference Arthofer and Girardi2017).
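A minimal sketch of the statistical process control concept mentioned above: a Shewhart p-chart distinguishes common-cause variation in a QI from special-cause signals that warrant investigation. The monthly audit counts below are invented for illustration, and the indicator (documented cognitive screening at transition) is a hypothetical example.

```python
import math

# Hypothetical monthly audits: transitions sampled and the number with a
# documented cognitive screen (illustrative values only).
sampled = [40, 38, 42, 41, 39, 40, 43, 40]
screened = [28, 26, 30, 29, 25, 27, 18, 29]

# Centre line: overall proportion of transitions meeting the indicator.
p_bar = sum(screened) / sum(sampled)

# Shewhart p-chart: 3-sigma control limits vary with each subgroup size.
for month, (n, x) in enumerate(zip(sampled, screened), start=1):
    p = x / n
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)
    lcl = max(0.0, p_bar - 3 * sigma)
    ucl = min(1.0, p_bar + 3 * sigma)
    flag = ("special-cause signal" if (p < lcl or p > ucl)
            else "common-cause variation")
    print(f"Month {month}: p={p:.2f} limits=({lcl:.2f}, {ucl:.2f}) -> {flag}")
```

On a dashboard fed by standardized electronic documentation, a month falling below the lower control limit would prompt audit-feedback rather than being lost in free-text charts.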
No feasible equity indicators were identified that clearly compared care received by older persons to care received by the general population or by older persons living in their homes. However, risk-adjusted QIs can statistically account for the influence of variables such as age, sex, and chronic conditions on the values and subsequent interpretation of QIs (Joling et al., Reference Joling, van Eenoo, Vetrano, Smaardijk, Declercq and Onder2018). Unfortunately, many QIs identified in this study, and in another review of QIs in older persons’ community care, are neither risk adjusted nor accompanied by strategies for risk adjustment in published reports (Joling et al., Reference Joling, van Eenoo, Vetrano, Smaardijk, Declercq and Onder2018). Having almost no information on how care is provided for older persons compared with other populations is alarming, as older persons are identified as one of the most disadvantaged and vulnerable patient groups (Johnstone & Kanitsaki, Reference Johnstone and Kanitsaki2008). It is imperative that future research related to care transitions focus on development and validation of feasible equity indicators with parameters that include comparators by age (Williams & Mohammed, Reference Williams and Mohammed2009). A minimum set of essential, cross-setting transition QIs is needed, and should be rigorously developed, validated, and evaluated using available guidelines.
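One common risk-adjustment strategy is indirect standardization: observed events are compared with the number expected given a facility's case mix and reference rates. The sketch below is illustrative only; the indicator (30-day return to acute care after a transition), the age strata, and all rates are assumptions invented for the example.

```python
# Hypothetical age-stratum reference rates for 30-day return to acute care
# after a transition (illustrative values only).
reference_rates = {"65-74": 0.08, "75-84": 0.12, "85+": 0.20}

# One facility's transitions: (age stratum, event occurred?)
facility = [("65-74", False), ("65-74", True), ("75-84", False),
            ("75-84", True), ("85+", True), ("85+", True),
            ("85+", False), ("75-84", False)]

observed = sum(event for _, event in facility)
expected = sum(reference_rates[stratum] for stratum, _ in facility)

# O/E ratio > 1 suggests more events than the case mix predicts, so raw
# rates are not compared directly across facilities with different case mixes.
oe_ratio = observed / expected
print(f"Observed={observed}, Expected={expected:.2f}, O/E={oe_ratio:.2f}")
```

Reporting an O/E ratio (or the equivalent risk-adjusted rate) rather than a crude rate prevents a facility caring for older, frailer residents from appearing to deliver worse care simply because of its case mix.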
Limitations and Strengths
The systematic review component of this study may be limited by publication and selection bias. Key weaknesses in QIs for transitions were related to validation, empirical testing, and reporting of their development. Difficulties emerged when seeking knowledgeable experts in both older persons’ transitions in care and QIs. Many potential panelists were acknowledged as experts in older persons’ care but were unfamiliar with what constituted rigor in QI development, despite criteria being described and available on the online surveys. This study only examined feasibility related to data capture of QIs in Canadian contexts, and our findings may not be transferable to other regions in which health policy, health care delivery systems, and health informatics systems differ.
Strengths of our study included systematic selection of indicators by trained and independent research staff, and the diversity and number of experts included in our Delphi process and steering committee for feasibility review. A comprehensive search strategy was used to mitigate publication bias and to avoid selection bias. Efforts to maintain rigor were evident through the individual coding, extraction, and consensus methods used in the Delphi process and feasibility review. Diversity in both the expert panel and steering committee reduced the risk that any one discipline or setting would dominate, allowing for representation from stakeholders across the continuum of care.
Conclusion
Although numerous QIs have been developed and reported, the number of feasible QIs for older persons’ transitions in care is distressingly small. QIs that do exist for older persons’ transitions in care are primarily for acute care settings, and almost none exist for tracking transitions across settings. A set of cross-setting transition QIs is needed, and should be developed, validated, and properly operationalized using available guidelines. Measurement and documentation practices need to be improved to increase the feasibility of capturing QIs, rather than settling for QIs that conform to current poor reporting practices. Future QI development should focus on standardized electronic reporting systems to better track data across settings. Each setting involved in care transitions should be held accountable for improving the quality of care experienced by older persons during transitions.
Acknowledgements
We acknowledge the contributions of research assistants Rory Lepage and Francisca Claveria, who participated in abstract and full-text screening and initial indicator coding, and Stephanie Couperthwaite, who assisted in the survey development process and REDCap survey administration. Finally, we thank all the Delphi expert panelists who gave permission to be named: Tammy Hopper, James L. Silvius, Navjot Virk, Ingrid Crowther, Karen Fruetel, Deniz Cetin-Sahin, Isabelle Vedel, Tammy Damberger, Machelle Wilchesky, Michael J Bullard, Jenny Basran, Barbara Liu, John Muscedere, Erika Dempsey, Angela Gulay, Douglas Faulder, Cliff Mitchell, Alison Hutchinson, and Denise S. Cloutier.
Author Contributions
All authors participated in the feasibility review process and interpretation of phase 2 study findings, participated on the OPTIC Steering Committee, and reviewed and contributed to draft manuscript versions. Authors K.T., S.L., J.H.L., R.C.R., G.G.C., and G.E.C. were involved in categorizing quality indicators (QIs) and interpretation of study findings for phase 1. K.T., S.L., B.H.R., R.C.R., and G.C.C. contributed to clarification of inclusion and exclusion criteria for phase 1. Author R.E.B. drafted preliminary results of the Delphi process. Author J.B. provided specific content expertise on administrative databases in the feasibility review and contributed to the interpretation of phase 2 findings. Authors B.H.R., J.H.L., G.E.C., C.A.E., and G.G.C. provided expertise regarding clinical practice and clinical importance of QIs, and contributed to the interpretation of phase 2 findings. Author K.T. drafted the initial manuscript with S.L., and G.G.C., as senior author, reviewed and edited all versions of the manuscript.
Funding Statement
This project titled “Development of Quality Indicators for Older Persons’ Transitions across Care Settings: A Systematic Review and Delphi Process” (G.G. Cummings as nominated principal investigator) was funded by the Canadian Institutes of Health Research. The first author (K.T.) was funded by the Canadian Frailty Network for Phase 1 of this study. The senior author on this paper (G.G.C.) was supported by the University of Alberta Centennial Professorship during the time of this study. C.A.E. is supported through a Tier 1 Canada Research Chair in Knowledge Translation. B.H.R. held a Tier I Canada Research Chair in Evidence-Based Emergency Medicine during the time of this study.
Appendix 1: MULTIFILE Search Strategy
1. quality indicators, health care/ or benchmarking/
2. (benchmark* or trigger tool*).ti,ab,kf.
3. ((quality adj3 (indicator* or measure* or metric*)) or (quality adj3 criteri*) or performance indicator* or performance measure* or clinical indicator* or clinical measure* or outcome indicator* or ((performance or clinical or outcome) adj3 metric*)).ti,ab,kf.
4. ((quality and (standard* or measure* or indicator* or metric*)) or (performance and (indicator* or measure* or metric*))).ti,kf.
5. (practice guidelines as topic/ or practice guideline.pt. or ((clinical or practice) adj guideline*).ti,ab,kf.) and (((safe* or efficien* or effective* or timel* or equit* or patient cent*) adj3 (care or service*)) or quality or indicator*).ti,ab,kf.
6. (“quality of health care”/ or “outcome assessment (health care)”/ or “Process Assessment (Health Care)”/ or quality assurance, health care/) and (((safe* or efficien* or effective* or timel* or equit* or patient cent*) adj3 (care or service*)) or indicator*).ti,ab,kf.
7. audit.ti,ab,kf,hw. and (((safe* or efficien* or effective* or timel* or equit* or patient cent*) adj3 (care or service*)) or quality or indicator*).ti,ab,kf.
8. or/1-7
9. nursing homes/ or Intermediate Care Facilities/ or skilled nursing facilities/ or homes for the aged/
10. (((extended care or long term care or intermediate or skilled or residential) adj2 (facilit* or facilities)) or residential care).ti,ab,kf.
11. (assisted living or lodge or lodges).ti,ab,kf.
12. emergency medical services/ or advanced trauma life support care/ or emergency medical service communication systems/ or exp emergency service, hospital/ or emergency services, psychiatric/
13. (emergency adj2 (room* or center* or centre* or facilit* or department* or ward* or service*)).ti,ab,kf.
14. or/9-13
15. 8 and 14
16. home care services/ or home health nursing/
17. (((home or community) adj2 care) or ((home or community) and (supportive living or supportive care))).ti,ab,kf.
18. 16 or 17
Appendix 2: Sources for included Quality Indicators
Australian Commission on Safety and Quality in Health Care and NSW Therapeutic Advisory Group Inc. (2014). National quality use of medicines indicators for Australian hospitals (ACSQHC), Sydney. Retrieved 26 January 2021 from https://www.safetyandquality.gov.au/sites/default/files/migrated/SAQ127_National_QUM_Indicators_V14-FINAL-D14-39602.pdf
Berenholtz, S. M., Dorman, T., Ngo, K., & Pronovost, P. J. (2002). Qualitative review of intensive care unit quality indicators. Journal of Critical Care, 17(1), 1–12.
Coleman, P., & Nicholl, J. (2010). Consensus methods to identify a set of potential performance indicators for systems of emergency and urgent care. Journal of Health Services Research & Policy, 15(Suppl. 2), 12–18.
College of Emergency Medicine UK. (2011). Emergency department clinical quality indicators: A CEM guide to implementation. Retrieved 26 January 2021 from http://www.dickyricky.com/Medicine/Guidelines/RCEM%20-%20Royal%20College%20of%20Emergency%20Medicine/2011_03%20CEM5832%20Quality%20Indicators.pdf
Earle, C. C., Neville, B. A., Weeks, J. C., Landrum, M. B., Souza, J. M., Ayanian, J. Z., et al. (2005). Evaluating claims-based indicators of the intensity of end-of-life cancer care. International Journal for Quality in Health Care, 17(6), 505–509. https://doi.org/10.1093/intqhc/mzi061
Earle, C. C., Park, E. R., Lai, B., Weeks, J. C., Ayanian, J. Z., & Block, S. (2003). Identifying potential indicators of the quality of end-of-life cancer care from administrative data. Journal of Clinical Oncology, 21(6), 1133–1138.
Gagnon, B., Mayo, N. E., Hanley, J., & MacDonald, N. (2004). Pattern of care at the end of life: Does age make a difference in what happens to women with breast cancer? Journal of Clinical Oncology, 22(17), 3458–3465.
Grunfeld, E., Urquhart, R., Mykhalovskiy, E., Folkes, A., Johnston, G., Burge, F. I., et al. (2008). Toward population-based indicators of quality end-of-life care: Testing stakeholder agreement. Cancer, 112(10), 2301–2308.
Health Quality Ontario. (2021). System performance: Indicator library. Retrieved 26 January 2021 from https://www.hqontario.ca/System-Performance.
Joint Commission. (2015). Specifications manual for Joint Commission National Quality Core. Retrieved 14 October 2018 from https://manual.jointcommission.org/releases/TJC2015B1/TableOfContentsTJC.html
Maritz, D., Hodkinson, P., & Wallis, L. (2010). Identification of performance indicators for emergency centres in South Africa: Results of a Delphi study. International Journal of Emergency Medicine, 3(4), 341–349.
Research ANd Development (RAND) Health Corporation. (2007). Assessing care of vulnerable elders-3 quality indicators. Journal of the American Geriatrics Society, 55(S2), 465–487.
Research Triangle Institute. (2012). Nursing home MDS 3.0 quality measures: Final analytic report. Retrieved 26 January 2021 from https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/NursingHomeQualityInits/Quality-Measures-Archive
Saliba, D., Solomon, D., Rubenstein, L., Young, R., Schnelle, J., Roth, C., et al. (2005). Feasibility of quality indicators for the management of geriatric syndromes in nursing home residents. Journal of the American Medical Directors Association, 6(3), S50–S59.
Santana, M. J., & Stelfox, H. T. (2013). Trauma quality indicator consensus: Development and evaluation of evidence-informed quality indicators for adult injury care. Annals of Surgery, 259(1), 186–192.
Schull, M. J., Guttmann, A., Leaver, C. A., Vermeulen, M., Hatcher, C. M., Rowe, B. H., et al. (2011). Prioritizing performance measurement for emergency department care: consensus on evidence-based quality of care indicators. Canadian Journal of Emergency Medicine, 13(5), 300–309.
Shrank, W. H., Polinski, J. M., & Avorn, J. (2007). Quality indicators for medication use in vulnerable elders. Journal of the American Geriatrics Society, 55(Suppl. 2), S373–S382.
Tregunno, D., Baker, R. G., Barnsley, J., & Murray, M. (2004). Competing values of emergency department performance: Balancing multiple stakeholder perspectives. Health Services Research, 39(4), 771–792. https://doi.org/10.1111/j.1475-6773.2004.00257.x
United Kingdom Department of Health. (2010). Accident and emergency clinical quality indicators: Data definitions. Retrieved 14 October 2018 from https://webarchive.nationalarchives.gov.uk/20130105030902/http://www.dh.gov.uk/en/Publicationsandstatistics/Publications/PublicationsPolicyAndGuidance/DH_122868
Wakai, A., O’Sullivan, R., Staunton, P., Walsh, C., Hickey, F., & Plunkett, P. K. (2013). Development of key performance indicators for emergency departments in Ireland using an electronic modified-Delphi consensus approach. European Journal of Emergency Medicine, 20(2), 109–114. https://doi.org/10.1097/MEJ.0b013e328351e5d8
Welch, S. J., Asplin, B. R., Stone-Griffith, S., Davidson, S. J., Augustine, J., & Schuur, J. (2011). Emergency department operational metrics, measures and definitions: Results of the second performance measures and benchmarking summit. Annals of Emergency Medicine, 58(1), 33–40.