
Call for information, call for quality in mental health care

Published online by Cambridge University Press:  18 December 2012

A. Lora*
Affiliation:
Department of Mental Health, Lecco Hospital, Lecco, Italy
*
Address for correspondence: Dr Antonio Lora, Dipartimento di Salute Mentale, Ospedale di Lecco, Via dell'Eremo 9/11, 23900 Lecco, Italy. (Email: [email protected])

Abstract

The quality of routine mental health care is not optimal: it can vary greatly from region to region and among providers, and on many occasions it does not meet the standards of evidence-based mental health. To bridge this gap, promoting the systematic use of the information available for quality assurance would be most helpful, but measuring the quality of mental health care is particularly challenging. Quality measurement can play a key role in transforming health care systems, and the routine measurement of quality, using clinical indicators derived from evidence-based practice guidelines, is an important step to this end. In Italy, the use of clinical indicators is still sporadic: over the last 5 years only three projects have been aimed at analysing, in a structured way, the quality of care in severe mental illness, and two of these were led by the Italian Society of Psychiatric Epidemiology. Not only in Italy but also at the global level there is an urgent need for the implementation of mental health information systems, which could lead to a substantial improvement in information technology. Once this has been achieved, a common set of clinical indicators, agreed upon at the regional and national levels and useful for benchmarking and comparing mental health services, could be defined. Finally, using implementation strategies, a system of quality improvement can be built at both regional and local levels.

Type
Editorials
Copyright
Copyright © Cambridge University Press 2012

Quality of care and information: the mental health gap

As the Institute of Medicine pointed out in its influential book Crossing the Quality Chasm (Institute of Medicine, 2001), over the last 20 years a chasm has opened between the capacity to develop evidence-based mental health care and the capacity to implement it in day-to-day practice. This means a growing gap between the mental health interventions a patient routinely receives and those he or she should receive. As has frequently been documented (Lehman et al., 1998; McGlynn et al., 2003; Institute of Medicine, 2006; Lora et al., 2011), the quality of routine mental health care is not optimal: it varies greatly from region to region and among providers, and on many occasions such care does not meet the standards that evidence-based mental health puts forward.

The information we have on the quality of care provided in mental health systems is also poor; in fact, there is not only a gap in providing good-quality care but also an information gap. There is little knowledge of how severe mental illnesses are treated in most health care systems: in many countries, for most diseases, potential quality problems and their prevalence and incidence are unknown (Mainz, 2003a). Decision Support 2000+, the Substance Abuse and Mental Health Services Administration (SAMHSA) program on the information needs of US mental health services, states that the quality of information will determine the quality of care in mental health services: if good-quality information is not developed, we will not have good-quality care. More recently, the Institute of Medicine (2006) underlined the importance of information and infrastructure for the development of mental health services, and investment in quality measurement and reporting systems will substantially increase opportunities for quality improvement (Mainz & Bartels, 2006).

In the meantime, to bridge this gap, we must promote the systematic use of any information available for quality assurance; the trouble is that measuring quality in the mental health sector is particularly challenging. Major obstacles are the poor information infrastructure and the limited implementation of information systems in mental health services. In many countries, information systems are still undeveloped, especially in the mental health area (WHO, 2011), and this area is far behind the rest of the health system in the use of information technology. The data elements necessary to measure quality are incomplete or, in many settings, even missing, and even when data collection does occur, it tends to be inconsistent across organizations. Although a mental health information system is the main tool for evaluating patients in contact with services and undergoing treatment, alone it is not sufficient to evaluate quality of care. What is needed is the merging of different health databases, such as those related to pharmaceutical prescriptions; this would result in linked electronic health information and would allow us to move towards a clinically oriented information system (McGlynn et al., 1998). However, the linking of different information sources is not frequent in the mental health sector, not only because of the lack of an information infrastructure but also because of resistance grounded in an inappropriate use of the concept of privacy.

Furthermore, the possibility to compare and benchmark mental health services has long been hampered by a lack of common and standardized quality measures and strategies that would allow their implementation. Throughout the world, the mental health services sector lags behind in the development and implementation of performance and quality measures. This can be put down to the absence of solid evidence with regard to ‘appropriate’ mental health care, the limited descriptions of mental health activities achievable from existing clinical data, the need to develop valid and clearly defined quality measures and the lack of an adequate infrastructure to implement them (Kilbourne et al., 2010).

Accountability for services delivered and funding received is becoming a key component of the public mental health system, and information on quality of care is an essential component of the accountability of mental health services. Indeed, governments fund organizations to deliver services that benefit service users; in return, these organizations must ensure, and demonstrate, that the funds have been used to achieve the stated outcomes in the most effective and efficient way possible (State of Ontario, 2012). In the mental health sector, accountability is defined as ‘the obligation to demonstrate that policies and programs are achieving intended results’. These ‘intended results’ must therefore be made quite explicit when agreeing upon the goals and objectives of mental health services within a defined health region, and the ‘degree of progress’ towards such stated goals and objectives is defined as ‘performance’ (McEwan & Goldner, 2001). Accountability thus focuses on results that are measurable and, where possible, evidence-based, so without ‘high-quality information’ accountability remains only wishful thinking.

Clinical indicators in mental health

To acquire insight into the quality of care provided, one can take measurements with indicators. In the context of mental health care, indicators are measures that summarize information relevant to the mental health service and the population that it serves, and can therefore be used to measure change (World Health Organization, 2005). More specifically, clinical/quality indicators describe the care that should occur for a particular type of patient or the related outcomes; they can then be used to evaluate whether a patient's care is consistent with evidence-based standards of care (Mainz, 2003b). Indicators provide a quantitative basis for clinicians, organizations and planners whose aim is to improve both care and the processes by which patient care is provided. Indicator measurement and monitoring serve many purposes: they make it possible to document the quality of care, make comparisons (benchmarking) over time and between organizations (e.g. mental health services), set priorities, and support accountability, regulation and accreditation, as well as quality improvement and a patient's choice of provider (Mainz, 2003a, b). Baars et al. (2010) added performance management, where information on performance is used for planning and control, and to predict future performance. Following the framework outlined by Hermann & Palmer (2002) and Hermann et al. (2006), indicators should have the following characteristics: they should measure the technical quality of care provided (not interpersonal or consumer perspectives); focus on quality of care (not on cost or health care utilization); be built on a single item (not on a multi-item scale); be useful for quality assessment at the health care system level (rather than at the provider level); and be constructed from administrative data using uniform coding systems (e.g. ICD or DSM codes), rather than requiring dedicated data collection or non-standardized data elements. Indicators are based on standards of care, which can be evidence-based, derived from the academic literature (e.g. Cochrane Collaboration literature syntheses, meta-analyses or randomized controlled trials), or, when scientific evidence is lacking, determined by an expert panel of health professionals in a consensus process based on their experience.
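As a toy illustration of how a single clinical indicator can be computed from administrative data, the following Python sketch expresses an indicator as the proportion of eligible patients whose recorded care matches an evidence-based standard. The table, column names and the indicator itself are hypothetical, standing in for whatever coding system and standard a real information system would use.

```python
import pandas as pd

# Hypothetical patient-level extract from an administrative database;
# diagnoses use ICD-10-style codes purely for illustration.
records = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 5],
    "icd10": ["F20.0", "F20.5", "F31.1", "F20.1", "F20.9"],
    "antipsychotic_rx": [True, False, True, True, False],  # any prescription in the index year
})

# Denominator: patients eligible for the indicator (here, schizophrenia, ICD-10 F20.*).
eligible = records[records["icd10"].str.startswith("F20")]

# Numerator: eligible patients whose recorded care matches the standard.
treated = eligible[eligible["antipsychotic_rx"]]

indicator = len(treated) / len(eligible)
print(f"{indicator:.0%} of eligible patients received the recommended treatment")
```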

Measuring quality of mental health care in Italy

In contrast with the rich international literature (Spaeth-Rublee et al., 2010; Lauriks et al., 2012), the Italian experience in the use of clinical indicators is still very sporadic. Italy's psychiatric reform of 1978, and more recent legislation, devolved the responsibilities for planning, coordinating and delivering mental health care to the Regions, and this process has led to relevant differences between macro-geographical areas. Today, 30 years after the psychiatric reform, inequalities still remain, not only in terms of resources and service delivery but also in terms of information technology and the use of information systems (Lora, 2009). In 2010, a ‘mental health information system’ was approved at the national level, but it has not yet been implemented in many Regions. Thus, the lack of updated, good-quality information at the regional level has hampered accountability and quality measurement initiatives, including the development of clinical indicators.

Nevertheless, some experience has been gained, notably the SIEP-DIRECT'S Project (DIscrepancies between Routine practice and Evidence in psychiatric Community Treatments provided to people with Schizophrenia), promoted by the Italian Society of Psychiatric Epidemiology (Ruggeri et al., 2008). The goal of this project was to define a set of indicators able to measure the gap between the routine practices delivered in Italian Departments of Mental Health (DMHs) and the recommendations drawn from the National Institute for Clinical Excellence (NICE, 2002) guideline on schizophrenia. About 100 indicators, both quantitative and qualitative, were developed, analysing five different areas: elements common to all phases of schizophrenia; first-episode treatment; crisis treatment; promoting recovery; and management of aggressive behaviour. In most instances, the NICE recommendations were judged to be appropriate for the Italian context, and the indicators were fairly easy to use. The most severe and most frequent evidence-practice discrepancies encountered in the 19 DMHs analysed were: lack of written material, guidelines and information systematically provided to users; lack of intervention monitoring and evaluation; difficulty in implementing specific evidence-based interventions; and difficulty in treating patients' family members as figures requiring targeted support and regular involvement in the care process. With regard to the implementation of the indicators, professionals in the pilot areas actively participated in data collection, but because of the burden of data collection the resulting indicator set proved insufficiently feasible for routine implementation.

Bollini et al. (2008) also developed a set of clinical indicators to evaluate the conformance of clinical practice with several guidelines for the care of schizophrenia (the Schizophrenia Patient Outcomes Research Team recommendations, the guidelines of McEvoy and colleagues, and the National Institute for Health and Clinical Excellence guidance). For each indicator, the team defined criteria for eligibility (requirements to be met to qualify for evaluation), for conformance (criteria to be satisfied to comply with each recommendation) and for moderators (factors that could justify the non-application of a given recommendation). These indicators were tested on a random sample of 807 patients with schizophrenia or schizoaffective disorders cared for in the Italian Piedmont region. A set of 15 indicators was derived, nine concerning pharmacological treatment and six concerning general care and psychosocial rehabilitation. Moderators, such as patient or family refusal of antipsychotic treatment and the patient's level of disability, helped justify a considerable proportion of non-conformant care.
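The eligibility/conformance/moderator logic described above can be read as a simple decision rule applied to each record. The sketch below is a hypothetical illustration of such a rule, not the published indicator definitions; the recommendation and fields are invented.

```python
from dataclasses import dataclass

@dataclass
class PatientRecord:
    diagnosis: str            # e.g. an ICD-10 code
    on_antipsychotic: bool    # care actually delivered
    refused_treatment: bool   # documented moderator

def classify(p: PatientRecord) -> str:
    # Eligibility: does the patient fall under this recommendation at all?
    if not p.diagnosis.startswith("F20"):
        return "not eligible"
    # Conformance: is the delivered care consistent with the recommendation?
    if p.on_antipsychotic:
        return "conformant"
    # Moderator: a documented factor that can justify non-conformance.
    if p.refused_treatment:
        return "non-conformant (justified by moderator)"
    return "non-conformant"

print(classify(PatientRecord(diagnosis="F20.0", on_antipsychotic=False, refused_treatment=True)))
```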

In the last 2 years, the Italian Society of Psychiatric Epidemiology has led a new project, QuISMI (Quality Indicators in Severe Mental Illness), which, through a set of clinical indicators, aims to evaluate the routine quality of care in severe mental illness. The QuISMI project builds on the experience acquired in SIEP-DIRECT'S, but data collection is more feasible, the indicators being less numerous and derived entirely from health information systems. Experts, through Delphi rounds, identified 41 clinical indicators for schizophrenia, 33 for bipolar disorders and 13 for depression. The indicators were then applied to the Lombardy Region's health databases, merging data from different information systems (mental health activities, non-psychiatric hospital admissions, somatic health interventions and pharmaceutical prescriptions). The sample consisted of 28 191 patients with schizophrenic disorder, 7752 with bipolar disorder and 19 271 with depressive disorders cared for during 2009 by the Region's Departments of Mental Health (DMHs). Benchmarking was adopted to evaluate the DMHs. The indicators were analysed by quality dimension (i.e. accessibility, continuity, appropriateness, safety and sentinel events) and by phase of care (onset, acute phase and maintenance), showing the strengths and weaknesses of mental health care in Lombardy. This tool is useful for evaluating quality of care in the mental health system, and the evaluation could be carried out routinely using current information system data. The results of this project, financed by the Lombardy Region, have shown that it is possible to evaluate quality of care routinely and extensively without any burden on mental health professionals.
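A minimal sketch of the linkage-and-benchmarking step, under invented data, might look as follows: two hypothetical extracts (a mental health register and a pharmaceutical database) are merged on an anonymised patient identifier, a single indicator is computed for each Department of Mental Health, and the regional value serves as the benchmark. The indicator shown is illustrative and is not one of the QuISMI definitions.

```python
import pandas as pd

# Hypothetical extracts from two regional databases, keyed on an anonymised patient identifier.
mh_register = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 5],
    "dmh": ["DMH_A", "DMH_A", "DMH_B", "DMH_B", "DMH_B"],
    "diagnosis": ["F31", "F31", "F31", "F20", "F31"],
})
prescriptions = pd.DataFrame({
    "patient_id": [1, 3, 5],
    "mood_stabilizer": [True, True, False],
})

# Record linkage: merge the register with the pharmaceutical database.
linked = mh_register.merge(prescriptions, on="patient_id", how="left")
linked["mood_stabilizer"] = linked["mood_stabilizer"].fillna(False).astype(bool)

# Illustrative indicator: mood-stabilizer prescription among patients with bipolar disorder (F31).
eligible = linked[linked["diagnosis"] == "F31"]
by_dmh = eligible.groupby("dmh")["mood_stabilizer"].mean()  # one value per Department of Mental Health
regional = eligible["mood_stabilizer"].mean()               # regional benchmark

print(by_dmh)
print(f"Regional benchmark: {regional:.0%}")
```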

A shared agenda

In the last 10 years the importance of quality assessment in mental health care has grown, and several performance measures have been proposed to assess and redress gaps in evidence-based mental health care. In a recent review, Lauriks et al. (2012) analysed more than 300 indicators produced by more than 50 stakeholders. This tumultuous scenario indicates a growing interest in quality assessment, but at the same time reflects the marked differences between mental health systems and stakeholder expectations. Quality measurement is a key driver in transforming the health care system, and the routine measurement of quality using performance measures derived from evidence-based practice guidelines is an important step to this end (Conway & Clancy, 2009).

In the Italian context, a new agenda for quality assurance in mental health services is required. In 1978, the psychiatric reform closed mental hospitals and developed community care, but the time has come to take a good in-depth look at the situation and to systematically improve the quality of community care. The first step of this new agenda should be the implementation of a national mental health information system and a substantial improvement in information technology. This is essential in Italy, where there are still important regional differences: the information gap between regions is vast and needs to be tackled.

The second step is to define a common set of clinical indicators, agreed at the regional and national levels and useful for benchmarking and comparing mental health services. Regulatory agencies and Departments of Health at the regional/national level should aim for a list of priorities related to quality-of-care. At the country level, a specific set of indicators should be developed, in line with the characteristics of the mental health system and the priorities of the different stakeholders (governments, regulators, users, families and professionals).

The third step is to build a system of quality improvement. Registration of the clinical indicators is not an end in itself; it is the basis on which to develop and evaluate improvement strategies. Improvement interventions themselves generally consist of two steps. First, the evaluation results are reported back to the care providers: the feedback. The literature shows that feedback is an effective improvement strategy that, on average, leads to an improvement of 10–15% (Van der Weijden & Grol, 2005). At the regional/national level, a ‘dashboard’ containing clinical indicators and a benchmark for severe mental illness should be adopted, taking away from mental health services the burden of producing the data. Dashboards are critical tools for disseminating information on quality measurement and for improving accountability. Second, unsatisfactory results must trigger quality improvement projects: national and regional Departments of Health have a crucial role in promoting and sustaining these initiatives, as shown by the Danish National Indicator Project (Mainz et al., 2004). At the local level, the impact of feedback can be maximized by linking it to periodic audits and by using evidence-based implementation strategies within the framework of local quality improvement initiatives (Wollersheim et al., 2007). As stated by Mainz & Bartels (2006), ‘there is evidence indicating that quality measurement and quality monitoring combined with feedback, auditing and public disclosure of measurement data leads to improvement of the quality of care’.
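As a hypothetical illustration of the dashboard logic, the sketch below compares each provider's indicator value against a regional benchmark and flags those falling short, the kind of signal that could trigger a local quality improvement project; the values, names and tolerance are invented.

```python
# Invented indicator values for three Departments of Mental Health.
indicator_by_dmh = {"DMH_A": 0.62, "DMH_B": 0.48, "DMH_C": 0.71}
regional_benchmark = sum(indicator_by_dmh.values()) / len(indicator_by_dmh)
tolerance = 0.05  # how far below the benchmark is still considered acceptable

for dmh, value in sorted(indicator_by_dmh.items()):
    status = "review" if value < regional_benchmark - tolerance else "ok"
    print(f"{dmh}: {value:.0%} (benchmark {regional_benchmark:.0%}) -> {status}")
```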

However, measurement and accountability alone are not sufficient; we need to involve, from the start of this process, mental health professionals, health care providers, users, families and policy-makers. Mental health professionals in particular, as the end users of clinical indicators, should be on board: their insight into the clinical meaningfulness of the indicators and the feasibility of applying them to routine patient care is important. Their acceptance of the indicators is crucial if these are to be applied to monitor the uptake of evidence-based care, as well as for sustaining clinical quality improvement over time.

Our interest is not in measurement for the sake of measurement. Ultimately, the measures should be used by stakeholders to actually improve care. These proposed improvements in the quality measurement process can facilitate a culture of ‘measurement-based care’ (Kilbourne et al., 2010; Harding et al., 2011), defined as the enhanced precision and consistent use of high-quality information in disease assessment, tracking and treatment to achieve optimal outcomes. Using clinical indicators and fostering a culture of measurement-based care are essential steps to ensure accountability and to add value to community care, ensuring best practices for patients and families.

Declaration of interest

None

References

Baars, IJ, Evers, SM, Arntz, A, van Merode, GG (2010). Performance measurement in mental health care: present situation and future possibilities. International Journal of Health Planning and Management 25, 198–214.
Bollini, P, Pampallona, S, Nieddu, S, Bianco, M, Tibaldi, G, Munizza, C (2008). Indicators of conformance with guidelines of schizophrenia treatment in mental health services. Psychiatric Services 59, 782–791.
Conway, PH, Clancy, C (2009). Transformation of health care at the front line. Journal of the American Medical Association 301, 763–765.
Harding, KJ, Rush, AJ, Arbuckle, M, Trivedi, MH, Pincus, HA (2011). Measurement-based care in psychiatric practice: a policy framework for implementation. Journal of Clinical Psychiatry 72, 1136–1143.
Hermann, RC, Palmer, RH (2002). Common ground: a framework for selecting core quality measures for mental health and substance abuse care. Psychiatric Services 53, 281–287.
Hermann, RC, Mattke, S, Somekh, D, Silfverhielm, H, Goldner, E, Glover, G, Pirkis, J, Mainz, J, Chan, JA (2006). Quality indicators for international benchmarking of mental health care. International Journal for Quality in Health Care 18 (Suppl. 1), 31–38.
Institute of Medicine (2001). Crossing the Quality Chasm: A New Health System for the 21st Century. Committee on Quality of Health Care in America. National Academy Press: Washington, DC.
Institute of Medicine (2006). Improving the Quality of Health Care for Mental and Substance-Use Conditions. National Academy Press: Washington, DC.
Kilbourne, AM, Keyser, D, Pincus, HA (2010). Challenges and opportunities in measuring the quality of mental health care. Canadian Journal of Psychiatry 55, 549–557.
Lauriks, S, Buster, MC, de Wit, MA, Arah, OA, Klazinga, NS (2012). Performance indicators for public mental healthcare: a systematic international inventory. BMC Public Health 12, 214.
Lehman, A, Steinwachs, D, the Co-investigators of the PORT Project (1998). Translating research into practice: The Schizophrenia Patient Outcomes Research Team (PORT) Treatment Recommendations. Schizophrenia Bulletin 24, 1–10.
Lora, A (2009). An overview of the mental health system in Italy. Annali dell'Istituto Superiore di Sanità 45, 5–16.
Lora, A, Conti, V, Leoni, O, Rivolta, AL (2011). Adequacy of treatment for patients with schizophrenia spectrum disorders and affective disorders in Lombardy, Italy. Psychiatric Services 62, 1079–1084.
Mainz, J (2003a). Defining and classifying clinical indicators for quality improvement. International Journal for Quality in Health Care 15, 523–530.
Mainz, J (2003b). Developing evidence-based clinical indicators: a state of the art methods primer. International Journal for Quality in Health Care 15 (Suppl. 1), i5–i11.
Mainz, J, Krog, BR, Bjørnshave, B, Bartels, P (2004). Nationwide continuous quality improvement using clinical indicators: the Danish National Indicator Project. International Journal for Quality in Health Care 16 (Suppl. 1), i45–i50.
Mainz, J, Bartels, D (2006). Nationwide quality improvement – how are we doing and what can we do? International Journal for Quality in Health Care 18, 79–80.
McEwan, K, Goldner, EM (2001). Accountability and Performance Indicators for Mental Health Services and Supports: A Resource Kit. Health Canada: Ottawa.
McGlynn, EA, Damberg, CL, Kerr, E, Brook, R (1998). Health Information Systems: Design Issues and Analytic Applications. RAND Health: Santa Monica, CA.
McGlynn, EA, Asch, SM, Adams, J, Keesey, J, Hicks, J, DeCristofaro, A, Kerr, EA (2003). The quality of health care delivered to adults in the United States. New England Journal of Medicine 348, 2635–2645.
NICE (2002). Schizophrenia: Core Interventions in the Treatment and Management of Schizophrenia in Primary and Secondary Care. Clinical guideline. National Institute for Clinical Excellence: London.
Ruggeri, M, Lora, A, Semisa, D, SIEP-DIRECT'S Group (2008). The SIEP-DIRECT'S Project on the discrepancy between routine practice and evidence. An outline of main findings and practical implications for the future of community based mental health services. Epidemiologia e Psichiatria Sociale 17, 358–368.
Spaeth-Rublee, B, Pincus, HA, Huynh, PT, IIMHL Clinical Leaders Group, Mental Health Quality Indicator Project (2010). Measuring quality of mental health care: a review of initiatives and programs in selected countries. Canadian Journal of Psychiatry 55, 539–548.
State of Ontario (2012). Mental Health Accountability Framework. Retrieved 20 October 2012 from http://www.health.gov.on.ca/en/common/ministry/publications/reports/mh_accountability/mh_accountability.aspx#domains.
Van der Weijden, T, Grol, R (2005). Feedback and reminders. In Improving Patient Care: The Implementation of Change in Clinical Practice (ed. Grol, R, Wensing, M and Eccles, M), pp. 158–172. Elsevier: Edinburgh.
Wollersheim, H, Hermens, R, Hulscher, M, Braspenning, J, Ouwens, M, Schouten, J, Marres, H, Dijkstra, R, Grol, R (2007). Clinical indicators: development and applications. Netherlands Journal of Medicine 65, 15–22.
World Health Organization (2005). Mental Health Information Systems. WHO: Geneva.
World Health Organization (2011). Mental Health Atlas. WHO: Geneva.