
An evidence-based approach to routine outcome assessment

Commentary on … Use of Health of the Nation Outcome Scales in Psychiatry

Published online by Cambridge University Press:  02 January 2018


Summary

Routine use of Health of the Nation Outcome Scales (HoNOS) has not produced the anticipated benefits for people using mental health services. Four HoNOS-specific reasons for this are: low relevance to clinical decision-making; not reflecting service user priorities; being staff-rated; and having a focus on deficits. More generally, the imposition of a centrally chosen measure on the mental health system leads to a clash of cultures, since frontline workers do not need a standardised measure to treat individuals. A better approach might be to use research from the emerging academic discipline of implementation science to inform the routine use of a standardised measure that is chosen by the people who will use it and hence is more concordant with existing clinical processes. This is illustrated using a case study of successful implementation of the Camberwell Assessment of Need (CAN) in community mental health services across Ontario, Canada.

Copyright © The Royal College of Psychiatrists 2012

Delaffon and colleagues (2012, this issue) review Health of the Nation Outcome Scales (HoNOS) publications, leading to the ‘surprising finding’ that the process of translating aggregated data to benefit individual service users is in its infancy. In this commentary, I focus on the use of outcome measures and suggest why their finding may not be so surprising.

HoNOS: the wrong measure?

HoNOS is a problematic basis for routine outcome assessment in four ways. First, it was developed, as the name suggests, for public health purposes and has not proven highly useful for clinical decision-making (Jacobs 2010). Therefore, clinical buy-in was always going to be limited, since it is not a measure that greatly helps to do the job – treating individuals.

Second, the items reflect the preoccupations of the mental health system, but not perhaps of people using services. First impressions matter and it is noteworthy that ‘Aggression’ is the first HoNOS item.

Third, HoNOS as widely used is staff-rated. In modern services, the ultimate arbiter of the success of treatment should be the service user. The self-rated version, HoNOS-SR (Stewart 2009), has not been widely used.

Finally, the focus on deficits is inconsistent with the policy focus on recovery and well-being (Slade 2010).

A clash of cultures

The benefits of using standardised measures in routine care appear self-evident. Who could disagree that mental healthcare should be focused on improving outcome or that outcome should be assessed using reliable measures? The problem is that there is disagreement about exactly these points.

Empirical evidence suggests that workers do not prioritise outcome. When frontline providers are asked how their work should be monitored, outcome is last on the list, after (in ascending order of rated importance) service use, access, process and satisfaction indicators (Valenstein 2004).

As noted by Delaffon et al, psychiatrists do not use standardised outcome measures. This is not because of an absence of measures – there is no shortage of measures reported in research studies. The reason is one of culture – standardised outcome assessment is not needed by clinicians to ‘do the job’. Imposing an outcome measure on a system that uses other forms of clinical decision-making – clinical judgement informed by ethics, economics (Byford 2010) and public protection concerns (Brookes 2010) – leads to a clash of cultures. Despite the development of coherent conceptual frameworks (National Institute for Mental Health in England 2005) and clarity about the intended benefits (Slade 2002a), this cultural gap has been found when introducing routine outcome assessment into both in-patient (Puschner 2009) and community settings (Slade 2006).

Starting with a centrally chosen measure and then trying to get it used in the mental health system is the wrong approach (Slade 2002b). A better approach is based on the evolving discipline of implementation science (Tansella 2009). This is illustrated in a case study.

An empirical approach to routine outcome assessment

In 2006, Ontario, Canada (population 12 million) began the Community Mental Health Common Assessment Project (CMHCAP), with the aim of choosing and implementing a common tool for use in all 300 Ontario community mental health services (Smith 2010). The project staff were recruited for relevant subject matter expertise:

  1. change management

  2. clinical and business analysis

  3. procurement

  4. project management

  5. communications

  6. consumers

  7. technical expertise

  8. adult education.

A central aim was to ensure that implementation was owned by, and of benefit to, community mental health services.

Phase 1 (2006–2007) involved choosing a measure and was led by a partnership of consumer, sector and planning leadership. Over 8 months, 70 criteria were identified and 80 measures evaluated, followed by a full evaluation and presentations by advocates for 26 long-listed measures. A final shortlist of eight measures was produced, from which the Camberwell Assessment of Need (CAN; Phelan 1995) was chosen to underpin the Ontario Common Assessment of Need (OCAN; www.ccim.on.ca/CMHA/OCAN).

In Phase 2 (2008–2009), sector-led working groups oversaw the development of additional data elements and training requirements. This service-level ownership led to 50 of the 300 services volunteering to take part in the pilot, from which 16 were chosen to test OCAN and associated processes. Findings informed modifications and all 16 pilot sites continued to use OCAN post-pilot.

In Phase 3 (2009–2012), OCAN is being rolled out across all community mental health services in Ontario, informed by consumer working groups, case studies and pilot findings.

The implementation approach can be understood within the four-stage Replicating Effective Programs (REP) framework (Box 1) (Kilbourne 2007). The four REP preconditions were met: need was identified in Phase 1, an evidence-based measure was chosen, barriers were identified and addressed, and a draft package was developed for piloting. The three REP pre-implementation activities were undertaken: a community working group of relevant stakeholders led the pilot, the pilot led to OCAN modifications, and logistical barriers were reduced by the CMHCAP technical expertise. The implementation activities comprised: training from professional adult educators with teleconference support and online training; technical assistance from a helpline (1600 contacts) and online information portal (100 hits a week); evaluation input from an external consultant; focus groups and online surveys; and ongoing support through presentations, newsletters, community consultations, conferences, regional meetings and sector champions. The final REP stage of maintenance and evolution is now the focus of CMHCAP activity in Phase 3.

BOX 1 The four-stage Replicating Effective Programs (REP) framework

Stage 1: Preconditions

Activities:

  1. identify need

  2. identify effective intervention

  3. identify barriers

  4. draft package

Stage 2: Pre-implementation

Activities:

  1. community working group

  2. pilot test package

  3. orientation

Stage 3: Implementation

Activities:

  1. training and technical assistance

  2. evaluation

  3. ongoing support

  4. feedback and refinement

Stage 4: Maintenance and evolution

Activities:

  1. organisational/financial changes

  2. national dissemination

  3. re-customising delivery as need arises

Evaluation indicates that 84% of consumers felt that the assessment helps their worker to understand them better and 74% that it was useful for assessing their needs (Smith 2010). In addition, 81% of staff stated that OCAN provided an accurate assessment and 56% that it identified a fuller range of needs than clinical judgement. An evaluation involving more than 100 consumers identified 91% as satisfied or very satisfied with OCAN (Pautler 2010). Routine outcome assessment can produce benefits for people using, and working in, services.

Footnotes

See pp. 173–179, this issue.

Declaration of Interest

M.S. has contributed to the development and dissemination of the CAN.

References

Brookes, G, Brindle, N (2010) Compulsion in the community? The introduction of supervised community treatment. Advances in Psychiatric Treatment 16: 245–52.
Byford, S, Barrett, B (2010) Ethics and economics: the case for mental healthcare. Advances in Psychiatric Treatment 16: 468–73.
Delaffon, V, Anwar, Z, Noushad, F et al (2012) Use of Health of the Nation Outcome Scales in psychiatry. Advances in Psychiatric Treatment 18: 173–9.
Jacobs, R, Moran, V (2010) Uptake of mandatory outcome measures in mental health services. Psychiatrist 34: 338–43.
Kilbourne, AM, Neumann, MS, Pincus, HA et al (2007) Implementing evidence-based interventions in health care: application of the Replicating Effective Programs framework. Implementation Science 2: 42.
National Institute for Mental Health in England (2005) Outcomes Measures Implementation Best Practice Guidance. NIMHE.
Pautler, K (2010) Evaluation of the Ontario Common Assessment of Need (OCAN) North East LHIN Consumer/Survivor Programs: Consumer and Staff Perspectives. Community Care Information Management.
Phelan, M, Slade, M, Thornicroft, G et al (1995) The Camberwell Assessment of Need: the validity and reliability of an instrument to assess the needs of people with severe mental illness. British Journal of Psychiatry 167: 589–95.
Puschner, B, Schofer, D, Knaup, C et al (2009) Outcome management in in-patient psychiatric care. Acta Psychiatrica Scandinavica 120: 308–19.
Slade, M (2002a) Routine outcome assessment in mental health services. Psychological Medicine 32: 1339–43.
Slade, M (2002b) What outcomes to measure in routine mental health services, and how to assess them: a systematic review. Australian and New Zealand Journal of Psychiatry 36: 743–53.
Slade, M (2010) Mental illness and well-being: the central importance of positive psychology and recovery approaches. BMC Health Services Research 10: 26.
Slade, M, McCrone, P, Kuipers, E et al (2006) Use of standardised outcome measures in adult mental health services: randomised controlled trial. British Journal of Psychiatry 189: 330–6.
Smith, D (2010) Outcome measurement in Canada: one province's experience with implementation in community mental health. In Outcome Measurement in Mental Health: Theory and Practice (ed T Trauer): 94–102. Cambridge University Press.
Stewart, M (2009) Service user and significant other versions of the Health of the Nation Outcome Scales. Australasian Psychiatry 17: 156–63.
Tansella, M, Thornicroft, G (2009) Implementation science: understanding the translation of evidence into practice. British Journal of Psychiatry 195: 283–5.
Valenstein, M, Mitchinson, A, Ronis, DL et al (2004) Quality indicators and monitoring of mental health services: what do frontline providers think? American Journal of Psychiatry 161: 146–53.