Delaffon and colleagues (2012, this issue) review Health of the Nation Outcome Scales (HoNOS) publications, leading to the ‘surprising finding’ that the process of translating aggregated data to benefit individual service users is in its infancy. In this commentary, I focus on the use of outcome measures and suggest why their finding may not be so surprising.
HoNOS: the wrong measure?
HoNOS is a problematic basis for routine outcome assessment in four ways. First, it was developed, as the name suggests, for public health purposes and has not proven highly useful for clinical decision-making (Jacobs 2010). Therefore, clinical buy-in was always going to be limited, since it is not a measure that greatly helps to do the job – treating individuals.
Second, the items reflect the preoccupations of the mental health system, but not necessarily those of people using services. First impressions matter, and it is noteworthy that ‘Aggression’ is the first HoNOS item.
Third, HoNOS as widely used is staff-rated. In modern services, the ultimate arbiter of the success of treatment should be the service user. The self-rated version, HoNOS-SR (Stewart 2009), has not been widely used.
Finally, the focus on deficits is inconsistent with the policy emphasis on recovery and well-being (Slade 2010).
A clash of cultures
The benefits of using standardised measures in routine care appear self-evident. Who could disagree that mental healthcare should be focused on improving outcome or that outcome should be assessed using reliable measures? The problem is that there is disagreement about exactly these points.
Empirical evidence suggests that workers do not prioritise outcome. When frontline providers are asked how their work should be monitored, outcome is last on the list, after (in ascending order of rated importance) service use, access, process and satisfaction indicators (Valenstein 2004).
As noted by Delaffon et al, psychiatrists do not use standardised outcome measures. This is not because of an absence of measures – there is no shortage of measures reported in research studies. The reason is one of culture – standardised outcome assessment is not needed by clinicians to ‘do the job’. Imposing an outcome measure on a system that uses other forms of clinical decision-making – clinical judgement informed by ethics, economics (as previously discussed in this journal; Byford 2010) and public protection concerns (also previously examined in this journal; Brookes 2010) – leads to a clash of cultures. Despite the development of coherent conceptual frameworks (National Institute for Mental Health in England 2005) and clarity about the intended benefits (Slade 2002a), this cultural gap has been found when introducing routine outcome assessment into both in-patient (Puschner 2009) and community settings (Slade 2006).
Starting with a centrally chosen measure and then trying to get it used in the mental health system is the wrong approach (Slade 2002b). A better approach is grounded in the evolving discipline of implementation science (Tansella 2009), as the following case study illustrates.
An empirical approach to routine outcome assessment
In 2006, Ontario, Canada (population 12 million) began the Community Mental Health Common Assessment Project (CMHCAP), with the aim of choosing and implementing a common tool for use in all 300 Ontario community mental health services (Smith 2010). The project staff were recruited for relevant subject matter expertise:

• change management

• clinical and business analysis

• procurement

• project management

• communications

• consumers

• technical expertise

• adult education.
A central aim was to ensure that implementation was owned by, and of benefit to, community mental health services.
Phase 1 (2006–2007) involved choosing a measure and was led by a partnership of consumer, sector and planning leadership. Over 8 months, 70 criteria were identified and 80 measures evaluated, followed by a full evaluation of, and presentations by advocates for, 26 long-listed measures. A final shortlist of eight measures was produced, from which the Camberwell Assessment of Need (CAN; Phelan 1995) was chosen to underpin the Ontario Common Assessment of Need (OCAN; www.ccim.on.ca/CMHA/OCAN).
In Phase 2 (2008–2009), sector-led working groups oversaw the development of additional data elements and training requirements. This service-level ownership led to 50 of the 300 services volunteering to take part in the pilot, from which 16 were chosen to test OCAN and associated processes. Findings informed modifications, and all 16 pilot sites continued to use OCAN post-pilot.
In Phase 3 (2009–2012), OCAN is being rolled out across all community mental health services in Ontario, informed by consumer working groups, case studies and pilot findings.
The implementation approach can be understood within the four-stage Replicating Effective Programs (REP) framework (Box 1) (Kilbourne 2007). The four REP preconditions were met: need was identified in Phase 1, an evidence-based measure was chosen, barriers were identified and addressed, and a draft package was developed for piloting. The three REP pre-implementation activities were undertaken: a community working group of relevant stakeholders led the pilot, the pilot led to OCAN modifications, and logistical barriers were reduced by the CMHCAP technical expertise. The implementation activities comprised: training from professional adult educators with teleconference support and online training; technical assistance from a helpline (1600 contacts) and online information portal (100 hits a week); evaluation input from an external consultant; focus groups and online surveys; and ongoing support through presentations, newsletters, community consultations, conferences, regional meetings and sector champions. The final REP stage of maintenance and evolution is now the focus of CMHCAP activity in Phase 3.
Box 1 The Replicating Effective Programs (REP) framework (Kilbourne 2007)

Stage 1: Preconditions

Activities:

• identify need

• identify effective intervention

• identify barriers

• draft package

Stage 2: Pre-implementation

Activities:

• community working group

• pilot test package

• orientation

Stage 3: Implementation

Activities:

• training and technical assistance

• evaluation

• ongoing support

• feedback and refinement

Stage 4: Maintenance and evolution

Activities:

• organisational/financial changes

• national dissemination

• re-customising delivery as need arises
Evaluation indicates that 84% of consumers felt that the assessment helped their worker to understand them better and 74% that it was useful for assessing their needs (Smith 2010). In addition, 81% of staff stated that OCAN provided an accurate assessment and 56% that it identified a fuller range of needs than clinical judgement. An evaluation involving more than 100 consumers identified 91% as satisfied or very satisfied with OCAN (Pautler 2010). Routine outcome assessment can produce benefits for people using, and working in, services.