The increasing costs of healthcare have heightened the importance of assessing efficacy and service cost-effectiveness. Over the past two decades, health commissioning agencies have driven the promotion of service auditing and the measurement of health outcomes. This is especially important for comparatively new and evolving services such as child and adolescent mental health services (CAMHS), which still need to make the case for their raison d'être in low- and middle-income countries lacking identifiable mental health policies specifically relevant to children and adolescents (Skuse).
The audit imperative and CAMHS
In high-income countries, CAMHS have risen to the challenge of outcome measurement. A major breakthrough was the development of a dedicated measure, the Health of the Nation Outcome Scales for Children and Adolescents (HoNOSCA), addressing both symptom improvement and reduced impairment following specialist CAMHS use (Gowers et al). This measure has been thoroughly researched internationally and found to be fit for purpose: it is user-friendly, a good proxy measure for diagnosis and valid for use by specialist and in-patient CAMHS working within a multi-disciplinary framework, with excellent national and international interrater reliability, and it is congruent with parent and referrer outcome ratings (Garralda et al; Hanssen-Bauer et al). It is sensitive to change and, when complemented by parental and referrer satisfaction scores, provides a comprehensive outcome summary of CAMHS use (Garralda et al). It has documented substantial improvement in children's symptoms and impairment following CAMHS use and, in the process, has provided average change scores that can be used as yardsticks to compare performance across units. Alongside HoNOSCA, other instruments with established validity, reliability and sensitivity to change, such as generic parent-completed epidemiological screening questionnaires and clinician-reported impairment scales (e.g. the Strengths and Difficulties Questionnaire and the Children's Global Assessment Scale), have gained popularity among CAMHS, as have a variety of disorder-specific instruments (Garralda et al; Johnston & Gowers).
Beyond CAMHS audit tools
The availability of adequate tools is only one step towards outcome measurement. Implementation needs to take account of the practice and policy context and the interlocking influences of government initiatives. In the UK these include New Ways of Working, which aims to enable all clinicians to extend their roles and work effectively in teams, thus making outcome measurement widely relevant across different professions (www.newwaysofworking.org.uk), and quality assurance mechanisms such as the Quality Improvement Network for Multi-agency CAMHS and the Quality Network for In-Patients which develop and apply standards for specialist CAMHS including outcome measurement through a system of self- and external peer-review (www.rcpsych.ac.uk/clinicalservicestandards/centreforqualityimprovement.aspx).
As CAMHS outcome measurement becomes more widespread (Johnston & Gowers; Ford et al) and service purchaser requirements more explicit, renewed attention has focused on the actual purpose and objectives of outcome measurement and on the advantages and disadvantages of ‘dedicated’ outcome measures compared with ‘all-purpose’ screening instruments, which are more likely to take a dimensional approach not driven by the presence of symptoms or disorders. The use of generic measures, as opposed to disorder-specific ones that may be more appropriate for specialist clinics such as those for children with obsessive–compulsive disorder, has also been debated, and the extent to which existing outcome measures are efficient for children with intellectual disabilities needs to be tested further (Lee et al). There are differences of opinion about the appropriateness of relying primarily on clinicians, as opposed to users (parent, child, teacher and referrer), as symptom reporters for outcome measurement.
Furthermore, a number of important implementation issues have arisen, including the best approach to documenting outcomes for children and young people seen for assessment only, for those whose management and/or treatment extends over many months or years, for work that primarily addresses parental and family concerns rather than psychopathology in the child, and for work done in bridging posts (‘tier two’ CAMHS) between specialist CAMHS and primary care. This has resulted in variations in the measures recommended and implemented across services. Although research projects and audits in more self-contained services such as in-patient units obtain good returns and, therefore, reliable results (Garralda, Yates & Higginson; Garralda, Rose & Dawson), implementation in routine clinical practice tends to be marred by low return rates.
There is, nevertheless, a move afoot to introduce uniformity to the outcome measurement process in CAMHS as in other health services. What audit objectives and tools are appropriate for this task? Three possible alternatives – measuring service efficacy, league tables, and enhancing service and clinician accountability – will be addressed here.
Possible objectives of CAMHS audit
Measuring service efficacy
An approach under consideration would involve measuring and contrasting symptom change following CAMHS use, in a particular service or given area, against expected changes over time in a comparable non-referred population. In effect, the objective here is the measurement of CAMHS efficacy, which lies more in the realm of hypothesis-driven research than audit. This requires a rigorous research design, careful sample and instrument selection and description, and multiple measures of clinical change, taking into account the range of possible clinical and attitudinal confounding factors influencing referral and outcomes. It demands substantially higher return rates and analytic expertise than may be expected from clinical audit, and some knowledge of the interventions provided. The development of a single tool able to audit change meaningfully across different primary and specialist services seems, moreover, implausible. Epidemiological research comparing referred and non-referred samples has generally failed to show differences in outcome and has highlighted the methodological flaws inherent in this approach (Zwaanswijk et al).
League tables
A different objective for CAMHS outcomes audit would be issuing league tables to guide service purchasers and prospective patients. However, acceptably high data returns are again a central requirement for league tables. Even if fully representative data were obtainable and clinical improvement judged against published standards, account would still need to be taken of confounding contextual or complexity factors, not least initial problem severity, since higher initial symptom scores generally predict greater change and improvement. As an illustration of the influence of complexity, reduced HoNOSCA change and improvement has been reported in children with intellectual disability attending generic out-patient CAMHS when compared with other attenders (Garralda et al; Andrade et al) – possibly suggesting the desirability of developing specialist CAMHS with a special remit in these areas – but not in pre-adolescent in-patient psychiatric units, which may be more attuned to their needs. Similarly, parental attitudes towards CAMHS contact have been found not to predict outcome in the community, but to do so in in-patient units (Garralda, Yates & Higginson; Garralda, Rose & Dawson).
Service accountability
If measuring service efficacy and the use of league tables are, on current evidence, unrealistic and premature goals, an achievable objective of outcome auditing is to enhance service accountability. This is intrinsic to the audit process and deliverable, provided that expectations from users and clinicians are realistic and the process is adequately supported administratively and technically. It can be met by: (a) obtaining information on user satisfaction; and (b) symptom/impairment reporting at clinic intake and discharge, together with brief measures of context and case complexity, as well as of the service use process.
The appropriateness of user satisfaction enquiry is self-evident and, moreover, applicable across services with different levels of care, whether primary, bridging/tier two, or specialist CAMHS. Although a small and biased response rate is to be expected, the onus would be on services to show that: (a) all eligible users over a defined and uniform period of time have been approached; (b) returns are consistent with those of other units with comparable clienteles; and (c) acceptable user satisfaction levels have been obtained, in line with published comparable data (Garralda et al).
What about symptom change? Who should be entrusted with this: clinicians, service users or referrers? The leading consideration here is which procedure is most likely to obtain fuller returns and more representative information. For specialist generic CAMHS, there is much to be said for this being primarily a task for the clinician. First, although it is unrealistic to expect high, representative return rates from parents and referrers, the same does not apply to clinicians, provided – and this cannot be overemphasised – that the demands on clinician training and time are minimal and appropriate information technology and administrative support is available. Second, CAMHS clinician accounts represent a professionally informed summary of problems as reported by different informants such as parents, children and teachers, and are therefore preferable to those from single informants. Administratively, this is a more parsimonious process than obtaining and numerically aggregating three (parent, child, teacher) or more individual reports. Third, the use of appropriate composite measures, covering the range of symptoms seen in specialist care, can help ensure a degree of uniformity in both intake and outcome data collection among clinicians from different backgrounds and contribute towards a sense of both personal and collective accountability for service outcomes. Fourth, dedicated, user-friendly and quick-to-complete clinician CAMHS measures are available with good validity and interrater reliability, as well as congruence with parental and referrer reporting of symptoms and/or symptom change.
Clinical service effectiveness
Ultimately, of course, outcome auditing provides only a small snapshot of clinical effectiveness and service quality; the latter depend to a large extent on the availability of good clinical assessment and management skills, and on the use of adequately implemented evidence-based treatment methods within a well-administered and well-managed service. Auditing outcomes can contribute to enhancing service accountability through the acquisition of technically well-supported, contextualised information on user satisfaction and on symptom/impairment change.