
Mental health policy and evidence. Potentials and pitfalls

Published online by Cambridge University Press:  02 January 2018

Jocelyn Catty
Affiliation:
Social and Community Psychiatry, Department of General Psychiatry, St George's Hospital Medical School, Jenner Wing, Cranmer Terrace, London, SW17 0RE

Type
Editorials
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © 2002 The Royal College of Psychiatrists

The NHS Plan (Department of Health, 2000) is a programme for major reform of the UK health services. The mental health component draws on the National Service Framework (NSF; Department of Health, 1999), which proposes radical changes based, wherever possible, on evidence. This emphasis on ‘evidence-based practice’ is a central plank of the NSF, with each section indicating and grading its evidence base. This is unusual and in many ways very welcome, as policy more usually precedes research (e.g. the deinstitutionalisation movement (Leff et al, 2000)). The Government has justified these radical changes in structure, and in particular its detailed ‘micromanagement’ of them, on the grounds that the public has lost faith in community care.

Frank Dobson's contention in parliament in 1998 that ‘care in the community [has] failed’ (Warden, 1998) has been much debated (Burns & Priebe, 1999; Johnson et al, 2001), but there is no doubt that public dissatisfaction persists, and is most marked around difficulties in prompt access to care during emergencies and the loss of contact with some very severely ill individuals. This latter group of patients has been believed, quite wrongly (Taylor & Gunn, 1999), to be responsible for a rise in assaults on the public. To what extent these concerns stem from real deficiencies in the structure and practice of UK mental health services is questionable. Dissatisfaction with access, however, is universal within the NHS and represents a very real funding and capacity deficit not restricted to mental health. Nevertheless, foreign professionals generally commend the simplicity, functionality and effective targeting on the severely mentally ill of UK community mental health practice, while remarking on our scandalously poor in-patient provision.

In such circumstances it is not surprising that the importance of the evidence base is emphasised. There appear, however, to be two significant problems with how this evidence is presented. First, evidence for ‘interventions’ is used to support ‘service structures’, in the form of specialised teams. Second, evidence for service structures is presented without adequate attention to context, detail or contradictory evidence.

Use of intervention study evidence

An example of the former is early intervention teams. There is growing evidence that a shorter ‘duration of untreated psychosis’ is associated with better outcomes (McGlashan, 1998; Waddington et al, 1998), although this is far from unequivocal (Barnes et al, 2000). The step is then taken of assuming that intervening earlier will produce better outcomes, particularly in protecting cognitive functioning and preventing vocational and social decline. This is a plausible assumption, but rather than testing it, the response is to propose that these outcomes can only be achieved by establishing a separate dedicated service, despite the lack of specific evidence of the effectiveness of such a service. While there are descriptions of such services (Birchwood et al, 1997), there has as yet been no rigorous UK demonstration of their advantage over current practice — a seemingly chauvinist concern of which more below.

Service structure studies

The problems with community care studies of service structures have been increasingly recognised (Coid, 1994; Burns, 1999, 2000). Problems with studies of new start-up services, such as the effects of clinician enthusiasm and the possibility of non-sustainability that Coid (1994) has pointed to, are still pertinent. A new approach combining both natural and social science methodology has been advocated as more appropriate to mental health services than randomised controlled trials (RCTs) (Slade & Priebe, 2001). Even within traditional studies more meaningful results could be obtained by addressing two particular problems: defining the comparator and identifying active ingredients. These essential steps could then be enhanced with recourse to qualitative or organisational-level research. Both would require studies that are more, rather than less, rigorous.

Defining the comparator

This requires both understanding the context of the service studied and listing in a reasonably consistent manner the differences between it and the control service (often referred to as ‘treatment as usual’ or ‘standard care’) (Burns & Priebe, 1996). Without this, how can studies be compared? None of us would accept an RCT reporting the advantages of an antipsychotic without a clear understanding of what it was compared to (placebo? other antipsychotic? at what doses?). Indeed, the current controversy around the evidence for newer antipsychotics is illuminating here — the argument being that some of their reported benefits may reflect excessive doses of older antipsychotic comparators (Geddes et al, 2000). Within community psychiatry services, the equivalent is to compare the experimental services with ‘treatment as usual’ where that ‘treatment as usual’ was recognised as failing.

Assertive Community Treatment (ACT) evidence seems to suggest that the quality of ‘treatment as usual’ may be responsible for the great differences in outcome sometimes reported in studies. Leaving aside the issue of how to distinguish between ACT and other types of case management, it is increasingly clear that the impressive advantages of ACT reported in the early studies (Stein & Test, 1980; Hoult, 1986) are simply not being repeated in later studies (Muijen et al, 1992; Thornicroft et al, 1998; UK700 Group, 1999). This is the case not just in the UK but also in the US (Drake et al, 1998; Mueser et al, 1998). One explanation advanced for this reduction in advantage is that the control services already contain several of the elements of the ‘experimental’ service (Drake et al, 1998). They may not be so experimental anymore!

Identifying active ingredients

Defining different service models is far from straightforward. First, we are working with a plethora of similar-sounding terms for services that may or may not be providing the same things (e.g. case management, care management; or ACT, assertive case management, assertive outreach, aggressive outreach). Second, any one term may mask a range of different service ingredients or components. This was tellingly illustrated by the article by Smyth and Hoult advocating ‘home treatment’ for patients with acute psychiatric disorders (Smyth & Hoult, 2000). Disingenuous use of terminology — such as the implication that the ACT service in Madison (Stein & Test, 1980) was a ‘crisis’ service (Burns, 2000) — seriously compromises any conclusions that could be drawn.

The greatest danger with studies of service models currently is that we may be constructing them in such a way that it is impossible to ascertain the really active ingredients: either because service components have not been noted or tested, or because there are too many confounders to determine the impact of any single one. The UK700 study (UK700 Group, 1999) was a rigorous (and rare) attempt to test a single component — case-load size — the most commonly cited factor in successful community care. It was thus able to establish that reducing case-load size does not by itself improve outcomes for patients with severe mental illness.

In a recent systematic review (Catty et al, 2002; Burns et al, 2001) we attempted to analyse a wide range of studies of ‘home treatment’ (defined as community-based non-residential services) by service components rather than service label. Authors of 91 studies were followed up (with a 60% response rate) to ascertain systematically the components of the experimental and control services. These were generally inadequately presented in the published papers, particularly for the control services. Testing for associations between these components and the outcome of days in hospital, we found two components, ‘regularly visiting at home’ and ‘joint responsibility for health and social care’, to be associated with greater reductions in hospitalisation. These two were part of a cluster of associated components in the experimental services — although no direct associations between the other components and hospitalisation were found.

This review illustrates both the problems with existing service structure research and the pitfalls involved in trying to reinterpret it, retrospectively, by means of meta-analysis. In analysing by service component rather than label, we were able to avoid the problems of the latter in an attempt to identify the active ingredients. It also led us to cast our net wide, including a range of heterogeneous services. This may have affected our hospitalisation meta-analysis, which found a greater reduction in hospitalisation (6 days per patient per month) for those studies tautologically using in-patient treatment as the control service than for those using community comparators (0.5 days) (Catty et al, 2002).

The follow-up to study authors was limited in that it provided fairly broadly defined features — such as ‘regularly visiting at home’ — which are difficult to interpret or operationalise. It did confirm, however, that over its 30-year period, control services have increasingly incorporated service features originally associated with the innovative ‘experimental’ services, with an increase in the proportion of treatment delivered at home and multi-disciplinary working (Burns et al, 2001).

The local (national) context

The need to define service components systematically and prospectively is made abundantly plain by this review and applies as much to the comparator services as to the experimental ones. Yet this is not the whole answer to understanding service context. Organisational and cultural differences, particularly internationally, will have as great an impact and may be still harder to measure and interpret. The research hierarchy that favours RCTs as the ‘gold standard’ threatens to obscure the value of organisational and qualitative work. The latter may of course be incorporated into any study — including the RCT — so that its findings may be more meaningfully interpreted.

In blindly clinging to the RCT while ignoring its problems for service structure research, we may be throwing out the baby with the bathwater. After years of steady evolution of service models that provide simplicity and continuity of care (both over time and across functions, with multi-disciplinary teams including social workers) we face a shift to services that, although more targeted, are also fragmented and much more staff-intensive. The ‘evidence’ for this shift is provided by studies that failed to control for, or measure, the active ingredients that really distinguished the experimental and control services they reported, let alone their wider context.

Lost opportunities

Misunderstandings over the implications of study findings are in themselves no bad thing. They provoke essential debates and controversies over interpretation of results. Such debates are the motor of intellectual curiosity and new research, and themselves stimulate service improvements. The problem currently is that findings from studies are being translated into prescriptive and incredibly detailed policy, pre-empting, or simply ignoring, this vital stage of maturation and interpretation. Some of the resultant changes may prove successful but we should not kid ourselves that they are without cost. This includes the disruption of many currently successful community mental health team (CMHT) services.

There are other lost opportunities here. If the policy prescription were for the delivery of accepted evidence-based treatments (e.g. clozapine for resistant schizophrenia (Kane & McGlashan, 1995) or behavioural family management in psychosis (Mari & Streiner, 1994)) rather than service delivery structures that may or may not deliver them, then we would surely achieve concrete benefits for our patients.

We may also be missing the opportunity to tighten up how multi-disciplinary teams function. There is undoubtedly unacceptable variation and inefficiency in this, given the many competing forces in such teams. Indeed, it is quite possible that inadequacies in the implementation of the CMHT model, rather than failings in the model itself, may have stimulated the searches for alternatives. Without attention to these implementation problems (leadership, active case-load management, boundary disputes) we risk simply replicating, or even exaggerating, them in a plethora of new specialised teams. Finally, as these changes go hand in hand with significant investment in mental health services, we lose the opportunity to make confident judgements about their success or otherwise. We are trapped into committing the cardinal scientific error of altering two major variables at the same time.

Declaration of interest

None.

References

Barnes, T. R. E., Hutton, S. B., Chapman, M. J., et al (2000) West London first-episode study of schizophrenia: clinical correlates of duration of untreated psychosis. British Journal of Psychiatry, 177, 207–211.
Birchwood, M., McGorry, P. & Jackson, H. (1997) Early intervention in schizophrenia. British Journal of Psychiatry, 170, 2–5.
Burns, T. (1999) Methodological problems of schizophrenia trials in community settings. In Manage or Perish? The Challenges of Managed Mental Health Care in Europe (eds Guimón, G. & Sartorius, N.). New York: Kluwer Academic.
Burns, T. (2000) Psychiatric home treatment. Vigorous, well designed trials are needed. BMJ, 321, 177.
Burns, T. & Priebe, S. (1996) Mental health care systems and their characteristics: a proposal. Acta Psychiatrica Scandinavica, 94, 381–385.
Burns, T. & Priebe, S. (1999) Mental health care failure in England: myth and reality. British Journal of Psychiatry, 174, 191–192.
Burns, T., Creed, F., Fahy, T., et al (1999) Intensive versus standard case management for severe psychotic illness: a randomised trial. UK700 Group. Lancet, 353, 2185–2189.
Burns, T., Knapp, M., Catty, J., et al (2001) Home treatment for mental health problems: a systematic review. Health Technology Assessment, 5(15), 1–139.
Catty, J., Burns, T., Knapp, M., et al (2002) Home treatment for mental health problems: a systematic review. Psychological Medicine, 32, 383–401.
Coid, J. (1994) Failure in community care: psychiatry's dilemma. BMJ, 308, 805–806.
Department of Health (1999) Modern Standards and Service Models: National Service Framework for Mental Health. London: Department of Health.
Department of Health (2000) The NHS Plan – A Plan for Investment, a Plan for Reform. London: Department of Health.
Drake, R. E., McHugo, G. J., Clark, R. E., et al (1998) Assertive community treatment for patients with co-occurring severe mental illness and substance use disorder: a clinical trial. American Journal of Orthopsychiatry, 68, 201–215.
Geddes, J., Freemantle, N., Harrison, P., et al (2000) Atypical antipsychotics in the treatment of schizophrenia: systematic overview and meta-regression analysis. BMJ, 321, 1371–1376.
Hoult, J. (1986) Community care of the acutely mentally ill. British Journal of Psychiatry, 149, 137–144.
Johnson, S., Zinkler, M. & Priebe, S. (2001) Mental health service provision in England. Acta Psychiatrica Scandinavica Supplementum, 410, 47–55.
Kane, J. M. & McGlashan, T. H. (1995) Treatment of schizophrenia. Lancet, 346, 820–825.
Leff, J., Trieman, N., Knapp, M., et al (2000) The TAPS Project: a report on 13 years of research, 1985–1998. Psychiatric Bulletin, 24, 165–168.
Mari, J. J. & Streiner, D. L. (1994) An overview of family interventions and relapse on schizophrenia: meta-analysis of research findings. Psychological Medicine, 24, 565–578.
McGlashan, T. H. (1998) Early detection and intervention of schizophrenia: rationale and research. British Journal of Psychiatry, 172 (suppl. 33), 3–6.
Mueser, K. T., Bond, G. R., Drake, R. E., et al (1998) Models of community care for severe mental illness: a review of research on case management. Schizophrenia Bulletin, 24, 37–74.
Muijen, M., Marks, I., Connolly, J., et al (1992) Home based care and standard hospital care for patients with severe mental illness: a randomised controlled trial. BMJ, 304, 749–754.
Slade, M. & Priebe, S. (2001) Are randomised controlled trials the only gold that glitters? British Journal of Psychiatry, 179, 286–287.
Smyth, M. G. & Hoult, J. (2000) The home treatment enigma. BMJ, 320, 305–309.
Stein, L. I. & Test, M. A. (1980) Alternative to mental hospital treatment. I. Conceptual model, treatment program, and clinical evaluation. Archives of General Psychiatry, 37, 392–397.
Taylor, P. J. & Gunn, J. (1999) Homicides by people with mental illness: myth and reality. British Journal of Psychiatry, 174, 9–14.
Thornicroft, G., Wykes, T., Holloway, F., et al (1998) From efficacy to effectiveness in community mental health services. PRiSM psychosis study. 10. British Journal of Psychiatry, 173, 423–427.
Waddington, J. L., Buckley, P. F., Scully, P. J., et al (1998) Course of psychopathology, cognition and neurobiological abnormality in schizophrenia: developmental origins and amelioration by antipsychotics. Journal of Psychiatric Research, 32, 179–189.
Warden, J. (1998) England abandons care in the community for the mentally ill. BMJ, 317, 1611.