
Author's reply

Published online by Cambridge University Press: 19 November 2019

David Aceituno*
Affiliation:
PhD Candidate, Health Service and Population Research Department, Institute of Psychiatry, Psychology and Neuroscience, King's College London, UK; and Psychiatry Department, School of Medicine, Pontificia Universidad Católica de Chile, Chile. E-mail: [email protected]

Type: Correspondence

Copyright © The Royal College of Psychiatrists 2019

In his letter about the article ‘Cost-effectiveness of early intervention in psychosis: systematic review’,[1] Andrew Amos describes the review as unsystematic, uncritical of the included literature and, ultimately, as an example of spin used to misrepresent the advantages of early intervention in psychosis (EIP) services.

As authors, we are pleased to see diverse opinions regarding this work, which enrich the discussion and add nuance to the topic, as precisely analysed by Robert Rosenheck in his editorial.[2] However, some aspects of Amos’ letter are not entirely correct or are frankly misleading, and we therefore believe it is important to clarify them.

First, this review adhered to high methodological standards: we followed the recommended reporting guideline (PRISMA)[3] and registered a protocol before starting the review. The information necessary for replication is available to any reader in the main text and supplementary material. Our search strategy was comprehensive, covering six databases, and two authors independently screened studies against the pre-specified eligibility criteria to reduce ‘cherry-picking’. Risk of bias assessments were conducted using widely validated instruments. This contrasts with the reviews cited by Amos.[4,5] He is the sole author of both, only one database was searched in one of them, and neither has a pre-registered protocol. Although the author acknowledged this limitation in a letter published in 2012,[6] it seems this did not prevent him from applying the same method in 2014.

Regarding the included studies, Amos claims that they were not critically analysed. In fact, we used three different instruments to appraise their risk of bias: the widely used Cochrane risk-of-bias tool for the effectiveness estimates, and two tools specific to economic evaluations (trial-based and model-based cost-effectiveness analyses, respectively).[7,8]

Furthermore, we explicitly highlighted the methodological deficiencies of the included studies in terms of internal validity and applicability to low-resource settings. Indeed, we specified that a meta-analysis would have been misleading given the high heterogeneity of the studies (p. 389).

Amos notes that we did not highlight the risk of bias in the study by Tsiachristas et al,[9] which is not a randomised controlled trial but an observational study. However, that study used propensity score matching to account for the imbalance between groups and thereby reduce confounding. Although this technique cannot eliminate confounding from unmeasured variables, it is a valid procedure for making inferences from observational data.[10] Nevertheless, we still classified this study as at high risk of bias, as is clearly shown in the supplementary material.
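For readers less familiar with the technique, the following is a minimal sketch of propensity score matching on simulated, purely hypothetical data (the variable names and figures are illustrative assumptions and do not come from Tsiachristas et al):

# Minimal sketch of propensity score matching on hypothetical, simulated data
# (illustrative only; not the analysis of Tsiachristas et al).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(42)
n = 1000

# Baseline covariates and a treatment assignment that depends on them,
# i.e. confounding by indication.
age = rng.normal(30, 8, n)
severity = rng.normal(0, 1, n)
p_treat = 1 / (1 + np.exp(-(-0.5 + 0.02 * (age - 30) + 0.8 * severity)))
treated = rng.binomial(1, p_treat)

# Outcome (e.g. annual cost) depends on the covariates and on treatment.
cost = 5000 + 100 * (age - 30) + 2000 * severity - 1500 * treated + rng.normal(0, 500, n)
df = pd.DataFrame({'age': age, 'severity': severity, 'treated': treated, 'cost': cost})

# 1. Estimate the propensity score, P(treatment | covariates).
ps_model = LogisticRegression().fit(df[['age', 'severity']], df['treated'])
df['ps'] = ps_model.predict_proba(df[['age', 'severity']])[:, 1]

# 2. Match each treated participant to the nearest control on the propensity score.
treated_df = df[df['treated'] == 1]
control_df = df[df['treated'] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control_df[['ps']])
_, idx = nn.kneighbors(treated_df[['ps']])
matched_controls = control_df.iloc[idx.ravel()]

# 3. Compare outcomes before and after matching.
naive_diff = treated_df['cost'].mean() - control_df['cost'].mean()
matched_diff = treated_df['cost'].mean() - matched_controls['cost'].mean()
print(f'Unadjusted cost difference: {naive_diff:.0f}')
print(f'Matched cost difference:    {matched_diff:.0f}')

The matched comparison balances the groups only on the covariates included in the propensity model, which is why residual confounding from unmeasured variables remains possible.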

There are other aspects that we believe require clarification. Cost-minimisation studies are not the only type of economic evaluation with which one can conclude that an intervention is cost-saving. Moreover, the limitations of that approach have been widely documented, which was our rationale for excluding such evaluations.[11]

Likewise, it is not entirely accurate to say that an intervention cannot be cost-effective if the treatment is not effective. Beyond the obvious scenario in which the new treatment is cheaper, a treatment whose effect does not reach statistical significance may still be considered cost-effective. This is because costs and effects are measured with uncertainty, which is usually characterised using probabilistic sensitivity analysis, and a relevant proportion of the simulated samples may still fall on the cost-effective side of the threshold defined by a given country.[12,13]
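To illustrate this point with a purely hypothetical sketch (the numbers below are illustrative assumptions, not results from our review), a probabilistic sensitivity analysis draws many simulated pairs of incremental costs and effects and computes the proportion with a positive incremental net monetary benefit, NMB = λ × ΔE − ΔC, at a country’s willingness-to-pay threshold λ:

# Hypothetical probabilistic sensitivity analysis (illustrative numbers only).
# The probability of cost-effectiveness is the share of simulations with a
# positive incremental net monetary benefit at the threshold 'wtp'.
import numpy as np

rng = np.random.default_rng(0)
n_sim = 10_000
wtp = 20_000  # assumed willingness-to-pay threshold per QALY

# Simulated incremental effects (QALYs) and incremental costs.
# Note that the mean effect is small relative to its spread, i.e. it would not
# reach conventional statistical significance on its own.
delta_effect = rng.normal(0.05, 0.04, n_sim)
delta_cost = rng.normal(-500, 800, n_sim)  # on average the intervention saves money

# Incremental net monetary benefit: NMB = wtp * delta_E - delta_C.
nmb = wtp * delta_effect - delta_cost
prob_cost_effective = (nmb > 0).mean()
print(f'Probability cost-effective at {wtp} per QALY: {prob_cost_effective:.2f}')

In this example the incremental effect would not be statistically significant, yet the large majority of simulations yield a positive net monetary benefit, so the probability of cost-effectiveness is high.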

Leaving aside these technicalities, it is worth noting that Amos’ assertion about the ‘fact that the current consensus is that it is not possible to prevent transition to psychosis’ is questionable. The very reference cited in his letter explicitly states that psychological interventions may halve the risk of developing psychosis in people at clinical high risk.[14] It is true that this effect is not sustained at 2 years of follow-up, but this time window might be enough for an intervention to reach cost-effectiveness.

Moreover, interventions at this stage not only seek to prevent a first episode of psychosis (FEP) but also aim to engage young people with services, reduce comorbidities (including substance misuse disorders), shorten the duration of untreated psychosis and lessen the impact of the FEP by, for example, reducing hospital admissions and compulsory admissions.

Finally, there is a topic not covered in Amos’ letter but closely related to his strong accusation that this systematic review is an example of spin: the conflicts of interest and research allegiance of reviewers, an issue that has been highlighted in other reviews of psychological therapies.[15] In this regard, we can affirm that our review team was made up of health service researchers, health economists and epidemiologists with no financial or non-financial conflicts of interest. Only one included study was conducted by one of the review’s authors, and that author was not involved in rating the risk of bias of the studies. Furthermore, we were explicit about the fact that most of the included studies were conducted by advocates of the EIP paradigm.

As authors, we welcome critical analysis of and feedback on this and future work. We believe, nevertheless, that such criticism should be made in a constructive and collaborative manner, with a focus on improving research and, ultimately, patients’ well-being and quality of life.

References

1. Aceituno D, Vera N, Prina M, McCrone P. Cost-effectiveness of early intervention in psychosis: systematic review. Br J Psychiatry 2019; 215: 388–94.
2. Rosenheck R. The challenge of cost-effectiveness research on first-episode psychosis. Br J Psychiatry 2019; 215: 386–7.
3. Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 2009; 6: e1000097.
4. Amos AJ. A review of spin and bias use in the early intervention in psychosis literature. Prim Care Companion CNS Disord 2014; 16: 13r01586.
5. Amos A. Assessing the cost of early intervention in psychosis: a systematic review. Aust N Z J Psychiatry 2012; 46: 719–34.
6. Amos A. Burning the straw men: intellectual integrity in the early intervention debate. Aust N Z J Psychiatry 2012; 46: 812–5.
7. Berger ML, Martin BC, Husereau D, Worley K, Allen JD, Yang W, et al. A questionnaire to assess the relevance and credibility of observational studies to inform health care decision making: an ISPOR-AMCP-NPC Good Practice Task Force report. Value Health 2014; 17: 143–56.
8. Evers S, Goossens M, de Vet H, van Tulder M, Ament A. Criteria list for assessment of methodological quality of economic evaluations: consensus on health economic criteria. Int J Technol Assess Health Care 2005; 21: 240–5.
9. Tsiachristas A, Thomas T, Leal J, Lennox BR. Economic impact of early intervention in psychosis services: results from a longitudinal retrospective controlled study in England. BMJ Open 2016; 6: e012611.
10. Freemantle N, Marston L, Walters K, Wood J, Reynolds MR, Petersen I. Making inferences on treatment effects from real world data: propensity scores, confounding by indication and other perils for the unwary in observational research. BMJ 2013; 347: f6409.
11. Briggs AH, O'Brien BJ. The death of cost-minimization analysis? Health Econ 2001; 10: 179–84.
12. Drummond MF, Sculpher MJ, Claxton K, Stoddart GL, Torrance GW. Methods for the Economic Evaluation of Health Care Programmes (4th edn). Oxford University Press, 2015.
13. Neumann PJ, Sanders GD, Russell LB, Siegel JE, Ganiats TG. Cost-Effectiveness in Health and Medicine. Oxford University Press, 2016.
14. Fusar-Poli P, McGorry PD, Kane JM. Improving outcomes of first-episode psychosis: an overview. World Psychiatry 2017; 16: 251–65.
15. Lieb K, von der Osten-Sacken J, Stoffers-Winterling J, Reiss N, Barth J. Conflicts of interest and spin in reviews of psychological therapies: a systematic review. BMJ Open 2016; 6: e010606.