Crawford et al (2007) conclude that the results of their meta-analysis ‘do not provide evidence that additional psychosocial interventions following self-harm have a marked effect on the likelihood of subsequent suicide’. This conclusion is far too bold given the weaknesses inherent in the analytical approach employed. In my opinion Crawford et al have not given adequate weight to several methodological problems, the most prominent being the rationale for including studies in the analysis. They acknowledge the ‘lack of statistical power’ of the meta-analysis, yet offer a definitive and sweeping conclusion.
The lack of statistical power is only one reason not to conduct the meta-analysis. The central rationale for clustering the included studies is seriously flawed. The authors have mixed simple interventions with treatments; the target populations range from latency-age children (some as young as 12 years) to older adults (>50 years); intervention methods and theoretical orientations vary considerably (individual, group, case-management and home-based care); samples include those making suicide attempts as well as those engaging in non-suicidal self-harm; and studies employing questionable intervention or treatment protocols for suicidality have also been included. A review of the intervention and treatment protocols of the included studies reveals wide variability in the nature, oversight and fidelity of the services offered. I have serious concerns about at least 8 of the 19 study protocols. Some of the interventions cannot realistically be described as appropriate for suicidality, at least in the sense of having a serious chance of reducing subsequent suicide attempts, much less actual deaths. For example, Harrington et al (1998) employed four home visits by a social worker. Similarly, Guthrie et al (2001) included four sessions delivered in the patient's home. Cedereke et al (2002) explored the utility of telephone interventions, and Clarke et al (2002) included ‘management enhanced by nurse-led case management’. As these examples illustrate, not all psychosocial interventions are the same, something Crawford et al (2007) failed to clarify in their article. Why would we expect a meta-analysis of randomised trials of interventions or treatments this broadly disparate (with samples equally disparate) to provide evidence of an effective reduction in subsequent suicides?
Meta-analyses have become increasingly popular and increasingly misleading in their findings. Prior to including a study in a meta-analysis of intervention or treatment outcomes, I would suggest a thorough review of the intervention/treatment approach and its fidelity. Only those studies meeting strict and predefined criteria should be included. When considering strategies for including and clustering treatment studies for meta-analysis, it is particularly important to consider the targeted problem or disorder. Many, if not most, problems targeted by psychosocial interventions and treatments are recurrent, persistent and potentially chronic in nature. Hence the need for careful scrutiny of the studies included.
Compounding the problems noted above, the follow-up periods for all of the studies included by Crawford et al ranged from 6 to 12 months. The efficacy of interventions or treatments for suicide will only be known after 5, 10 or 20 years of follow-up. Even if shorter-term studies did show a reduction in subsequent suicides, without longitudinal data we would not know whether the interventions or treatments were ‘delaying’ suicide or actually preventing it.
There are many other factors that need to be scrutinised prior to the inclusion of studies in a meta-analysis (e.g. sample size, categorisation of attempt status and suicide intent, fidelity/oversight of intervention or treatment), but space does not allow a full discussion. The point is that identifying appropriate inclusion criteria for such a study is a complex process, far more complicated than simply taking all randomised controlled trials.
The definitive nature of the conclusion offered by Crawford et al belies the current state of the science in this area. In an age when legislators and funding agencies rely on science for direction, studies like this one generate ill-informed conclusions about which interventions, treatments and approaches to suicide prevention offer the most promise. Many readers will sadly and mistakenly carry away the message that psychosocial interventions offer no promise for reducing suicide rates.