
What is a multiple treatments meta-analysis?

Published online by Cambridge University Press:  19 January 2012

A. Cipriani*
Affiliation:
Department of Public Health and Community Medicine, Section of Psychiatry and Clinical Psychology, University of Verona, Italy
C. Barbui
Affiliation:
Department of Public Health and Community Medicine, Section of Psychiatry and Clinical Psychology, University of Verona, Italy
C. Rizzo
Affiliation:
Department of Public Health and Community Medicine, Section of Psychiatry and Clinical Psychology, University of Verona, Italy
G. Salanti
Affiliation:
Department of Hygiene and Epidemiology, University of Ioannina School of Medicine, Ioannina, Greece
*
*Address for correspondence: Andrea Cipriani, Department of Public Health and Community Medicine, Section of Psychiatry and Clinical Psychology, University of Verona, Piazzale L.A. Scuro 10, 37134 Verona, Italy. (Email: [email protected])

Abstract

Standard meta-analyses are an effective tool in evidence-based medicine, but one of their main drawbacks is that they can compare only two alternative treatments at a time. Moreover, if no trials exist which directly compare two interventions, it is not possible to estimate their relative efficacy. Multiple treatments meta-analyses use a meta-analytical technique that allows the incorporation of evidence from both direct and indirect comparisons from a network of trials of different interventions to estimate summary treatment effects as comprehensively and precisely as possible.

Type: ABC of Methodology
Copyright © Cambridge University Press 2012

Pair-wise (or standard) meta-analysis is a statistical technique used to synthesize evidence from studies with similar design addressing the same research question within the frame of a systematic review (Higgins & Green, 2011). Standard meta-analyses are an effective tool in evidence-based medicine, but one of their main drawbacks is that they can compare only two alternative treatments at a time (Cipriani et al. 2011a). For most clinical conditions, where many treatment regimens already exist, the standard meta-analytical approach results in a plethora of pair-wise comparisons and does not inform on the comparative efficacy of all treatments simultaneously. Moreover, if no trials exist which directly compare two interventions, it is not possible to estimate their relative efficacy, and this specific information is therefore missing from the overall picture. All this has led to the development of meta-analytical techniques that allow the incorporation of evidence from both direct and indirect comparisons across a network of trials of different interventions, in order to estimate summary treatment effects as comprehensively and precisely as possible (Caldwell et al. 2005). This meta-analytical technique is called multiple treatments meta-analysis (MTM), also known as mixed-treatment comparison or network meta-analysis. How does MTM work? An example is pictured in Fig. 1.

Fig. 1. Graphic explanation of direct–indirect comparisons to be used in MTM (see text).

Suppose we want to assess the comparative efficacy of all available pharmacological treatments for a specific psychiatric disorder. After carrying out a systematic review of all the available scientific evidence, only randomised controlled trials (RCTs) comparing treatment A versus treatment B (RCT 1) and treatment A versus treatment C (RCT 2) are found. Hence, for these two head-to-head comparisons (namely, A versus B and A versus C), evidence is provided by studies that compare the two pairs of treatments directly (Fig. 1 – Step 1). By contrast, there is no study which directly compares treatment B versus treatment C, so the direct estimate between these two treatments is missing. If we used a standard meta-analytical approach, there would be no way to determine the relative efficacy of treatments B and C, and this clearly limits the clinical applicability of the results. Using an MTM approach, however, indirect evidence can be obtained because studies that compared A versus B and A versus C can be analysed jointly, as follows. Treatment A is present in both RCTs (Fig. 1 – Step 2), so it is possible to establish how much better (or worse) treatments B and C are relative to the ‘common’ comparator A, and hence to calculate the indirect estimate between treatments B and C via treatment A (Fig. 1 – Step 3). For example, if treatment B is better than treatment A, reducing symptoms on average by 7 units on a rating scale, and treatment C is better than treatment A, reducing symptoms by 5 units, we can conclude that treatment B is better than treatment C by a mean difference of 2 units. In this way, it is possible to estimate the relative efficacy for all three comparisons, notwithstanding the lack of a direct comparison between treatments B and C. The combination of direct and indirect estimates into a single effect size not only provides information on missing comparisons, but can also increase the precision of treatment estimates for comparisons where direct evidence already exists, narrowing confidence intervals and strengthening inferences about the relative efficacy of two treatments. Hence, going back to our previous example, if direct evidence were available for treatment B versus treatment C, we could merge this information with the indirect estimate (via treatment A) into a mixed effect size, obtaining overall estimates with maximum precision. MTM originated as the extension of this idea of merging direct and indirect evidence to a full network of comparisons (Lu & Ades, 2004; Salanti et al. 2008).
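In more formal terms, this worked example corresponds to an adjusted indirect comparison. The notation below is generic and is not drawn from the article: if d_AB and d_AC denote the estimated effects of treatments B and C relative to the common comparator A, the indirect estimate of B versus C and its variance are

\hat{d}^{\mathrm{ind}}_{BC} = \hat{d}_{AB} - \hat{d}_{AC}, \qquad \mathrm{Var}\big(\hat{d}^{\mathrm{ind}}_{BC}\big) = \mathrm{Var}\big(\hat{d}_{AB}\big) + \mathrm{Var}\big(\hat{d}_{AC}\big),

giving 7 − 5 = 2 units in favour of B in the example. When a direct estimate of B versus C is also available, the mixed effect size mentioned above can be obtained, under a simple fixed-effect assumption, as the inverse-variance weighted average

\hat{d}^{\mathrm{mixed}}_{BC} = \frac{w_{\mathrm{dir}}\,\hat{d}^{\mathrm{dir}}_{BC} + w_{\mathrm{ind}}\,\hat{d}^{\mathrm{ind}}_{BC}}{w_{\mathrm{dir}} + w_{\mathrm{ind}}}, \qquad w = 1/\mathrm{Var}(\hat{d}),

whose variance, 1/(w_dir + w_ind), is smaller than that of either component alone, which is why the confidence interval narrows.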

Another fruitful role of the MTM technique is to facilitate simultaneous inference regarding all treatments, in order to rank them according to any outcome of interest, for instance efficacy and acceptability (Cipriani et al. 2011b; Salanti et al. 2011). Using MTM within the frame of a more complex statistical procedure, it is possible to calculate the probability that each treatment is the most effective (first-best) regimen, the second-best, the third-best and so on, and thus to rank treatments according to this hierarchy. This is a straightforward and easily understood way of presenting MTM results, above all for clinicians who want to know which treatment is, on average, the best to prescribe to patients (Salanti et al. 2009).
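As a rough illustration of how such ranking probabilities can be computed, the following sketch (in Python with NumPy; the treatment names, effect values and the simulation itself are illustrative assumptions rather than part of any real MTM) draws samples of the mean symptom reduction for three hypothetical treatments and counts how often each one ranks first, second or third.

import numpy as np

# Hypothetical posterior draws of mean symptom reduction for treatments A, B and C.
# In a real MTM these draws would come from the fitted network model, not a simulation.
rng = np.random.default_rng(seed=1)
names = ["A", "B", "C"]
samples = np.column_stack([
    rng.normal(loc=0.0, scale=2.0, size=10000),  # treatment A (assumed effect)
    rng.normal(loc=7.0, scale=2.0, size=10000),  # treatment B (assumed effect)
    rng.normal(loc=5.0, scale=2.0, size=10000),  # treatment C (assumed effect)
])

# Rank treatments within each draw: larger symptom reduction = better = rank 1.
order = np.argsort(-samples, axis=1)   # treatment indices from best to worst, per draw
ranks = np.argsort(order, axis=1) + 1  # 1-based rank of each treatment, per draw

# Probability of each treatment being first-best, second-best and third-best.
for i, name in enumerate(names):
    probs = [(ranks[:, i] == r).mean() for r in (1, 2, 3)]
    print(name, [round(float(p), 2) for p in probs])

With these assumed values, treatment B would have the highest probability of being ranked first, mirroring the worked example above.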

Recently, MTMs have become more widely employed and more in demand, given the increased complexity of the analyses that underpin clinical guidelines and health technology appraisals (Barbui & Cipriani, 2011). Expert statistical support, as well as subject expertise, is required for carrying out MTMs and interpreting their results. Several applications of the methodology have demonstrated the benefits of a joint analysis, but MTM approaches are far from being established practice in the medical literature. Concerns have been expressed about the validity of MTM methods because they rely on assumptions that are difficult to test (Salanti et al. 2009). Although randomised evidence is used and MTM techniques preserve randomisation, indirect evidence is not randomised evidence, as treatments have originally been compared within but not across studies. Indirect evidence may therefore suffer from the biases of observational studies (i.e. confounding or selection bias). In this respect, direct evidence remains more robust, and in situations where both direct and indirect comparisons are available in a review, MTM should be used to supplement, rather than replace, the direct comparisons. Several techniques exist which can account for, but not eliminate, the impact of effect modifiers across studies involving different interventions (Salanti et al. 2009). However, recent empirical evidence suggests that direct and indirect evidence agree in the majority of cases and that methods based on indirect evidence (such as MTM) can address biases that cannot be addressed in a standard meta-analysis, such as sponsorship bias and optimism bias (Song et al. 2008; Salanti et al. 2010).
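One simple way to quantify the agreement between direct and indirect evidence referred to above, again in generic notation not drawn from the article, is to contrast the two estimates of the same comparison:

\hat{\omega}_{BC} = \hat{d}^{\mathrm{dir}}_{BC} - \hat{d}^{\mathrm{ind}}_{BC}, \qquad z = \frac{\hat{\omega}_{BC}}{\sqrt{\mathrm{Var}\big(\hat{d}^{\mathrm{dir}}_{BC}\big) + \mathrm{Var}\big(\hat{d}^{\mathrm{ind}}_{BC}\big)}},

where a large absolute value of z suggests inconsistency between the two sources of evidence and prompts a closer examination of possible effect modifiers across the trials involved.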

Acknowledgement

G.S. acknowledges research funding support from the European Research Council (Grant Agreement Number 260559 IMMA).

Footnotes

This Section of Epidemiology and Psychiatric Sciences appears regularly in each issue of the Journal to cover methodological aspects related to the design, conduct, reporting and interpretation of clinical and epidemiological studies. The aim of these Editorials is to help develop a more critical attitude towards research findings published in the international literature, to promote original research projects with higher methodological standards, and to implement the most relevant results of research in everyday clinical practice.

Corrado Barbui, Section Editor and Michele Tansella, Editor EPS

References

Barbui, C, Cipriani, A (2011). What are evidence-based treatment recommendations? Epidemiology and Psychiatric Sciences 20, 29–31.
Caldwell, DM, Ades, AE, Higgins, JPT (2005). Simultaneous comparison of multiple treatments: combining direct and indirect evidence. British Medical Journal 331, 897–900.
Cipriani, A, Furukawa, TA, Barbui, C (2011a). What is a Cochrane review? Epidemiology and Psychiatric Sciences 20, 231–233.
Cipriani, A, Barbui, C, Salanti, G, Rendell, J, Brown, R, Stockton, S, Purgato, M, Spineli, LM, Goodwin, GM, Geddes, JR (2011b). Comparative efficacy and acceptability of antimanic drugs in acute mania: a multiple-treatments meta-analysis. Lancet 378, 1306–1315.
Higgins, JPT, Green, S (eds) (2011). Cochrane Handbook for Systematic Reviews of Interventions, Version 5.1.0 [updated March 2011]. The Cochrane Collaboration (http://www.cochrane-handbook.org).
Lu, G, Ades, AE (2004). Combination of direct and indirect evidence in mixed treatment comparisons. Statistics in Medicine 23, 3105–3124.
Salanti, G, Higgins, JP, Ades, AE, Ioannidis, JP (2008). Evaluation of networks of randomized trials. Statistical Methods in Medical Research 17, 279–301.
Salanti, G, Marinho, V, Higgins, JP (2009). A case study of multiple-treatments meta-analysis demonstrates that covariates should be considered. Journal of Clinical Epidemiology 62, 857–864.
Salanti, G, Dias, S, Welton, NJ, Ades, AE, Golfinopoulos, V, Kyrgiou, M, Mauri, D, Ioannidis, JP (2010). Evaluating novel agent effects in multiple-treatments meta-regression. Statistics in Medicine 29, 2369–2383.
Salanti, G, Ades, AE, Ioannidis, JP (2011). Graphical methods and numerical summaries for presenting results from multiple-treatment meta-analysis: an overview and tutorial. Journal of Clinical Epidemiology 64, 163–171.
Song, F, Harvey, I, Lilford, R (2008). Adjusted indirect comparison may be less biased than direct comparison for evaluating new pharmaceutical interventions. Journal of Clinical Epidemiology 61, 455–463.