
A new generation of pragmatic trials of psychosocial interventions is needed

Published online by Cambridge University Press:  20 March 2013

M. Ruggeri*
Affiliation:
Department of Public Health and Community Medicine, Section of Psychiatry, University of Verona, Verona, Italy
A. Lasalvia
Affiliation:
Department of Public Health and Community Medicine, Section of Psychiatry, University of Verona, Verona, Italy
C. Bonetto
Affiliation:
Department of Public Health and Community Medicine, Section of Psychiatry, University of Verona, Verona, Italy
*
*Address for correspondence: Professor M. Ruggeri, Department of Public Health and Community Medicine, Section of Psychiatry, University of Verona, Policlinico G.B. Rossi, Piazzale L.A. Scuro 10, 37134 Verona, Italy. (Email: [email protected])

Abstract

This Editorial addresses the crucial issue of which research methodology is most suited for capturing the complexity of psychosocial interventions conducted in ‘real world’ mental health settings. It first examines conventional randomized controlled trial (RCT) methodology and critically appraises its strengths and weaknesses. It then considers the specificity of mental health care treatments and defines the term ‘complex’ intervention and its implications for RCT design. The salient features of pragmatic RCTs aimed at generating evidence of psychosocial intervention effectiveness are then described. Subsequently, the conceptualization of pragmatic RCTs, and of their further developments – which we propose to call ‘new generation’ pragmatic trials – in the broader routine mental health service context, is explored. Helpful tools for planning pragmatic RCTs, such as the CONSORT extension for pragmatic trials, and the PRECIS tool are also examined. We then discuss some practical challenges that are involved in the design and implementation of pragmatic trials based on our own experience in conducting the GET UP PIANO Trial. Lastly, we speculate on the ways in which current ideas on the purpose, scope and ethics of mental health care research may determine further challenges for clinical research and evidence-based practice.

Type
Editorials
Copyright
Copyright © Cambridge University Press 2013 

Introduction

Complex interventions are the rule rather than the exception in mental health care, and their effectiveness has become a pressing question in this era of limited economic and personnel resources (Becker & Pushner, 2013). The randomized controlled trial (RCT) has traditionally been considered the main tool for addressing this issue from a research perspective. Yet, current RCT methodology in mental health research essentially responds to a relatively limited set of questions, which predominantly pertain to pharmacological treatments. Although all RCTs have much in common – in terms of design, conduct, analysis and reporting – complex intervention trials (i.e. trials of interventions with several interacting components) (Chambless & Hollon, 1998) can pose specific challenges that require careful consideration if they are to yield high-quality, credible evidence that might modify or inform practice (Thornicroft et al. 1998).

We discuss herein some practical challenges that must be faced when designing and implementing trials intended to provide knowledge that is genuinely useful for disseminating evidence-based practices, drawing on our own experience in designing and conducting the GET UP PIANO Trial: a pragmatic cluster RCT that evaluated the effectiveness of a multi-component psychosocial intervention in a large epidemiologically based cohort of patients with first-episode psychosis, recruited from Italian public Community Mental Health Centers (CMHCs) located in a catchment area of about 10 million inhabitants (Ruggeri et al. 2012). The intervention lasted 9 months and consisted of an integrated package of evidence-based psychosocial interventions (i.e. cognitive behavioral therapy for patients, family intervention for family members and case management for patients and families) plus standard care (personalized outpatient psychopharmacological treatment, combined with non-specific supportive clinical management and non-specific informal support/educational sessions). The control group received standard care only.

Lastly, we speculate on the ways in which current ideas about the purpose, scope and ethics of mental health-care research may pose further challenges for clinical research.

Conventional RCT application limitations in mental health care

The RCT is considered the gold-standard methodology for testing the effectiveness of health treatments. Imported into psychiatric research, this approach has proved more readily applicable to pharmacotherapy (which tends to be a more standardized and well-defined intervention) than to psychological and social interventions. Mental health care RCTs aiming to assess the many complex aspects of psychosocial interventions – which are no less important in real-world practice than psychotropic drug prescription – tend to be undermined by methodological limitations, such as: (a) poor design, which oversimplifies the variables to be examined; (b) poor conceptualization and inaccurate measurement of process variables, with change usually being measured only at the end of treatment and not also during it; (c) over-interpretation of associations, with causality assumed even when the direction of an association is unclear; (d) neglect of modelling the plausible influence of important pre-treatment factors (moderators) on treatment effects; (e) difficulties in identifying the variables that might actually modulate observed changes (mediators); (f) underpowered trial samples, due to costs and study design constraints, which reduce a trial's ability to detect important effects.
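Limitation (f) is easy to quantify. A minimal sketch, using the standard normal approximation for a two-arm comparison of means; the effect size and sample sizes below are invented for illustration and not taken from any specific trial:

```python
import math
from statistics import NormalDist

def two_sample_power(effect_size: float, n_per_arm: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-arm trial to detect a standardized
    mean difference (Cohen's d), using the normal approximation."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)            # two-sided threshold
    noncentrality = effect_size * math.sqrt(n_per_arm / 2)  # signal in SE units
    return NormalDist().cdf(noncentrality - z_crit)

# Illustrative numbers: a small-to-moderate psychosocial effect (d = 0.3)
# is badly underpowered with 50 patients per arm.
print(round(two_sample_power(0.3, 50), 2))   # 0.32
print(round(two_sample_power(0.3, 250), 2))  # 0.92
```

The point of the sketch is simply that, for the modest effect sizes typical of psychosocial interventions, feasibly small trials may have well under a 50% chance of detecting a real effect.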

Therefore, the rigour of the conventional RCT design, as traditionally conceived, has turned out to be unsatisfactory for studying situations of high treatment complexity (Salmoirago-Blotcher & Ockene, 2009), such as those involved in psychosocial interventions (Slade & Priebe, 2001). In fact, in ‘real world’ mental health settings, interventions are usually tested for their effect on a broad range of outcome domains. Furthermore, they generally include different kinds of integrated pharmacological and psychosocial interventions, whose implementation can be mediated by a series of clinical and non-clinical variables, e.g. therapist-specific characteristics (including cultural background, training, interpersonal skills, caseloads, etc.) and features of the social-relational environment within which the treatment is delivered. Difficulties in conducting psychosocial RCTs in real-world care stem not only from the challenges associated with a given intervention's intrinsic complexity but also from the complexity and heterogeneity of the local service context (Harvey et al. 2011). Persons involved in a study – whether participants, health professionals or researchers – are influenced by their own beliefs, attitudes and experiences, which consciously or unconsciously affect the way in which they engage with the research process. This phenomenon creates (both positive and negative) cultural expectations, which affect the ways in which people engage with a trial. The impact of a given context and service culture on a trial has been acknowledged, particularly in relation to complex interventions (Campbell et al. 2007; Nastasi & Hitchcock, 2009), although trial designers have perhaps thus far failed to fully address the above-mentioned challenges when designing and implementing these types of clinical trials.

As an example, we report the procedure used in the GET UP PIANO Trial to take these issues into consideration. Specifically, to increase the overall acceptability of the trial, we chose not to deliver the experimental treatment through external psychotherapy experts, but rather to train the staff of the routine CMHCs allocated to the experimental intervention arm. This meant training staff in the experimental intervention during the pre-trial stage, with assessment of the competencies achieved, followed by ongoing supervision during the patient enrolment phase (Ruggeri et al. 2012; Poli et al. submitted for publication). This strategy indeed proved to be an important factor in securing a high degree of collaboration by CMHC staff in all trial procedures, with none of the participating CMHCs in the experimental arm dropping out during the study.

Beyond the conventional RCT

Clinicians have often held a negative view of conventional RCT use in mental health-care research, considering RCTs a reductionist tool that poorly represents complex clinical practice, and claiming that they disrupt clinical decision-making and clinical engagement with patients. Moreover, clinicians tend to consider the results of RCTs inapplicable or irrelevant to real clinical practice (Hotopf et al. 1999; Ruggeri et al. 2008), and, in general, they may resent having to adhere strictly to evidence-based practice, fearing that it will deprive them of leeway in employing their own competence and experience.

Conversely, clinicians have frequently assumed that qualitative methods are more suitable for use in clinical research (Bird et al. 2011). Paradoxically, however, the randomized design is particularly suited to testing complex treatments in a number of ways. For example, complex treatments by their nature involve multiple potential factors, both known and unknown, that influence outcomes: only an adequately powered randomized design allows these variables to be properly controlled. Furthermore, despite their limitations, the key principles of RCT methodology are to date considered to represent the gold standard: in the current evidence-based era, any attempt to deprive complex psychosocial interventions of this imprimatur risks undervaluing the results achieved in this field (Goldbeck & Vitiello, 2011; Melfsen et al. 2011).

Hotopf et al. (1999) championed the concept of pragmatic RCTs as a major attempt to extend RCT methodology to interventions conducted in ‘real-world’ services. The approach has since become a key tool for evaluating the effectiveness of psychotropic drugs and complex psychological interventions. Pragmatic trials provide a realistic compromise between observational studies (which have good external validity, at the expense of internal validity) and conventional RCTs (which have good internal validity, at the expense of external validity). Thus, the gap between clinicians' treatment concerns in everyday practice and the study designs able to test them may be narrowing, thereby allowing for the development of more targeted interventions and ultimately for service enhancement.

Seminal publications from the Medical Research Council (MRC, 2000; Campbell et al. 2000, 2007) provide useful guidance for developing and evaluating complex interventions. The MRC underscores the need for robust and rigorous evaluation of complex interventions, particularly in the area of psychological and psychosocial treatments. It promotes the use of experimental methods, provides information on alternatives to conventional RCTs and highlights situations in which such trials are impractical or undesirable. Specific dimensions of complexity are implicated in both the development and evaluation phases of research, such as: the ways in which a given intervention leads to change, lack of impact due to implementation failure, variability in individual-level outcomes, use of multiple primary outcomes and adaptation of intervention programmes to local settings (Craig et al. 2008). In fact, conventional RCTs may be inapplicable in these instances, and assessment of treatment effectiveness may require solutions involving special experimental designs, such as cluster randomized trials (Barbui & Cipriani, 2011; Campbell et al. 2012), stepped wedge designs (Hussey & Hughes, 2007), preference trials (Torgerson & Sibbald, 1998), randomized consent designs (Zelen, 2006) and ‘N of 1’ designs (Mahon et al. 1996). Furthermore, randomization may not always be necessary or appropriate, thus leaving a non-randomized design as the only choice (Mohr et al. 2009; Catts et al. 2010).

In the light of these considerations, the GET UP PIANO Trial deliberately adopted a cluster randomized controlled design, because implementing the complex intervention at the service level was more feasible than randomizing individual patients to the experimental or the control arm. Specifically, the clusters in the GET UP PIANO Trial were CMHCs operating for the Italian National Health Service in the catchment area. Notwithstanding the greater statistical complexity of this model, described by Dunn (2013) in this issue and confirmed in the GET UP Trial, cluster randomization seems to be the strategy of choice when the interventions to be implemented require some overall modification of service organization.
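One concrete aspect of that extra statistical complexity can be sketched with the standard design-effect formula, DEFF = 1 + (m − 1) × ICC, where m is the mean cluster size and ICC the intra-cluster correlation. The numbers below are purely illustrative and are not GET UP PIANO figures:

```python
import math

def design_effect(mean_cluster_size: float, icc: float) -> float:
    """DEFF = 1 + (m - 1) * ICC: the factor by which an individually
    randomized sample size must be inflated under cluster randomization."""
    return 1 + (mean_cluster_size - 1) * icc

def cluster_sample_size(n_individual: int, mean_cluster_size: float, icc: float) -> int:
    """Total sample size needed once the design effect is applied
    (rounded before ceil to guard against floating-point noise)."""
    return math.ceil(round(n_individual * design_effect(mean_cluster_size, icc), 9))

# Illustrative numbers: 20 patients per centre and an intra-cluster
# correlation of 0.05 nearly double the required sample size.
print(round(design_effect(20, 0.05), 2))   # 1.95
print(cluster_sample_size(300, 20, 0.05))  # 585
```

Even a modest intra-cluster correlation therefore carries a substantial cost in recruitment, which is one reason cluster designs demand the more careful power planning discussed by Dunn.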

The MRC Guidance places new emphasis on collaboration with clinicians and service users. This approach can be considered part of a recent cultural shift that views the generation of robust science-based knowledge on complexity as a core cultural concern, rather than as a peripheral, specialized or methodologically flawed discipline. Indeed, this new MRC conceptualization is yielding many fascinating intellectual and practical challenges for clinicians and researchers alike. In the crucial pre-trial stage of treatment modelling there is a risk of oversimplified reductionism, which overlooks key aspects. Thus, during the pre-trial phase, staff training and assessment of the competencies achieved must be undertaken; moreover, all treatment features must be robustly operationalized, a procedure that should result in the development of a manual. In this process, any form of rigidity that diminishes the possibility of responding to real patients' individual needs must be avoided by incorporating flexibility of response into the protocol itself (Hawe et al. 2004). In this regard, in the GET UP PIANO Trial, detailed intervention manuals based on international standards were developed and given to the staff providing the experimental interventions as the standard to be followed for treatment. Flexibility in the use of the manual was allowed, but a written explanation of the specific reasons for any deviation was required. Fidelity was measured throughout the trial using therapists' reports of their own sessions and audiotape recordings of therapy sessions. Therapist reports and audiotapes were to be rated at the end of the trial by an independent team using the Cognitive Therapy Scale-Revised (CTRS) (Blackburn et al. 2001) and the Cognitive Therapy for Psychosis Adherence Scale (CTPAS) (Startup et al. 2004), together with ad hoc checklists based on the specific trial intervention manuals. Staff were also supported in their clinical work by a team of expert psychotherapists assigned to each CMHC and received on-site supervision by external experts throughout the study.

The body of knowledge and experience discussed above has allowed the pragmatic approach to be extended further, in order to fully capture the complexity of psychosocial interventions in real-world care (Roy-Byrne et al. 2003; Ruggeri & Tansella, 2011; Ruggeri et al. 2012). We propose to call these kinds of trials ‘new generation’ pragmatic trials. An increasing number of them are being implemented in mental health services, paving the way for changing mental health professionals' perceptions of clinical trials and for testing the applicability of new methodological tools.

Tools for building on the pragmatic trial approach and implementing ‘new generation’ trials

Of great interest from this perspective is the CONSORT (Consolidated Standards of Reporting Trials) Statement extension (Boutron et al. 2008a, b), which specifically addresses the complexity of non-pharmacological treatments, including psychotherapy and behavioural interventions. Its checklist item extensions are worth mentioning here; they concern: (a) the description of the different intervention components (both experimental treatment and comparators), procedures for tailoring interventions to individual participants, details on the ways in which interventions are standardized and the ways in which care provider adherence to the protocol is assessed and, if necessary, improved; (b) details on the implementation of the experimental treatment and comparators; (c) the choice of comparator, reasons for lack of (or only partial) blinding, and unequal expertise of care providers and/or participating centres in each group.

Lastly, discussion of the trial's external validity, comparators, patients, care providers and services is required. The extended CONSORT flow diagram provides a box for reporting the number of centres and care providers in each group, and the distribution of participants per care provider and/or centre. The authors of the present editorial tested the feasibility and utility of the extended CONSORT procedure in the GET UP PIANO Trial and found it a very useful tool for identifying and monitoring the trial's context complexity and for performing continuous quality checks on its implementation. Of special interest is the blinding procedure used in the GET UP PIANO Trial, where complete blinding of patients, clinicians and raters working on site was not possible, because the cluster randomization design implied the intervention's implementation at the CMHC level, not at the patient level. However, every effort was made to preserve the independence of the raters: they were not involved in the treatment sessions, and any conflict of interest was carefully prevented and monitored. Primary outcomes (relapses and/or changes in psychopathology) were mostly objective clinical assessments, based on standardized instruments with clearly defined anchor points. The congruence of the different instruments' scores was cross-checked post hoc, and, whenever raw data were to be analysed, assessments were made by paired, independent members of the research team who were blinded to the randomization arm (Ruggeri et al. submitted for publication).

Another relevant issue to be aware of is that distinguishing between explanatory and pragmatic trials in real life is not easy (Schwartz & Lellouch, 1967). The view that explanatory and pragmatic procedures are mutually exclusive is now giving way to the idea that most trials have both explanatory and pragmatic aspects, thereby becoming ‘hybrid trials’ (Green & Dunn, 2008). Furthermore, pragmatism is not a merely dichotomous (absent/present) attribute, and is gradually being seen as existing on a continuum. It is noteworthy that, to provide a thorough measure of this aspect, Thorpe et al. (2009) have recently developed the PRagmatic-Explanatory Continuum Indicator Summary (PRECIS) tool.
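As a rough sketch of what placing a trial on such a continuum involves, consider scoring each design decision and summarizing the profile. The domain names and the 0–4 scale below are simplified stand-ins for the published PRECIS domains, and the scores are invented:

```python
# Hypothetical PRECIS-style scoring exercise. Domain names and the
# 0-4 scale are illustrative stand-ins, NOT the published tool's
# domains; a higher score means a more pragmatic design choice.
precis_scores = {
    "participant eligibility": 4,       # broad, routine-care criteria: pragmatic
    "intervention flexibility": 3,
    "practitioner expertise required": 2,
    "follow-up intensity": 1,           # intensive research follow-up: explanatory
    "primary outcome relevance": 4,
}

mean_score = sum(precis_scores.values()) / len(precis_scores)
print(f"Mean pragmatism score: {mean_score:.1f} / 4")  # 2.8 / 4
```

The value of the exercise lies less in the summary number than in the per-domain profile, which makes visible exactly where a given trial is explanatory and where it is pragmatic.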

The primary outcomes issue in ‘new generation’ pragmatic trials

When dealing with complex interventions, another key issue is how primary and secondary outcomes should be characterized. Conventional RCT methodology allows for only one pre-specified primary outcome measure, which tests the trial's key hypothesis (Green, 2006). Secondary outcomes and intermediate measures are permitted, but they only provide opportunities for testing wider aspects of the primary outcome. Moreover, they are used only sparingly and are considered of marginal value. This rigorous stance is aimed at obviating any post hoc temptation to go on ‘fishing trips’, but researchers and clinicians are increasingly questioning whether this degree of rigour is actually appropriate for testing complex interventions, which yield important outcomes that are unlikely to be unitary or simple and which present similarly complex intermediate effects.

In addition to the choice of single or multiple outcomes, which influences sample size calculation, the planning of outcomes in complex intervention trials calls for careful examination of other critical aspects. Outcomes selected for trials conducted in routine settings frequently include patients' self-report measures. In the GET UP PIANO Trial, for example, two primary outcome measures were defined in order to detect more finely tuned clinical changes: the first was based on symptom severity assessed by the independent raters (measured using the PANSS; Kay et al. 1987), whereas the second was based on the subjective appraisal of psychotic symptoms as reported by the patients themselves (measured using the PSYRATS; Drake et al. 2007).
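Pre-specifying two co-primary outcomes has a direct statistical consequence: the family-wise error rate must be controlled. A Bonferroni split is the simplest option; whether the GET UP PIANO analysis plan used this or another adjustment is not stated here, so the sketch below is purely illustrative:

```python
# Controlling the family-wise error rate across two co-primary outcomes
# with the simplest (Bonferroni) correction. Illustrative only.
N_PRIMARY_OUTCOMES = 2
ALPHA_OVERALL = 0.05
ALPHA_PER_OUTCOME = ALPHA_OVERALL / N_PRIMARY_OUTCOMES  # 0.025

def significant(p_value: float, alpha: float = ALPHA_PER_OUTCOME) -> bool:
    """Declare an outcome significant only at the adjusted threshold."""
    return p_value < alpha

print(significant(0.030))  # False: passes 0.05 but not the adjusted 0.025
print(significant(0.012))  # True
```

The adjustment also feeds back into sample size planning, since each outcome must now be powered at the stricter threshold.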

A growing body of literature has shown that patients' pre-intervention characteristics can affect their subjective reports of outcome, and that some heterogeneity in patient-reported outcomes is likely driven by baseline heterogeneity in the enrolled population (Candy et al. 2011). The exploration of heterogeneity in ‘new generation’ pragmatic trials could represent an important methodological advance, because treatment effect heterogeneity may be due not only to outcome variability (which is controllable through statistical approaches) but also to non-random variability. This latter type of variability can be attributed to patient, treatment, provider or environmental factors, which require more complex epidemiological methods of analysis. In this context, it might paradoxically happen that the kind of outcomes selected contributes to the heterogeneity of treatment effects (Kent et al. 2008).

Moreover, some subgroups of individuals might be more greatly affected by treatment than others, with clinically crucial implications. In this light, subgroup analysis – which has traditionally been considered a subordinate part of trial results (frequently relegated to the role of exploratory analysis) – can instead make a major contribution to understanding the intervention's mechanism of action. Both moderators and mediators can play an important role in this process. For example, moderators could be crucial pre-treatment variables that help identify the types of individuals and conditions in which treatment has a certain effect on outcome. Moreover, treatment could have a causal effect on mediators, which are thus the intervention's real targets, though frequently disregarded. Indeed, mediator identification can be used to speculate on modifications to treatment strategies aimed at augmenting effectiveness or reducing costs (Kraemer et al. 2001; Kraemer, 2013). Consistent with this, in the GET UP PIANO Trial (Ruggeri et al. 2012), a series of exploratory analyses was planned to compare outcomes across groups of patients with specific characteristics identified a priori (such as gender, age of onset, duration of untreated psychosis, etc.), and pathways to outcome have been analysed in these different subgroups (Ruggeri et al. in preparation).
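A moderator analysis of this kind is typically operationalized as a treatment-by-baseline interaction term in the outcome model. The following minimal sketch uses simulated data; the variable names (e.g. a duration-of-untreated-psychosis moderator) are illustrative and are not taken from the GET UP dataset:

```python
import numpy as np

# Simulated moderator analysis: the moderator shows up as a non-zero
# treatment-by-baseline interaction coefficient in the outcome model.
rng = np.random.default_rng(0)
n = 2000
treatment = rng.integers(0, 2, n)   # 0 = control arm, 1 = experimental arm
dup = rng.normal(12.0, 4.0, n)      # hypothetical baseline moderator (months)
# Data-generating model: the benefit of treatment shrinks as DUP grows.
outcome = (5.0 + 2.0 * treatment - 0.1 * dup
           - 0.15 * treatment * dup + rng.normal(0.0, 1.0, n))

# Fit outcome ~ 1 + treatment + dup + treatment:dup by least squares.
X = np.column_stack([np.ones(n), treatment, dup, treatment * dup])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
for name, b in zip(["intercept", "treatment", "dup", "treatment:dup"], beta):
    print(f"{name:>13s}: {b:+.3f}")
# A clearly non-zero treatment:dup coefficient (near -0.15 here) flags
# DUP as a moderator of the treatment effect.
```

Because such interaction tests are usually underpowered relative to main-effect tests, subgroup analyses of this type are best pre-specified, as in the GET UP PIANO plan, rather than run post hoc.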

Conclusion

In conclusion, the issue of how best to evaluate the effectiveness of complex interventions in real-world services poses some key questions: Does a specific type of intervention work? How does it work? Is it cost-effective? What components are responsible for its efficacy, and for cost- and patient-related outcomes? Can it be tailored to work more effectively or cost-effectively with particular types of patient? (Patsopoulos, 2011). The challenge for trial methodologists is to develop ways of designing trials that answer these questions without abandoning methodological rigour.

By further exploring these questions, remarkable progress can be made, and many challenges posed nowadays by the issue of trial design might represent landmark opportunities in the advancement of mental health service research and implementation of evidence-based interventions.

Acknowledgements

This Editorial was conceived in the frame of the theoretical and experimental work conducted between 2007 and 2012 to devise and implement the GET UP (Genetics Endophenotype and Treatment: Understanding early Psychosis) Research Programme (National Coordinator: Mirella Ruggeri), funded by the Italian Ministry of Health as part of a National Health Care Research Program (Ricerca Sanitaria Finalizzata) coordinated by the Academic Hospital of Verona (Azienda Ospedaliera Universitaria Integrata Verona). The authors wish to express their deepest gratitude to the researchers and clinicians who contributed to the implementation of the GET UP Program. See http://ww.psychiatry.univr.it/page_getup for the full list of those involved in the GET UP GROUP.

Conflict of Interest

None.

Financial Support

This work was supported by a grant from the Italian Ministry of Health (National Health Care Research Program, Ricerca Sanitaria Finalizzata), with funds assigned to the Genetics Endophenotype and Treatment: Understanding early Psychosis (GET UP) Research Programme.

Ethical Standards

The authors assert that all procedures contributing to this work comply with the ethical standards of the relevant national and institutional committees on human experimentation and with the Helsinki Declaration of 1975, as revised in 2008.

References

Barbui, C, Cipriani, A (2011). Cluster randomised trials. Epidemiology and Psychiatric Sciences 20, 307309.Google Scholar
Becker, T, Pushner, B (2013). Complex interventions in mental health services research: potential, limitations and challenges. In Improving Mental Health Care: the Global Challenge (eds. Thornicroft, G., Ruggeri, M. and Goldberg, D.), John Wiley: Chichester.Google Scholar
Bird, L, Arthur, A, Cox, K (2011). Did the trial kill the intervention? Experiences from the development, implementation and evaluation of a complex intervention. BMC Medical Research Methodology 11, 24.CrossRefGoogle ScholarPubMed
Blackburn, IM, James, IA, Milne, DL, Baker, C, Standart, S, Garland, A, Reichelt, FK (2001). The Revised Cognitive therapy scale (CTS-R). Psychometric properties. Behavioural and Cognitive Psychotherapy 29, 431446.Google Scholar
Boutron, I, Moher, D, Altman, DG, Schulz, KF, Ravaud, P (2008 a). Extending the CONSORT Statement to randomized trials of nonpharmacologic treatment: explanation and elaboration. Annals of Internal Medicine 148, 295309.Google Scholar
Boutron, I, Moher, D, Altman, DG, Schulz, KF, Ravaud, P (2008 b). Methods and processes of the CONSORT group: example of an extension for trials assessing nonpharmacologic treatments. Annals of Internal Medicine 148, W6066.Google Scholar
Campbell, M, Fitzpatrick, R, Haines, A, Kinmonth, AL, Sandercock, P, Spiegelhalter, D, Tyrer, P (2000). Framework for design and evaluation of complex interventions to improve health. British Medical Journal 321, 694696.Google Scholar
Campbell, MK, Piaggio, G, Elbourne, DR, Altman, DG, for the CONSORT Group (2012). CONSORT 2010 statement: extension to cluster randomised trials. British Medical Journal 345, e5661.CrossRefGoogle ScholarPubMed
Campbell, NC, Murray, E, Darbyshire, J, Emery, J, Farmer, A, Griffiths, F, Guthrie, B, Lester, H, Wilson, P, Kinmonth, AL (2007). Designing and evaluating complex interventions to improve health care. British Medical Journal 334, 455459.CrossRefGoogle ScholarPubMed
Candy, B, King, M, Jones, L, Oliver, S (2011). Using qualitative synthesis to explore heterogeneity of complex interventions. BMC Medical Research Methodology 11, 124132.Google Scholar
Catts, SV, O'Toole, BI, Carr, VJ, Lewin, T, Neil, A, Harris, MG, Frost, AD, Crissman, BR, Eadie, K, Evans, RW (2010). Appraising evidence for intervention effectiveness in early psychosis: conceptual framework and review of evaluation approaches. Australian and New Zealand Journal of Psychiatry 44, 195219.Google Scholar
Chambless, DL, Hollon, SD (1998). Defining empirically supported therapies. Journal of Consulting and Clinical Psychology 66, 718.Google Scholar
Craig, P, Dieppe, P, Macintyre, S, Mitchie, S, Nazareth, I, Petticrew, M (2008). Developing and evaluating complex interventions: the new Medical Research Council guidance. British Medical Journal 337, 979983.Google Scholar
Drake, R, Haddock, G, Tarrier, N, Bentall, R, Lewis, S (2007). The psychotic symptom rating scales (PSYRATS): their usefulness and properties in first episode psychosis. Schizophrenia Research 89, 119122.Google Scholar
Dunn, G (2013). Pragmatic trials of complex psychosocial interventions: methodological challenges. Epidemiology and Psychiatric Sciences, this issue.
Goldbeck, L, Vitiello, B (2011). Reporting clinical trials of psychosocial interventions in child and adolescent psychiatry and mental health. Child and Adolescent Psychiatry and Mental Health 5, 4.
Green, J (2006). The evolving randomised controlled trial in mental health: studying complexity and treatment process. Advances in Psychiatric Treatment 12, 268–279.
Green, J, Dunn, G (2008). Using intervention trials in developmental psychiatry to illuminate basic science. British Journal of Psychiatry 192, 323–325.
Harvey, C, Killaspy, H, Martino, S, White, S, Priebe, S, Wright, C, Johnson, S (2011). A comparison of the implementation of assertive community treatment in Melbourne, Australia and London, England. Epidemiology and Psychiatric Sciences 20, 151–161.
Hawe, P, Shiell, A, Riley, T (2004). Complex interventions: how "out of control" can a randomised controlled trial be? British Medical Journal 328, 1561–1563.
Hotopf, M, Churchill, R, Lewis, G (1999). Pragmatic randomised controlled trials in psychiatry. British Journal of Psychiatry 175, 217–223.
Hussey, MA, Hughes, JP (2007). Design and analysis of stepped wedge cluster randomized trials. Contemporary Clinical Trials 28, 182–191.
Kay, SR, Fiszbein, A, Opler, LA (1987). The positive and negative syndrome scale for schizophrenia. Schizophrenia Bulletin 13, 261–276.
Kent, DM, Alsheikh-Ali, A, Hayward, RA (2008). Competing risk and heterogeneity of treatment effect in clinical trials. Trials 9, 30–35.
Kraemer, HC (2013). Discovering, comparing, and combining moderators of treatment on outcome after randomized clinical trials: a parametric approach. Statistics in Medicine Jan 10. doi: 10.1002/sim.5734. [Epub ahead of print].
Kraemer, HC, Stice, E, Kazdin, A, Offord, D, Kupfer, D (2001). How do risk factors work together? Mediators, moderators, and independent, overlapping, and proxy risk factors. American Journal of Psychiatry 158, 848–856.
Mahon, J, Laupacis, A, Donner, A, Wood, T (1996). Randomised study of n of 1 trials versus standard practice. British Medical Journal 312, 1069–1074.
Medical Research Council (2000). A Framework for the Development and Evaluation of RCTs for Complex Interventions to Improve Health. MRC: London.
Melfsen, S, Kuehnemund, M, Schwieger, J, Warnke, A, Stadler, C, Poustka, F, Stangier, U (2011). Cognitive behavioral therapy of socially phobic children focusing on cognition: a randomised wait-list control study. Child and Adolescent Psychiatry and Mental Health 5, 5.
Mohr, DC, Spring, B, Freedland, KE, Beckner, V, Arean, P, Hollon, SD, Ockene, J, Kaplan, R (2009). The selection and design of control conditions for randomized controlled trials of psychological interventions. Psychotherapy and Psychosomatics 78, 275–284.
Nastasi, BK, Hitchcock, J (2009). Challenges of evaluating multilevel interventions. American Journal of Community Psychology 43, 360–376.
Patsopoulos, NA (2011). A pragmatic view on pragmatic trials. Dialogues in Clinical Neuroscience 13, 217–224.
Roy-Byrne, PP, Sherbourne, CD, Craske, MG, Stein, MB, Katon, W, Sullivan, G, Means-Christensen, A, Bystritsky, A (2003). Moving treatment research from clinical trials to the real world. Psychiatric Services 54, 327–332.
Ruggeri, M, Tansella, M (2011). New perspectives in the psychotherapy of psychoses at onset: evidence, effectiveness, flexibility, and fidelity. Epidemiology and Psychiatric Sciences 20, 107–111.
Ruggeri, M, Lora, A, Semisa, D, SIEP-DIRECT'S Group (2008). The SIEP-DIRECT'S Project on the discrepancy between routine practice and evidence. An outline of main findings and practical implications for the future of community based mental health services. Epidemiologia e Psichiatria Sociale 17, 358–368.
Ruggeri, M, Bonetto, C, Lasalvia, A, De Girolamo, G, Fioritti, A, Rucci, P, Santonastaso, P, Neri, G, Pileggi, F, Ghigi, D, Miceli, M, Scarone, S, Cocchi, A, Torresani, S, Faravelli, C, Zimmermann, C, Meneghelli, A, Cremonese, C, Scocco, P, Leuci, E, Mazzi, F, Gennarelli, M, Brambilla, P, Bissoli, S, Bertani, ME, Tosato, S, De Santi, K, Poli, S, Cristofalo, D, Tansella, M (2012). A multi-element psychosocial intervention for early psychosis (GET UP PIANO TRIAL) conducted in a catchment area of 10 million inhabitants: study protocol for a pragmatic cluster randomized controlled trial. Trials 13, 73.
Salmoirago-Blotcher, E, Ockene, IS (2009). Methodological limitations of psychosocial interventions in patients with an implantable cardioverter-defibrillator (ICD): a systematic review. BMC Cardiovascular Disorders 9, 56.
Schwartz, D, Lellouch, J (1967). Explanatory and pragmatic attitudes in therapeutical trials. Journal of Chronic Diseases 20, 637–648.
Slade, M, Priebe, S (2001). Are randomised controlled trials the only gold that glitters? British Journal of Psychiatry 179, 286–287.
Startup, M, Jackson, MC, Bendix, S (2004). North Wales randomized controlled trial of cognitive behaviour therapy for acute schizophrenia spectrum disorders: outcomes at 6 and 12 months. Psychological Medicine 34, 413–422.
Thornicroft, G, Wykes, T, Holloway, F, Johnson, S, Szmukler, G (1998). From efficacy to effectiveness in community mental health services. PRiSM Psychosis Study 10. British Journal of Psychiatry 173, 423–427.
Thorpe, KE, Zwarenstein, M, Oxman, AD, Treweek, S, Furberg, CD, Altman, DG, Tunis, S, Bergel, E, Harvey, I, Magid, DJ, Chalkidou, K (2009). A pragmatic-explanatory continuum indicator summary (PRECIS): a tool to help trial designers. Journal of Clinical Epidemiology 62, 464–475.
Torgerson, D, Sibbald, B (1998). Understanding controlled trials: what is a patient preference trial? British Medical Journal 316, 360.
Zelen, M (1990). Randomized consent designs for clinical trials: an update. Statistics in Medicine 9, 645–656.