
Simultaneous evaluation of the harms and benefits of treatments in randomized clinical trials: demonstration of a new approach

Published online by Cambridge University Press:  24 August 2011

E. Frank*, D. J. Kupfer, P. Rucci, M. Lotz-Wallace, J. Levenson, J. Fournier and H. C. Kraemer

Affiliation: University of Pittsburgh School of Medicine, Pittsburgh, PA, USA

*Address for correspondence: E. Frank, Ph.D., Western Psychiatric Institute and Clinic, 3811 O'Hara Street, Pittsburgh, PA 15213, USA. (Email: [email protected])

Abstract

Background

One aim of personalized medicine is to determine which treatment is to be preferred for an individual patient, given all available patient information. In mental health in particular, however, no single objective, reliable outcome measure exists that is sensitive to the crucial individual differences among patients.

Method

We examined the feasibility of quantifying the total clinical value provided by a treatment (reflecting both its harms and its benefits) in a single metric. An expert panel compared 100 pairs of patients, one from each treatment group, drawn from a randomized clinical trial (RCT) comparing interpersonal psychotherapy (IPT) with escitalopram, and selected the patient in each pair with the preferred outcome, considering both benefits and harms.
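As a rough illustration of this pairing step (a minimal sketch in Python, not the authors' code; the patient lists, pair count and seed are assumptions), cross-arm pairs for expert review might be drawn as follows:

import random

def sample_pairs(ipt_patients, escitalopram_patients, n_pairs=100, seed=0):
    """Draw n_pairs of (IPT patient, escitalopram patient) for the expert panel."""
    rng = random.Random(seed)
    return [(rng.choice(ipt_patients), rng.choice(escitalopram_patients))
            for _ in range(n_pairs)]

# For each pair, the panel records which member had the preferred overall
# outcome, weighing benefits against harms.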

Results

From these results, an integrated preference score (IPS) was derived, such that the difference between any two patients' IPSs would predict the clinicians' preference. This IPS was then computed for all patients in the RCT. A second set of 100 pairs was rated by the panel, and their preferences were highly correlated with the IPS differences (r=0.84). Finally, the IPS was used as the outcome measure for comparing IPT and escitalopram; the 95% confidence interval (CI) for the effect size indicated clinical equivalence of the two treatments.
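One way to realize such a score is to fit weights on patient outcome summaries so that weighted-score differences predict the panel's pairwise choices. The sketch below is illustrative only: the feature set, the logistic model on score differences, Cohen's d and the percentile bootstrap are assumptions, not the authors' actual procedure.

import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LogisticRegression

def fit_ips_weights(features_a, features_b, panel_prefers_a):
    """Fit weights w so that w.(x_a - x_b) predicts which patient the panel preferred.

    features_a, features_b: (n_pairs, n_features) outcome summaries (e.g. symptom
    change, side-effect burden) for the two members of each pair;
    panel_prefers_a is 1 when patient A was preferred, 0 otherwise.
    """
    diffs = features_a - features_b
    model = LogisticRegression(fit_intercept=False)  # only score differences matter
    model.fit(diffs, panel_prefers_a)
    return model.coef_.ravel()

def ips(weights, features):
    """Integrated preference score: a weighted sum of a patient's outcome features."""
    return features @ weights

def validate(weights, features_a, features_b, panel_prefers_a):
    """Correlate IPS differences with panel preferences on a new set of pairs."""
    score_diff = ips(weights, features_a) - ips(weights, features_b)
    return pearsonr(score_diff, panel_prefers_a)[0]

def cohens_d(scores_ipt, scores_escitalopram):
    """Standardized difference in mean IPS between the two treatment arms."""
    pooled_sd = np.sqrt((scores_ipt.var(ddof=1) + scores_escitalopram.var(ddof=1)) / 2)
    return (scores_ipt.mean() - scores_escitalopram.mean()) / pooled_sd

def bootstrap_ci(scores_ipt, scores_escitalopram, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI (default 95%) for the effect size on the IPS."""
    rng = np.random.default_rng(seed)
    stats = [cohens_d(rng.choice(scores_ipt, len(scores_ipt), replace=True),
                      rng.choice(scores_escitalopram, len(scores_escitalopram), replace=True))
             for _ in range(n_boot)]
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])

In this sketch the fitted weights define the IPS for every trial participant; held-out pairs check that score differences track panel preferences, and the bootstrap interval plays the role of the effect-size CI used to judge equivalence.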

Conclusions

A metric that combines benefits and harms of treatments could increase the value of RCTs by making clearer which treatments are preferable and, ultimately, for whom. Such methods result in more precise estimation of effect sizes, without increasing the required sample size.

Type: Original Articles
Copyright: © Cambridge University Press 2011

