
Measuring patients' views: a bifactor model of distinct patient-reported outcomes in psychosis

Published online by Cambridge University Press: 21 April 2010

U. Reininghaus*
Affiliation:
Queen Mary University of London, Unit for Social and Community Psychiatry, Barts and the London School of Medicine, London, UK
R. McCabe
Affiliation:
Queen Mary University of London, Unit for Social and Community Psychiatry, Barts and the London School of Medicine, London, UK
T. Burns
Affiliation:
University Department of Psychiatry, Warneford Hospital, Oxford, UK
T. Croudace
Affiliation:
Department of Psychiatry, University of Cambridge, UK
S. Priebe
Affiliation:
Queen Mary University of London, Unit for Social and Community Psychiatry, Barts and the London School of Medicine, London, UK
*Address for correspondence: Mr U. Reininghaus, Newham Centre for Mental Health, London E13 8SP, UK. (Email: [email protected])

Abstract

Background

Patient-reported outcomes (PROs) are widely used for evaluating the care of patients with psychosis. Previous studies have reported considerable overlap in the information captured by measures designed to assess different outcomes. This may impair the validity of PROs and make an a priori choice of the most appropriate measure difficult when assessing treatment benefits for patients. We aimed to investigate the extent to which four widely established PROs [subjective quality of life (SQOL), needs for care, treatment satisfaction and the therapeutic relationship] provide distinct information independent of this overlap.

Method

Analyses, based on item response modelling, were conducted on measures of SQOL, needs for care, treatment satisfaction and the therapeutic relationship in two large samples of patients with psychosis.
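The abstract does not state the model equations; as a point of reference, a bifactor graded item response model of the kind referred to here typically takes the following general form (the notation below is assumed, not taken from the article). For item $i$ belonging to concept domain $d(i)$, the probability of a response in category $k$ or above is modelled as

$$P\bigl(Y_{i} \ge k \mid \theta_{g}, \theta_{d(i)}\bigr) = \frac{1}{1 + \exp\bigl[-\bigl(a_{ig}\,\theta_{g} + a_{id}\,\theta_{d(i)} - b_{ik}\bigr)\bigr]},$$

where $\theta_{g}$ is a general factor (here, a general appraisal tendency), $\theta_{d(i)}$ is the specific factor for the item's concept (SQOL, needs for care, treatment satisfaction or the therapeutic relationship), $a_{ig}$ and $a_{id}$ are discrimination parameters on the general and specific factors, $b_{ik}$ are category thresholds, and the general and specific factors are specified as orthogonal.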

Results

In both samples, a bifactor model fitted the data best, suggesting concept factors sufficiently strong to support four distinct PRO scales. These scales were independent of the overlap across measures arising from patients' general tendency towards positive or negative ratings and from shared domain content. The overlap partially impaired the ability of items to discriminate precisely between patients with lower and higher PRO levels. We found that widely used sum scores were strongly affected by the general appraisal tendency.
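To make the last point concrete, the brief sketch below simulates data with a bifactor structure and shows that a raw sum score correlates far more strongly with the general factor than with any concept-specific factor. It is an illustrative simulation under assumed loadings and sample sizes, not the authors' analysis; all names and values in it are hypothetical.

```python
# Illustrative simulation (not the authors' analysis): when items load on both a
# general appraisal factor and concept-specific factors, the raw sum score
# tracks the general factor far more closely than any single concept factor.
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_domains, items_per_domain = 2000, 4, 5

# Orthogonal latent factors: one general, four concept-specific (assumed).
general = rng.normal(size=n_patients)
specific = rng.normal(size=(n_patients, n_domains))

# Assumed loadings: general loadings larger than specific ones.
a_general, a_specific, noise_sd = 1.0, 0.6, 1.0

items = []
for d in range(n_domains):
    for _ in range(items_per_domain):
        items.append(a_general * general
                     + a_specific * specific[:, d]
                     + rng.normal(scale=noise_sd, size=n_patients))
sum_score = np.sum(items, axis=0)

print("corr(sum score, general factor):      ", np.corrcoef(sum_score, general)[0, 1])
print("corr(sum score, first specific factor):", np.corrcoef(sum_score, specific[:, 0])[0, 1])
```

Under these assumed values, the sum score correlates around 0.9 with the general factor but only weakly with any one specific factor, illustrating why sum scores can be dominated by a general appraisal tendency.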

Conclusions

Four widely established PROs can provide distinct information independent of the overlap across measures. The findings may inform the use and further development of PROs in the evaluation of treatments for psychosis.

Type
Original Articles
Copyright
Copyright © Cambridge University Press 2010

