
Development of the RMT20, a composite screener to identify common mental disorders

Published online by Cambridge University Press:  18 May 2020

Philip J. Batterham*
Affiliation:
Centre for Mental Health Research, Research School of Population Health, The Australian National University, Australia
Matthew Sunderland
Affiliation:
Matilda Centre for Research in Mental Health and Substance Use, Sydney Medical School, Faculty of Medicine and Health, The University of Sydney, Australia
Natacha Carragher
Affiliation:
World Health Organization, Switzerland; and Office of Medical Education, UNSW Sydney, Australia
Alison L. Calear
Affiliation:
Centre for Mental Health Research, Research School of Population Health, The Australian National University, Australia
Correspondence: Phil Batterham. Email: [email protected]

Abstract

Background

There are few very brief measures that accurately identify multiple common mental disorders.

Aims

The aim of this study was to develop and assess the psychometric properties of a new composite measure to screen for five common mental disorders.

Method

Two cross-sectional psychometric surveys were used to develop (n = 3175) and validate (n = 3620) the new measure, the Rapid Measurement Toolkit-20 (RMT20), against diagnostic criteria. The RMT20 was tested against a DSM-5 clinical checklist for major depression, generalised anxiety disorder, panic disorder, social anxiety disorder and post-traumatic stress disorder, and compared with two measures of general psychological distress: the Kessler-10 and the Distress Questionnaire-5.

Results

The area under the curve for the RMT20 was significantly greater than for the distress measures, ranging from 0.86 to 0.92 across the five disorders. Sensitivity and specificity at prescribed cut-points were excellent, with sensitivity ranging from 0.85 to 0.93 and specificity ranging from 0.73 to 0.83 across the five disorders.

Conclusions

The RMT20 outperformed two established scales assessing general psychological distress, is free to use and has low respondent burden. The measure is well-suited to clinical screening, internet-based screening and large-scale epidemiological surveys.

This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © The Author(s), 2020. Published by Cambridge University Press on behalf of the Royal College of Psychiatrists

Existing screening tools for common mental disorders

Mental disorders are common in the community but often go unrecognised, contributing to low rates of treatment.1,2 Using screening measures to identify individuals who might be experiencing a mental health problem may lead to better uptake of evidence-based treatments.3 Accurate screening measures may also be used to identify individuals who require more comprehensive assessment,4,5 to guide the tailoring of treatment in internet-based and face-to-face interventions (for example Batterham et al6 and Kiropoulos et al7), or to assist individuals in the community to recognise whether they are likely to be experiencing a specific mental disorder.8 However, there are few brief measures that can accurately screen for a range of common mental health problems.8,9

There are two broad approaches to screening for mental health problems: assessing general psychological distress and assessing the presence of specific common mental disorders. Screeners that assess general psychological distress capture internalising symptoms (i.e. symptoms of mood or anxiety disorders) at a transdiagnostic level.10,11 Psychological distress measures, such as the Distress Questionnaire-5 (DQ5) and the Kessler-6/Kessler-10 (K6/K10),12,13 are accurate for identifying a range of mental disorders. However, this approach to screening does not provide direction as to which disorder is most likely to be present, may miss specific manifestations of distress, and may not cover symptoms of disorders other than major depressive disorder (MDD) or generalised anxiety disorder (GAD).10,12

There have been attempts to generate brief composite screeners to identify a broader spectrum of mental disorders in online, community or clinical settings.8,9 However, finding an appropriate balance between the brevity of the screener and its accuracy has proven challenging. Screening measures typically aim to maximise sensitivity (low rates of false negatives), so as not to miss individuals who may meet clinical criteria for a disorder, while retaining adequate specificity.14 Multidisorder composite screeners, typically consisting of between two and five items per disorder, have performed with varying degrees of success in identifying common mental disorders in the community.8

Use of item banks to assess mental disorders

Our team has recently developed item banks to assess a range of specific mental disorders, labelled the Rapid Measurement Toolkit (RMT) item banks.15 The RMT item banks complement the existing Patient-Reported Outcomes Measurement Information System (PROMIS) emotional health measures that assess depression and anxiety.16 The RMT and PROMIS item banks were developed through extensive multistage processes that included evaluation of items with experts and people with lived experience of mental illness, and validation in population-based samples using item response theory methods.15,17 Items within these banks have been shown to be invariant across age, gender and education groups. Items are also free of local dependence, that is, they are not correlated after accounting for variance in the latent construct, which makes them appropriate candidates for use in unidimensional measurement tools.15

By combining items from the RMT and PROMIS item banks, it may be possible to identify a parsimonious set of items that enable accurate and efficient screening for common mental disorders. However, no existing studies have assessed whether items from these dimensional measures also provide accurate screening to identify individuals who meet clinical criteria for a disorder.

Aims

The aim of the present study was to develop a composite screener for common mental disorders and, in two population-based samples, assess its psychometric properties in identifying clinical ‘caseness’ for five common mental disorders: MDD, GAD, social anxiety disorder (SAD), panic disorder and post-traumatic stress disorder (PTSD). Two measures of general psychological distress, the DQ5 and K10, were used as comparators for identifying individuals who met criteria for each disorder, on the basis of area under the receiver operating characteristic curve (AUC). The new multidisorder screener, called the Rapid Measurement Toolkit-20 (RMT20), was designed to have high sensitivity with acceptable specificity in assessing the presence of five mental disorders. The screener was also designed using items that have been shown to be precise in assessing severity of symptoms for each of the five included disorders.15,16 Screeners that can also provide an indication of severity to the clinician may have higher utility than those that only assess the presence or absence of a disorder.18,19

Method

Participants and procedures

Two independent samples were recruited using virtually identical methods, separated by 13 months. These samples are referred to as the ‘development’ sample (n = 3175) and the ‘validation’ sample (n = 3620): the screener was developed using data from the development sample15 and then validated using data from the validation sample.20 All participants were recruited using advertisements on the online social media platform Facebook, with the development sample recruited between August and December 2014 and the validation sample between January and February 2016. Advertisements linked directly to the survey and targeted Australian adults aged 18 years or older.

The content of the advertisement was designed to attract oversampling of people with symptoms of a mental disorder, emphasising that the research was on the topic of mental health. The survey was implemented online using Qualtrics survey software. From 39 945 individuals who clicked on the advertisement for the initial (development) survey, 10 082 (25%) consented and commenced the survey and 5011 (50%) completed the full survey, with 3175 allocated to complete all of the scales included in the present study (a short form of the survey was administered to the remaining 1836 participants). For the second (validation) survey, 5379 individuals clicked on the advertisement, 5220 (97%) consented and commenced the survey and 3620 (69%) of these completed the full survey. There were no missing data as responses were required for all questions except age and gender, with participants encouraged to discontinue if they were uncomfortable with the survey.

Written informed consent was obtained from all participants. Participants were given details of local and national mental health resources, along with the contact details of the research team, to facilitate access to mental health support if required.

The development survey included pools of items to assess SAD, panic disorder, PTSD, obsessive–compulsive disorder (OCD), adult attention-deficit hyperactivity disorder (ADHD) and substance use disorder, described previously.15,17 In addition, the PROMIS-depression, PROMIS-anxiety and PROMIS-alcohol use item banks16,21 were administered, but only in the development sample as these measures are already established. A number of other existing scales related to mental health and suicide prevention were also included, but are not the focus of the present study.

Each survey took approximately 40–60 min to complete. Participants in the development survey were offered the opportunity to enter into a draw for one of four iPad Minis; no incentive was provided to participants in the validation survey. The authors assert that all procedures contributing to this work comply with the ethical standards of the relevant national and institutional committees on human experimentation and with the Helsinki Declaration of 1975, as revised in 2008. All procedures involving human patients were approved by the ANU Human Research Ethics Committee (protocols: #2013/509 and #2015/717).

Measures

Item banks

The item banks used to assess MDD and GAD were the PROMIS-depression and PROMIS-anxiety item banks, whereas the item banks for SAD, panic disorder and PTSD were the respective RMT item banks. All items used a first-person perspective. PROMIS item banks are rated over the past 7 days, whereas RMT item banks cover the past 30 days. Responses to all items are given on a 5-point frequency Likert scale: never (1), rarely (2), sometimes (3), often (4), always (5). The complete item banks have previously been published.16,22 The item banks were designed through systematic item selection and refinement processes, resulting in unidimensional, accurate measures of specific mental disorders.15,16,23 The authors of the current study developed the RMT item banks but were not involved in the development of the PROMIS item banks.

Clinical diagnoses

Clinical diagnoses were made using the DSM-5 symptom checklist, developed by the authors as a self-report assessment for clinical diagnosis based on DSM-5 criteria.22,23 The checklist queried respondents about the presence or absence of symptoms based on DSM-5 definitions for each disorder of interest. Eight items assessed SAD; 21 for panic disorder; 14 for GAD; 15 for MDD (including items to exclude hypomania); 22 for PTSD; 14 for OCD; and 21 for ADHD. Each item reflected a single DSM-5 criterion for the disorder of interest, although some criteria were probed across multiple questions and additional items were used to exclude alternative diagnoses.

Example items included: ‘In the past six months, did social situations nearly always make you feel frightened or anxious?’, ‘During the past six months, has your behaviour or difficulty in paying attention caused problems at home, work, school, or socially?’ and ‘In the past month, has there been a time when you unexpectedly felt intense fear or discomfort?’ The checklist was designed along similar principles to the electronic version of the Mini International Neuropsychiatric Interview (MINI)24 in terms of structure (binary and categorical self-report items with conditional skip logic) and response burden. However, the checklist used in the current study was developed independently of the MINI, is non-proprietary and is based on DSM-5 rather than DSM-IV criteria. The full checklist has been published previously22 and is available from the authors.
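To make the idea of conditional skip logic concrete, the sketch below encodes a hypothetical two-item fragment of a panic-disorder module. The gate item uses the example wording quoted above, but the follow-up item, the data structure and the branching rule are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    """A binary self-report checklist question (illustrative only)."""
    text: str
    # If True and the respondent answers 'no', the rest of the module is
    # skipped: a simple stand-in for conditional skip logic.
    gate: bool = False

# Hypothetical panic-disorder fragment; the second item is invented for
# illustration and does not appear in the published checklist.
panic_module = [
    ChecklistItem(
        "In the past month, has there been a time when you unexpectedly "
        "felt intense fear or discomfort?",
        gate=True,
    ),
    ChecklistItem(
        "During that episode, did the fear come on suddenly and peak within minutes?"
    ),
]

def administer(module, answers):
    """Return (question, answer) pairs, honouring gate items' skip logic."""
    record = []
    for item, answer in zip(module, answers):
        record.append((item.text, answer))
        if item.gate and not answer:
            break  # skip the remaining items in the module
    return record

# A respondent who answers 'no' to the gate item skips the follow-up question.
print(administer(panic_module, [False, True]))
```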

Comparator scales

Comparator scales to test the relative precision of the new screener were the DQ5 and K10.12,13 Both are unidimensional measures of general psychological distress that accurately identify individuals who are likely to meet clinical criteria for a range of common internalising disorders.10,13 The DQ5 (α = 0.91) and K10 (α = 0.94) both had excellent internal consistency in the validation sample.

Demographic factors

Demographic factors were collected to describe the participants and were based on self-reported measures of age group, gender (male, female, other), educational attainment, employment status, location (metropolitan, regional, rural) and language spoken at home.

Analysis

From the five item banks (PROMIS-depression, PROMIS-anxiety, RMT social anxiety, RMT panic, RMT PTSD), four items for each disorder were chosen on the basis of their accuracy in assessing clinical criteria for the disorder of interest. These 20 items formed the RMT20 screener. We initially tested screeners with 3–6 items for each disorder but found that adding items beyond four typically did not substantially improve the sensitivity and specificity of the screener within the development sample. The items were chosen to provide coverage across the spectrum of severity measured by the full item banks.25 Specifically, items were selected on the basis of item response theory (IRT) discrimination and difficulty parameters, as well as item information curves, when measuring a single unidimensional construct representing panic disorder, SAD or PTSD.25 This approach ensured that the screeners were accurate across the continuum of the latent construct.
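As a rough illustration of this kind of information-based selection, the following minimal sketch computes Samejima's item information for graded-response-model items across a latent-severity grid and retains the four most informative items. The discrimination and threshold values, item names and selection rule are illustrative placeholders under a graded response model, not the estimated parameters or the exact procedure used for the RMT and PROMIS banks.

```python
import numpy as np

def grm_item_information(theta, a, b):
    """Samejima item information for a graded-response-model item.

    theta : array of latent trait values
    a     : discrimination parameter
    b     : ordered category thresholds (K-1 values for K response categories)
    """
    theta = np.asarray(theta, dtype=float)
    # Cumulative probabilities P*_k of responding in category k or above,
    # bounded by P*_0 = 1 and P*_K = 0.
    p_star = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - np.asarray(b)[None, :])))
    p_star = np.hstack([np.ones((len(theta), 1)), p_star, np.zeros((len(theta), 1))])
    # Category response probabilities.
    p_cat = p_star[:, :-1] - p_star[:, 1:]
    # Samejima's information formula for polytomous items.
    w = p_star * (1.0 - p_star)
    return a ** 2 * np.sum((w[:, :-1] - w[:, 1:]) ** 2
                           / np.clip(p_cat, 1e-12, None), axis=1)

# Illustrative (not estimated) parameters for a hypothetical 6-item bank.
items = {
    "item_1": (2.1, [-0.5, 0.3, 1.0, 1.8]),
    "item_2": (1.4, [-1.2, -0.2, 0.7, 1.5]),
    "item_3": (2.6, [0.0, 0.8, 1.5, 2.2]),
    "item_4": (1.9, [-0.8, 0.1, 0.9, 1.7]),
    "item_5": (1.1, [-1.5, -0.4, 0.5, 1.4]),
    "item_6": (2.3, [-0.2, 0.6, 1.3, 2.0]),
}

theta_grid = np.linspace(-3, 3, 121)
total_info = {name: grm_item_information(theta_grid, a, b).sum()
              for name, (a, b) in items.items()}
# Keep the four items carrying the most information across the severity range.
selected = sorted(total_info, key=total_info.get, reverse=True)[:4]
print(selected)
```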

The PROMIS screeners were only administered within the development sample, as they are established item banks, whereas the screeners derived from the RMT measures were also tested in the validation sample. The AUC was used as the indicator of the precision of each subscreener, and AUCs for the new subscreeners were compared with those for the DQ5 and K10 for each of the five disorders.

Cut-points were defined on the basis of Youden indices, with a view to maximising sensitivity when there were comparable choices. Sensitivity and specificity (with 95% CI) were estimated at each cut-point and compared with the sensitivity and specificity of the DQ5 and K10 at their prescribed cut-points.12
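The sketch below shows one way a Youden-based cut-point and the associated sensitivity and specificity (with approximate normal 95% confidence intervals) can be obtained for a subscreener. The scores and caseness labels are simulated for illustration, the CI method is a simple normal approximation, and the AUC-comparison test reported in the paper is not reproduced here.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)

# Simulated data: a 4-item subscreener total score (range 4-20) and a binary
# diagnostic-checklist outcome; these are not the study data.
n = 2000
caseness = rng.binomial(1, 0.25, size=n)
score = np.clip(np.round(rng.normal(9 + 4 * caseness, 3)), 4, 20)

auc = roc_auc_score(caseness, score)

# Youden index J = sensitivity + specificity - 1, maximised over cut-points.
fpr, tpr, thresholds = roc_curve(caseness, score)
best = np.argmax(tpr - fpr)
cut = thresholds[best]

# Sensitivity and specificity at the chosen cut-point.
pred_pos = score >= cut
tp = np.sum(pred_pos & (caseness == 1))
fn = np.sum(~pred_pos & (caseness == 1))
tn = np.sum(~pred_pos & (caseness == 0))
fp = np.sum(pred_pos & (caseness == 0))

def proportion_ci(k, n_total):
    """Point estimate with an approximate normal 95% confidence interval."""
    p = k / n_total
    se = np.sqrt(p * (1 - p) / n_total)
    return p, p - 1.96 * se, p + 1.96 * se

sens = proportion_ci(tp, tp + fn)
spec = proportion_ci(tn, tn + fp)
print(f"AUC = {auc:.2f}, cut-point >= {cut:.0f}")
print("sensitivity %.2f (%.2f-%.2f), specificity %.2f (%.2f-%.2f)" % (sens + spec))
```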

Results

The characteristics of the two samples are provided in Table 1. There were significant differences in all variables except language, GAD caseness, panic disorder caseness, PTSD caseness and K10 score. Participants in the validation sample appeared to be slightly younger and better educated, had higher rates of employment, and a greater proportion resided in urban areas. In addition, the validation sample had a lower prevalence of depression and SAD and less severe distress than the development sample. Nevertheless, the differences were typically small (for example Cohen's d = 0.12 for the DQ5), suggesting that the statistical significance of these comparisons may reflect sample size rather than clinically meaningful differences.

Table 1 Characteristics of the development and validation samples

MDD, major depressive disorder; GAD, generalised anxiety disorder; SAD, social anxiety disorder; PTSD, post-traumatic stress disorder.

Bold values indicate P < 0.05.

The eight PROMIS items and 12 RMT items selected for the final composite screener (RMT20) are provided in Table 2, including mean (s.d.) for individuals with and without the specific disorder of interest. It should be noted that these means are likely higher than would be seen in the general population, so should not be considered normative data. All items significantly differentiated those with and without the disorder of interest.

Table 2 Mean (s.d.) and 95% CI for items included in the composite screener, based on presence or absence of the disorder of interest

MDD, major depressive disorder; GAD, generalised anxiety disorder; SAD, social anxiety disorder; PTSD, post-traumatic stress disorder.

Table 3 details the precision of the subscreeners in identifying clinical caseness for the five mental disorders based on AUC, with comparison with the two measures of psychological distress: DQ5 and K10. The table indicates that the disorder-specific subscreeners from the RMT20 were significantly more precise in screening for all disorders of interest, except in the case of GAD where the difference between the RMT20 and DQ5 was not significant. For SAD and panic disorder in particular, the RMT20 had considerably greater precision than both the DQ5 and K10, with an increase of up to 9% in AUC.

Table 3 Comparison of area under the curve for the new 4-item subscreeners and existing measures of general psychological distress^a

(D), data from the development sample (n = 3175); (V), data from the validation sample (n = 3620); PTSD, post-traumatic stress disorder; AUC, area under the receiver operating characteristic curve; K10, Kessler-10; DQ5, Distress Questionnaire-5.

a. χ2 tests have one degree of freedom; bold values indicate P < 0.05.

Table 4 shows the performance of the subscreeners at the identified cut-points. All screeners had high sensitivity, approximately 85% or greater, and specificity above 70%, indicating their accuracy across independent samples. The subscreeners also had high internal consistency, at or above 0.9. Performance of the RMT20 was stronger overall than that of the distress screeners, with similar or higher sensitivity and specificity at the prescribed cut-points.

Table 4 Performance of the subscreeners at selected cut-points

(D), based on data from the development sample (n = 3175); (V), based on data from the validation sample (n = 3620); PTSD, post-traumatic stress disorder.

Discussion

Main findings

The psychometric properties of the RMT20 composite screening measure suggest it provides an accurate and rapid method to identify individuals who meet clinical criteria for specific common mental disorders within the general population. The RMT20 screener outperformed two established measures of general psychological distress across all disorders, suggesting the composite screener approach may be more effective and efficient when there is a need to identify the presence of specific mental disorders. The gains in precision were most evident for social anxiety, panic disorder and PTSD, which are not typically captured as well as depression and generalised anxiety by measures of general psychological distress.10 The RMT20 was designed for online use but may also have relevance in a range of clinical settings where identifying specific forms of psychopathology would be beneficial.

Measures of general psychological distress remain useful for identifying individuals who are likely to meet clinical criteria for one or more mental disorders. Indeed, measures such as the DQ5 have been shown to be highly accurate and efficient in screening for a range of mental disorders using only five items. However, distress measures cannot differentiate the specific disorder that an individual is most likely to be experiencing. The 20-item composite screener presented here provides a compromise between distress measures and lengthy batteries of mental health measures. For example, using common existing measures to assess the five disorders included in the composite screener would require at least 34 items from five scales, each with different stems and response frames. The RMT20 also allows flexibility in the scope of screening, enabling the subscreeners to be administered as needed.

Administration of the RMT20

The RMT20 performed well relative to other brief composite screeners tested previously,8,9 which typically perform better for some disorders (for example MDD) than others (for example GAD, PTSD). The RMT20 also performed similarly to computer adaptive tests, which typically require 4–15 items per domain to deliver a similar level of accuracy.26–28 Computer adaptive tests, although similarly efficient to brief composite screeners, require considerable infrastructure to administer and do not appear to offer markedly greater precision for identifying specific mental disorders in the community.25 In contrast, the RMT20 is easy to administer in online or paper-and-pencil settings, with minimal loss of diagnostic accuracy.

Potential applications

While our main focus in developing this composite screener was on tailoring internet interventions to prevent and treat mental disorders in the community,6 the potential applications are extensive. Poor recognition of mental disorders in the community29 could be improved by providing screening tools to the public, in conjunction with feedback to support appropriate help-seeking. Screening programmes within general practice, hospital settings and community-based mental health services often require brief and accurate indicators for a range of specific mental health problems. Because of time constraints and patient burden, healthcare settings typically screen using psychological distress measures that may be suboptimal, or focus only on depression and/or generalised anxiety. The current findings suggest that distress screeners may not be as accurate as a composite screener in identifying particular common mental disorders such as panic disorder and PTSD. The present composite screener may provide an alternative approach to screening that is more accurate and can be administered in 1–2 min. Although the current population sample reported high levels of psychopathology, further investigation of the performance of the RMT20 in clinical settings is warranted.

Strengths and limitations

This study used separate development and validation data-sets to establish the psychometric properties of the new composite screener. Both data-sets consisted of large samples of adults recruited from the community, with oversampling of people with mental health problems. However, there are some limitations of the present research. First, the RMT20 did not include externalising disorders or less common internalising disorders, although there is scope to add modules for these domains in future. The sample was representative neither of the general population nor of a treatment-seeking clinical sample, so further data may be needed to provide population norms. The clinical outcome measure was a self-report checklist, similar to the approach used in other population-based studies. Such checklists provide an indication of probable disorder only, so further evaluation of the RMT20 against clinician diagnosis would be valuable. Furthermore, the RMT20 was developed on the basis of accuracy in assessing DSM-5 disorders, so some degree of circularity may exist in its definitions, excluding broader manifestations of these disorders. Although psychological distress scales are widely used for screening, a stronger future comparison for the RMT20 would be against a battery of more traditional disorder-specific screening measures (see for example Zuromski et al,30 Kroenke et al31 and Batterham et al32,33). The PROMIS measures were only included in the development sample, as they are established measures, precluding us from assessing their psychometric properties across independent data-sets.

Finally, the method for selecting items was designed to maximise the accuracy of the screener across the dimensions of each disorder, rather than using methods that maximise classification, such as decision trees. The resulting screeners can therefore serve a dual function, assessing both the severity and the presence of disorder. However, alternative methods for identifying subsets of items may have provided greater accuracy in capturing diagnostic criteria.

Implications

The RMT20 is a composite measure that is accurate in screening for five mental disorders in the community. The measure outperforms two established scales assessing general psychological distress, is free to use and is associated with low respondent burden, which makes it well-suited to busy clinical settings, internet-based screening and large-scale epidemiological surveys.

Data availability

All authors had full access to the study data. P.J.B. is the data custodian for the study and led the analyses. Data are available on request from P.J.B.

Author contributions

P.J.B. drafted the manuscript and all authors critically revised the paper; all authors contributed to the design of the study, data collection and interpretation; all authors approved the final version of the manuscript and agree to be accountable for all aspects of the work.

Funding

This research was supported by NHMRC Project Grant 1043952. P.J.B. and A.L.C. are supported by NHMRC Fellowships 1158707 and 1122544, respectively.

Declaration of interest

None.

ICMJE forms are in the supplementary material, available online at https://doi.org/10.1192/bjo.2020.37.

References

1 Wang PS, Berglund P, Olfson M, Pincus HA, Wells KB, Kessler RC. Failure and delay in initial treatment contact after first onset of mental disorders in the National Comorbidity Survey Replication. Arch Gen Psychiatry 2005; 62: 603–13.
2 Wittchen HU, Kessler RC, Beesdo K, Krause P, Hofler M, Hoyer J. Generalized anxiety and depression in primary care: prevalence, recognition, and management. J Clin Psychiatry 2002; 63 (Suppl 8): 24–34.
3 Hirschfeld RM. The comorbidity of major depression and anxiety disorders: recognition and management in primary care. Prim Care Companion J Clin Psychiatry 2001; 3: 244–54.
4 Sen B, Wilkinson G, Mari JJ. Psychiatric morbidity in primary health care. A two-stage screening procedure in developing countries: choice of instruments and cost-effectiveness. Br J Psychiatry 1987; 151: 33–8.
5 Scott MA, Wilcox HC, Schonfeld IS, Davies M, Hicks RC, Turner JB, et al. School-based screening to identify at-risk students not already known to school professionals: the Columbia suicide screen. Am J Public Health 2009; 99: 334–9.
6 Batterham PJ, Calear AL, Farrer L, McCallum SM, Cheng VWS. FitMindKit: randomised controlled trial of an automatically tailored online program for mood, anxiety, substance use and suicidality. Internet Interv 2018; 12: 91–9.
7 Kiropoulos LA, Kilpatrick T, Holmes A, Threader J. A pilot randomized controlled trial of a tailored cognitive behavioural therapy based intervention for depressive symptoms in those newly diagnosed with multiple sclerosis. BMC Psychiatry 2016; 16: 435.
8 Donker T, van Straten A, Marks I, Cuijpers P. A brief web-based screening questionnaire for common mental disorders: development and validation. J Med Internet Res 2009; 11: e19.
9 Farvolden P, McBride C, Bagby RM, Ravitz P. A web-based screening instrument for depression and anxiety disorders in primary care. J Med Internet Res 2003; 5: e23.
10 Batterham PJ, Sunderland M, Slade T, Calear AL, Carragher N. Assessing distress in the community: psychometric properties and crosswalk comparison of eight measures of psychological distress. Psychol Med 2018; 48: 1316–24.
11 Sunderland M, Batterham P, Carragher N, Calear A, Slade T. Developing and validating a computerized adaptive test to measure broad and specific factors of internalizing in a community sample. Assessment 2019; 26: 1030–45.
12 Batterham PJ, Sunderland M, Carragher N, Calear AL, Mackinnon AJ, Slade T. The Distress Questionnaire-5: population screener for psychological distress was more accurate than the K6/K10. J Clin Epidemiol 2016; 71: 35–42.
13 Kessler RC, Andrews G, Colpe LJ, Hiripi E, Mroczek DK, Normand SL, et al. Short screening scales to monitor population prevalences and trends in non-specific psychological distress. Psychol Med 2002; 32: 959–76.
14 Zarin DA, Earls F. Diagnostic decision making in psychiatry. Am J Psychiatry 1993; 150: 197–206.
15 Batterham PJ, Sunderland M, Carragher N, Calear AL. Development and community-based validation of eight item banks to assess mental health. Psychiatry Res 2016; 243: 453–62.
16 Pilkonis PA, Choi SW, Reise SP, Stover AM, Riley WT, Cella D. Item banks for measuring emotional distress from the Patient-Reported Outcomes Measurement Information System (PROMIS): depression, anxiety, and anger. Assessment 2011; 18: 263–83.
17 Batterham PJ, Brewer JL, Tjhin A, Sunderland M, Carragher N, Calear AL. Systematic item selection process applied to developing item pools for assessing multiple mental health problems. J Clin Epidemiol 2015; 68: 913–9.
18 Kessler RC. The categorical versus dimensional assessment controversy in the sociology of mental illness. J Health Soc Behav 2002; 43: 171–88.
19 Andrews G, Anderson TM, Slade T, Sunderland M. Classification of anxiety and depressive disorders: problems and solutions. Depress Anxiety 2008; 25: 274–81.
20 Batterham PJ, Calear AL, Carragher N, Sunderland M. Prevalence and predictors of distress associated with completion of an online survey assessing mental health and suicidality in the community. Psychiatry Res 2018; 262: 348–50.
21 Pilkonis PA, Yu L, Dodds NE, Johnston KL, Lawrence SM, Daley DC. Validation of the alcohol use item banks from the Patient-Reported Outcomes Measurement Information System (PROMIS). Drug Alcohol Depend 2016; 161: 316–22.
22 Batterham PJ, Calear AL, Christensen H, Carragher N, Sunderland M. Independent effects of mental disorders on suicidal behavior in the community. Suicide Life Threat Behav 2018; 48: 512–21.
23 Batterham PJ, Sunderland M, Carragher N, Calear AL. Psychometric properties of 7- and 30-day versions of the PROMIS emotional distress item banks in an Australian adult sample. Assessment 2019; 26: 249–59.
24 Zbozinek TD, Rose RD, Wolitzky-Taylor KB, Sherbourne C, Sullivan G, Stein MB, et al. Diagnostic overlap of generalized anxiety disorder and major depressive disorder in a primary care sample. Depress Anxiety 2012; 29: 1065–71.
25 Sunderland M, Batterham PJ, Calear AL, Carragher N. The development and validation of static and adaptive screeners to measure the severity of panic disorder, social anxiety disorder, and obsessive compulsive disorder. Int J Methods Psychiatr Res 2017; 26: e1561.
26 Graham AK, Minc A, Staab E, Beiser DG, Gibbons RD, Laiteerapong N. Validation of the computerized adaptive test for mental health in primary care. Ann Fam Med 2019; 17: 23–30.
27 Gibbons RD, Weiss DJ, Pilkonis PA, Frank E, Moore T, Kim JB, et al. Development of a computerized adaptive test for depression. Arch Gen Psychiatry 2012; 69: 1104–12.
28 Gibbons RD, Weiss DJ, Pilkonis PA, Frank E, Moore T, Kim JB, et al. Development of the CAT-ANX: a computerized adaptive test for anxiety. Am J Psychiatry 2014; 171: 187–94.
29 Reavley NJ, Jorm AF. Recognition of mental disorders and beliefs about treatment and outcome: findings from an Australian national survey of mental health literacy and stigma. Aust N Z J Psychiatry 2011; 45: 947–56.
30 Zuromski KL, Ustun B, Hwang I, Keane TM, Marx BP, Stein MB, et al. Developing an optimal short-form of the PTSD Checklist for DSM-5 (PCL-5). Depress Anxiety 2019; 36: 790–800.
31 Kroenke K, Spitzer RL, Williams JB, Lowe B. An ultra-brief screening scale for anxiety and depression: the PHQ-4. Psychosomatics 2009; 50: 613–21.
32 Batterham PJ, Mackinnon AJ, Christensen H. Community-based validation of the Social Phobia Screener (SOPHS). Assessment 2017; 24: 958–69.
33 Batterham PJ, Mackinnon AJ, Christensen H. The Panic Disorder Screener (PADIS): development of an accurate and brief population screening tool. Psychiatry Res 2015; 228: 72–6.