
Assessing the readability of the self-reported Strengths and Difficulties Questionnaire

Published online by Cambridge University Press:  22 February 2018

Praveetha Patalay*
Affiliation:
University of Liverpool, UK, Evidence Based Practice Unit, UCL, London, UK, and Anna Freud National Centre for Children and Families, London, UK
Daniel Hayes
Affiliation:
Evidence Based Practice Unit, UCL, London, UK and Anna Freud National Centre for Children and Families, London, UK
Miranda Wolpert
Affiliation:
Evidence Based Practice Unit, UCL, London, UK, Anna Freud National Centre for Children and Families, London, UK, and Child Outcomes Research Consortium, London, UK
Correspondence: Praveetha Patalay, University of Liverpool, Eleanor Rathbone Building, Bedford Street South, Liverpool, L69 7ZA, UK. Email: [email protected]

Abstract

The Strengths and Difficulties Questionnaire (SDQ) is one of the most widely used measures in child and adolescent mental health in clinical practice, community-based screening and research. Assessing the readability of such questionnaires is important as young people may not comprehend items above their reading ability when self-reporting. Analyses of readability in the present study indicate that the self-report SDQ might not be suitable for young people with a reading age below 13–14 years and highlight differences in readability between subscales. The findings suggest a need for caution in using the SDQ as a self-report measure for children below the age of 13, and highlight considerations of readability in measure development, selection and interpretation.

Declaration of interest

None.

Type
Short report
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is unaltered and is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use or in order to create a derivative work.
Copyright
Copyright © The Royal College of Psychiatrists 2018

Self-reported perspectives on mental health symptoms are being used more widely in screening, research and practice in child and adolescent mental health.[1-3] Measures have been developed and tested for use by young people to report on their mental health symptoms, and several such measures are well validated in terms of psychometric properties.[4] Measure development has become increasingly sophisticated, and a range of psychometric and other properties of questionnaires are assessed before they are deemed fit for purpose.[5,6] However, while reviewing the criteria used to assess measure suitability,[6] we noted one key criterion that is specifically relevant to child measures but is routinely overlooked: the readability of the items in the scale. Readability has been defined in many ways, but in simple terms it refers to the ease with which a reader can read and understand text.[7,8] Concern about the readability of psychological questionnaires is not new: some adult psychopathology questionnaires[9,10] and some parent- and child-reported measures of child psychopathology[11] have been assessed for their readability. However, the latter investigation did not include one of the most widely used child mental health measures in the UK, the Strengths and Difficulties Questionnaire (SDQ).
The self-report SDQ is used extensively in the UK in research, community settings and clinical practice[12,13] and is available to complete in different modes.[14] The developers recommend this self-report measure for young people aged 11-17 years.[15] However, it has since been psychometrically validated for use with younger children, including from 8 years[16] and as young as 6 years.[17] In this short paper, we investigate the reading-age suitability of the self-report SDQ.

Method

Strengths and Difficulties Questionnaire

The SDQ is a 25-item questionnaire that comprises five five-item subscales: emotional symptoms, conduct problems, hyperactivity, peer problems and prosocial behaviour.[18] Participants respond to each item by selecting one of three responses: not true, somewhat true and certainly true.

Readability

In the current study, we used four standard methods for examining the readability of text.

Flesch–Kincaid reading grade (FK)

This method,[19] adapted from the Flesch Reading Ease score, is one of the oldest and most widely used readability indices and is based on the average number of syllables per word and the average sentence length. Scores are expressed as a US grade level.

$${\rm FK} = \left( {0.39 \times {\rm ASL}} \right) + \left( {11.8 \times {\rm ASW}} \right)- 15.59$$

where ASL = average words per sentence; ASW = average syllables per word
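As a rough illustration, the FK grade can be computed from plain text. This sketch uses a naive vowel-group syllable counter, which is an approximation of our own (the formula itself assumes accurate syllable counts):

```python
import re

def count_syllables(word: str) -> int:
    # Approximate syllables as runs of consecutive vowels (minimum 1).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid(text: str) -> float:
    # FK = (0.39 * ASL) + (11.8 * ASW) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    asl = len(words) / len(sentences)                          # average words per sentence
    asw = sum(count_syllables(w) for w in words) / len(words)  # average syllables per word
    return 0.39 * asl + 11.8 * asw - 15.59
```

Applied to a short SDQ-style sentence such as "I get very angry and often lose my temper.", this sketch yields a grade of roughly 6 (about age 12), though the naive syllable counter can deviate from hand counts on some words.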

Gunning Fog Index (GFI)

The GFI corresponds to the number of years of formal education required to understand a text[20] and uses the numbers of words, sentences and complex words, the latter defined as words with three or more syllables.

$${\rm GFI} = 0.4 \times \left[ {({\rm A/N}) + (100{\rm L/A})} \right]$$

where A = number of words; N = number of sentences; L = number of words with three or more syllables (excluding -ing and -ed endings)
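A minimal sketch of the GFI, using the same naive vowel-group syllable approximation as above and applying the -ing/-ed exclusion by stripping those endings before counting:

```python
import re

def gunning_fog(text: str) -> float:
    # GFI = 0.4 * [(A/N) + 100 * (L/A)]
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)

    def syllables(w: str) -> int:
        # Approximate syllables as runs of consecutive vowels (minimum 1).
        return max(1, len(re.findall(r"[aeiouy]+", w.lower())))

    def is_complex(w: str) -> bool:
        # Three or more syllables, after stripping an -ing or -ed ending.
        w = w.lower()
        for suffix in ("ing", "ed"):
            if w.endswith(suffix):
                w = w[: -len(suffix)]
                break
        return syllables(w) >= 3

    complex_count = sum(is_complex(w) for w in words)
    return 0.4 * (len(words) / len(sentences) + 100 * complex_count / len(words))
```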

Coleman Liau Index (CLI)

The CLI[21] differs from the FK and GFI tests by focusing on the number of letters (rather than syllables) per word. It also yields a US grade-level score. The formula for estimating the CLI is:

$${\rm CLI} = (0.0588 \times {\rm L})- (0.296 \times {\rm S})- 15.8$$

where L = average letters/100 words; S = average sentences/100 words
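Because the CLI needs only letter, word and sentence counts (no syllable estimation), it can be computed exactly; a sketch:

```python
import re

def coleman_liau(text: str) -> float:
    # CLI = (0.0588 * L) - (0.296 * S) - 15.8
    # L = average letters per 100 words; S = average sentences per 100 words
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    letters = sum(len(w) for w in words)
    L = 100 * letters / len(words)
    S = 100 * len(sentences) / len(words)
    return 0.0588 * L - 0.296 * S - 15.8
```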

Dale–Chall Readability Formula (DC)

The DC differs from the previous three indices by incorporating the difficulty of the words in the text into its formula.[7,22] A list of words that 80% of fourth graders (children aged around 10) know is used as the basis for identifying words that can be considered difficult.

$${\rm DC} = 0.1579{\rm} ({\rm DW/A} \times 100) + 0.0496\;{\rm (ASL)} + 3.6365$$

where DW = difficult words (i.e. words not in the list), A = number of words, ASL = average words per sentence
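The DC additionally requires the easy-word list. In this sketch the list is a caller-supplied set standing in for the published Dale-Chall word list; the toy list in the usage example is a hypothetical stand-in for illustration only:

```python
import re

def dale_chall(text: str, easy_words: set) -> float:
    # DC = 0.1579 * (DW/A * 100) + 0.0496 * ASL + 3.6365
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = [w.lower() for w in re.findall(r"[A-Za-z'\-]+", text)]
    difficult = sum(w not in easy_words for w in words)   # DW: words not on the easy list
    asl = len(words) / len(sentences)                     # average words per sentence
    return 0.1579 * (100 * difficult / len(words)) + 0.0496 * asl + 3.6365

# Toy easy-word list (hypothetical); 'fidgeting' counts as difficult.
easy = {"i", "am", "often", "very"}
score = dale_chall("I am often fidgeting.", easy)
```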

These indices mainly estimate readability as a US grade-level score. Grade levels can be translated to age levels by adding 6 to the grade-level score (children in US grade 1 are aged 6-7 years), and this was done in this study. The four methods were chosen not only because they are well-established and widely used measures of readability, but also because they differ in how they estimate it, which can lead to varying readability estimates: the FK focuses on syllables per word, the GFI on words with three or more syllables, the CLI on the number of letters per word and the DC on the presence of difficult words.

Procedure

The readability formulae described above were applied to the instruction section and the items of the SDQ (to the full set of items in the measure and to each subscale separately), resulting in four readability estimates for each examined component (Table 1). The four estimates for each component were averaged to provide a single readability estimate.
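The grade-to-age conversion and averaging step can be sketched as follows (the grade values below are hypothetical placeholders, not the paper's results):

```python
# Hypothetical US grade-level estimates from the four indices for one component.
grades = {"FK": 5.2, "GFI": 6.1, "CLI": 5.8, "DC": 6.3}

# Convert each grade to an age (US grade 1 pupils are aged 6-7, so add 6),
# then average the four age estimates into a single readability age.
ages = {name: grade + 6 for name, grade in grades.items()}
mean_age = sum(ages.values()) / len(ages)
```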

Table 1 Readability estimates (in years) made using the four different approaches and the average estimate for the full measure, instructions and subscales of the Strengths and Difficulties Questionnaire

FK, Flesch–Kincaid reading grade; GFI, Gunning Fog Index; CLI, Coleman Liau Index; DC, Dale–Chall Readability Formula.

Results

For the full measure, age estimates for readability ranged from 10.94 to 12.74, with a mean estimate of 11.75 years (Table 1). Among the subscales, conduct problems had the lowest mean readability age (M = 10.46 years), followed by peer problems (M = 11.83), prosocial behaviour (M = 12.84) and hyperactivity (M = 13.56), with the emotional symptoms subscale (M = 13.85) having the highest average readability age. For the instructions, the average readability estimate was 13.41 years.

Discussion

The results indicate that while some of the SDQ subscales have a readability of around 11-12 years (peer problems and conduct problems), the instructions and some of the subscales (notably emotional symptoms and hyperactivity) have average readability estimates that are substantially higher (ranging up to 13.9 years). On the basis of these readability estimates, the SDQ would be considered unsuitable for 6- and 8-year-olds, despite psychometric validation studies at these ages.[16,17] Moreover, these findings suggest that it might be difficult to understand overall for 11-year-olds (the recommended starting age for this measure). This difficulty may be further compounded when young people have lower reading ages relative to their developmental age, which may be of particular relevance in clinical settings, given that many children with mental health difficulties also have learning difficulties and special educational needs.[23] It is important to note that unsuitably high readability is not unique to this self-report measure; for example, the Youth Self Report version of the Achenbach assessments of child mental health has a readability estimate of 12.5 years,[11] although, like the SDQ, it is meant to be suitable from age 11 years onwards.

We present the results from a range of readability indices to highlight the variation in the age estimates they provide. The inclusion of the DC is especially relevant here, as it is the only index to include word complexity in its estimate, an aspect that provides insight into possible difficulty in understanding the specific content of the items. Previous attempts to map the readability of psychopathology measures[9] have been criticised for not taking word complexity into account.[24] In this case, the hyperactivity and emotional symptoms subscales have eight and seven difficult words, respectively (e.g. squirming, fidgeting, down-hearted). This highlights the importance of also considering specific words and their suitability for the age group in question when designing questionnaires. These findings raise issues for the interpretation of self-report SDQ data derived from younger children. If children with a reading age below around 13 years who complete the SDQ do not comprehend items, key words or instructions, this may affect their responses and, subsequently, the derived scores used in analysis or to inform treatment.

Psychometric properties are not the only criteria that determine whether or not a measure is fit for purpose. There is a crucial prior step: assessing whether the target population can read and understand the items in a questionnaire comfortably. In this paper we describe and apply a non-resource-intensive approach to assessing the content of measures using four standardised measures of readability. Estimates of readability provide an overarching view of the complexity of the language used in a questionnaire and help identify words that might be difficult to understand. In addition, more intensive approaches to investigating measure comprehension, such as cognitive interviewing, can help illuminate how items are understood and interpreted by respondents.[25]

Advice in relation to measures and health-related materials for adults is that they should have a readability of around 12 years, although the average reading age of adults is around 14 years.[26] Extrapolating from this advice, we recommend that child self-report measures should aim for a readability age around 2 years below the measure's target minimum age.

The implications of these findings for the use of the SDQ include reconsidering the target age group for the questionnaire, developing accompanying support materials and explanations to aid completion, or developing alternative questionnaires with a lower readability age. More broadly, the results highlight that, alongside psychometric properties, a key consideration in selecting and reliably using any self-report measure with children (or adults) should be: can the target user understand it?

Ethical approval

Given that no data from human participants were used in this report, no ethical approvals were necessary.

References

1 Green H, McGinnity A, Meltzer H, Ford T, Goodman R. Mental Health of Children and Young People in Great Britain, 2004. Palgrave Macmillan, 2005.
2 Deighton J, Lereya ST, Morgan E, Breedvelt H, Martin K, Feltham A, et al. Measuring and Monitoring Children and Young People's Mental Wellbeing: A Toolkit for Schools and Colleges. Public Health England and the Evidence Based Practice Unit, 2016.
3 Department of Health, Department for Education, NHS England. Future in Mind: Promoting, Protecting and Improving Our Children and Young People's Mental Health and Wellbeing. Department of Health, 2015.
4 Deighton J, Croudace T, Fonagy P, Brown J, Patalay P, Wolpert M. Measuring mental health and wellbeing outcomes for children and adolescents to inform practice and policy: a review of child self-report measures. Child Adolesc Psychiatry Ment Health 2014; 8: 1.
5 Rust J, Golombok S. Modern Psychometrics: The Science of Psychological Assessment. 3rd ed. Routledge, 2009.
6 Scientific Advisory Committee of the Medical Outcomes Trust. Assessing health status and quality of life instruments: attributes and review criteria. Qual Life Res 2002; 11: 193–205.
7 Dale E, Chall JS. A formula for predicting readability. Educ Res Bull 1948; 27: 11–28.
8 Badgett BA. Toward the Development of a Model to Estimate the Readability of Credentialing-Examination Materials. University of Nevada, 2010.
9 McHugh RK, Behar E. Readability of self-report measures of depression and anxiety. J Consult Clin Psychol 2009; 77: 1100–12.
10 McHugh RK, Rasmussen JL, Otto MW. Comprehension of self-report evidence-based measures of anxiety. Depress Anxiety 2011; 28: 607–14.
11 Jensen SA, Fabiano GA, Lopez-Williams A, Chacko A. The reading grade level of common measures in child and adolescent clinical psychology. Psychol Assess 2006; 18: 346–52.
12 Wolpert M, Jacob J, Napoleone E, Whale A, Calderon A, Edbrooke-Childs J. Child- and Parent-Reported Outcomes and Experience from Child and Young People's Mental Health Services 2011–2015. CAMHS Press, 2016.
13 Fink E, Patalay P, Sharpe H, Holley S, Deighton J, Wolpert M. Mental health difficulties in early adolescence: a comparison of two cross-sectional studies in England from 2009 to 2014. J Adolesc Health 2015; 56: 502–7.
14 Patalay P, Hayes D, Deighton J, Wolpert M. A comparison of paper and computer administered Strengths and Difficulties Questionnaire. J Psychopathol Behav Assess 2016; 38: 242–50.
15 Goodman R, Meltzer H, Bailey V. The Strengths and Difficulties Questionnaire: a pilot study on the validity of the self-report version. Eur Child Adolesc Psychiatry 1998; 7: 125–30.
16 Muris P, Meesters C, Eijkelenboom A, Vincken M. The self-report version of the Strengths and Difficulties Questionnaire: its psychometric properties in 8- to 13-year-old non-clinical children. Br J Clin Psychol 2004; 43: 437–48.
17 Curvis W, McNulty S, Qualter P. The validation of the self-report Strengths and Difficulties Questionnaire for use by 6- to 10-year-old children in the UK. Br J Clin Psychol 2014; 53: 131–7.
18 Goodman R, Meltzer H, Bailey V. The Strengths and Difficulties Questionnaire: a pilot study on the validity of the self-report version. Int Rev Psychiatry 1998; 15: 173–7.
19 Kincaid JP, Fishburne RP Jr, Rogers RL, Chissom BS. Derivation of New Readability Formulas (Automated Readability Index, Fog Count and Flesch Reading Ease Formula) for Navy Enlisted Personnel. Naval Technical Training Command, 1975.
20 Gunning R. The Technique of Clear Writing. McGraw-Hill, 1952.
21 Coleman M, Liau TL. A computer readability formula designed for machine scoring. J Appl Psychol 1975; 60: 283–4.
22 Chall JS, Dale E. Readability Revisited: The New Dale–Chall Readability Formula. Brookline Books, 1995.
23 Emerson E, Hatton C. Mental health of children and adolescents with intellectual disabilities in Britain. Br J Psychiatry 2007; 191: 493–9.
24 Schinka JA. Further issues in determining the readability of self-report items: comment on McHugh and Behar (2009). J Consult Clin Psychol 2012; 80: 952–5.
25 Willis GB. Cognitive Interviewing: A Tool for Improving Questionnaire Design. Sage Publications, 2004.
26 Parker RM, Williams MV, Weiss BD, Baker DW, Davis TC, Doak CC, et al. Health literacy: report of the Council on Scientific Affairs. JAMA 1999; 281: 552–7.