
Potential value of the current mental health monitoring of children in state care in England

Published online by Cambridge University Press:  14 November 2018

Christine Cocker*
Affiliation:
Senior Lecturer in Social Work, School of Social Work, University of East Anglia, UK
Helen Minnis
Affiliation:
Professor of Child and Adolescent Psychiatry, Institute of Health and Wellbeing, University of Glasgow, UK
Helen Sweeting
Affiliation:
Reader, MRC/CSO Social & Public Health Sciences Unit, University of Glasgow, UK.
*
Correspondence: Christine Cocker, School of Social Work, University of East Anglia, Norwich Research Park, Norwich NR4 7TJ, UK. Email: [email protected]

Abstract

Background

Routine screening to identify mental health problems in English looked-after children has been conducted since 2009 using the Strengths and Difficulties Questionnaire (SDQ).

Aims

To investigate the degree to which data collection achieves screening aims (identifying scale of problem, having an impact on mental health) and the potential analytic value of the data set.

Method

Department for Education data (2009–2017) were used to examine: aggregate, population-level trends in SDQ scores in 4/5- to 16/17-year-olds; representativeness of the SDQ sample; attrition in this sample.

Results

Mean SDQ scores were stable over 9 years, with around 50% of children scoring ‘abnormal’ or ‘borderline’ each year. Levels of missing data were high (25–30%), as was attrition (only 28% retained for 4 years). Cross-sectional SDQ samples were not representative and longitudinal samples were biased.

Conclusions

Mental health screening appears justified and the data set has research potential, but the English screening programme falls short because of missing data and inadequate referral routes for those with difficulties.

Declaration of interest

None.

Type
Papers
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © The Royal College of Psychiatrists 2018

The mental health of children in state care is of great concern. Because of this, in 2009 the Department for Education introduced compulsory mental health data collection for these children in England by using the Strengths and Difficulties Questionnaire (SDQ). This article examines the degree to which the current mass data collection achieves screening aims and the potential analytic value of the resulting data set.

The SDQ

The SDQ is an internationally validated questionnaire1,2 comprising 25 items, which are grouped into five scales: emotional symptoms, conduct problems, hyperactivity, friendship/peer problems and pro-social behaviour. A total difficulties score is created by summing the scores from the first four scales. The cut-offs for this score were originally chosen ‘so that roughly 80% of children in the community are normal, 10% are borderline, and 10% are abnormal’.3 There are (almost identical) versions for completion by: parents/carers/teachers of 4- to 16-year-olds, parents/carers/teachers of 3- to 4-year-olds and 11- to 16-year-olds themselves. In addition to high specificity (80%) and sensitivity (85%),4 the main benefits of the SDQ are that it is free, quick and straightforward to use.5 However, cross-informant agreement tends to be lower for internalising than for (more observable) externalising behaviours,6 and emotional symptoms are best identified by self-reports.7 The SDQ is one of the most widely used and recognised child and adolescent screening tools.5 In the UK, it has been used successfully to screen for child psychiatric disorders in both community8 and looked-after children samples, with the study of looked-after children concluding that ‘screening with the SDQ (carer and teacher versions) could improve the detection and treatment of behavioural, emotional, and concentration problems among looked-after children’4 (p. 30).
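The arithmetic of the total difficulties score and its banding can be sketched as follows. This is an illustrative simplification only: the subscale names and cut-offs are those given above, but the example responses are hypothetical and the official SDQ algorithm's handling of reverse-scored items and missing responses is omitted.

```python
# Illustrative sketch of SDQ total difficulties scoring (NOT the official
# algorithm: reverse-scored items and missing-data rules are omitted).
# Each of the five subscales comprises 5 items scored 0-2, so each
# subscale ranges 0-10 and the total difficulties score ranges 0-40.

DIFFICULTY_SCALES = ["emotional", "conduct", "hyperactivity", "peer"]
# "prosocial" is the fifth scale but is NOT counted in total difficulties.

def total_difficulties(subscale_scores):
    """Sum the four difficulty subscales (each 0-10)."""
    return sum(subscale_scores[s] for s in DIFFICULTY_SCALES)

def band(total):
    """Banding used in the Department for Education returns."""
    if total <= 13:
        return "normal"
    if total <= 16:
        return "borderline"
    return "abnormal"  # 17-40

# Hypothetical subscale scores for one child
scores = {"emotional": 4, "conduct": 3, "hyperactivity": 6,
          "peer": 2, "prosocial": 8}
t = total_difficulties(scores)
print(t, band(t))  # 15 borderline
```

Note that the pro-social score plays no part in the banding; it is reported separately as a strength.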

Data collection for looked-after children in England

In England, it is compulsory to collect mental health data (using the carer-report SDQ) from all children aged 4/5–16/17 who have been in state care for 1 year or longer.9 The mental health of these children is known to be poor10 and routine SDQ data are seen by the Department for Education as both a way of identifying ‘the scale of the problem’ and, at an individual level, of highlighting ‘the likelihood that the child either has, or could develop significant mental health problems’11 (p. 125). The Department for Education recommends the SDQ ‘is used to help decision-making about links with Child and Adolescent Mental Health Services (CAMHS)’; suggests that ‘In the longer term, data from SDQ returns will give an indication on how effective the service provision provided is in meeting the needs of looked after children’ (p. 125); and notes that over time ‘records can show a child's progress – whether difficulties identified remain or, if appropriate interventions have been put in place, whether they have eased’11 (p. 128). The Department for Education's aim thus seems to be to use the SDQ in multiple ways: as an indicator of those children and young people (CYP) who are at greater risk than the general population of developing mental health problems; as an outcome measure to monitor the impact of services; and as a means of tracking CYP who are in the care of the state over time. Since routinely collected demographic, health and placement variables are included with the SDQ in the data set (English SSDA903), it is also a potential source of rich longitudinal data for researchers. SDQs are not completed at entry into care, which rules out before–after analyses; however, it should be possible to use the data set to track demographic, health and/or placement correlates of changes in scores over time. As far as we are aware, this is the first time analyses such as these have been done.

Aims

This article examines:

  (a) the degree to which the current programme has achieved the intention of providing screening to identify the scale of the problem, and whether it has had an impact on the mental health of looked-after children in England;

  (b) the potential value of analysing the data set created by that programme.

Method

Examining population trends

A common first step in evaluating screening programmes is to examine population trends (e.g. time trends in breast cancer mortality to assess the impact of mammographic screening12). The SDQ is an indicator of the prevalence of disorders.9 Therefore, one way to evaluate screening of looked-after CYP is to examine aggregate, population-level trends in SDQ scores over time. In this case, the screening ‘intervention’ is also the outcome measure and, if screening had a positive impact on practice (e.g. leading to effective referral and treatment), we might expect this to be reflected in reduced population SDQ scores over time. Publicly available aggregated data (for example13) allowed us to examine population trends in the annual SDQ returns for the 9 years (2009–2017) for which data were available. These include the number of valid SDQ returns (overall and for individual local authorities); the percentage of those eligible with a return; the mean total difficulties score (range 0–40); and the percentages with ‘normal’ (0–13), ‘borderline’ (14–16) and ‘abnormal’ (17–40) scores.
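The kind of summary this step requires can be sketched from the aggregate returns alone. In the sketch below the per-year counts and mean scores are hypothetical placeholders, not Department for Education figures; only the derived quantities (completion rate, percentage missing, overall mean) mirror the quantities examined above.

```python
# Sketch: summarising aggregate annual SDQ returns. The counts and mean
# scores below are hypothetical placeholders, not DfE figures.
from statistics import mean

# year -> (valid SDQ returns, eligible children, mean total difficulties)
returns = {
    2015: (30000, 42000, 14.1),
    2016: (32000, 42500, 13.9),
    2017: (33000, 43000, 14.0),
}

def completion_rate(year):
    """Percentage of eligible children with a valid SDQ return."""
    valid, eligible, _ = returns[year]
    return 100 * valid / eligible

# Percentage of missing returns per year, and the mean score across years
missing = {y: 100 - completion_rate(y) for y in returns}
overall_mean = mean(score for _, _, score in returns.values())
print({y: round(m, 1) for y, m in missing.items()}, round(overall_mean, 2))
```

A stable `overall_mean` alongside persistently high `missing` percentages is exactly the pattern reported in the Results below.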

Examining representativeness

To accurately identify the scale of the problem, the SDQ data set would either need to have 100% coverage (the aim of the SSDA903 data collection) or cover a representative sample. To investigate representativeness, we conducted analyses based on the English SSDA903 data set provided, on request, by the Department for Education, which included SDQ data from 2009 to 2012. This comprised individualised data (including demographic and placement-related variables as well as the SDQ) collected annually from every English local authority relating to all CYP who had been looked after continuously for a year or longer at 31 March of the year in question. We compared selected key characteristics of children aged 4–17 with and without an SDQ to determine the representativeness of those with an SDQ data return.
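A comparison of this kind reduces, for each binary characteristic, to a test on a 2 × 2 table (with/without an SDQ return against, say, fostered/not fostered). The sketch below hand-rolls the Pearson chi-square statistic so it is self-contained; the counts are hypothetical, not taken from the SSDA903 data set.

```python
# Sketch: testing whether a characteristic differs between children with
# and without an SDQ return. The counts below are hypothetical.

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 d.f., no continuity correction)
    for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    r1, r2, c1, c2 = a + b, c + d, a + c, b + d
    assert min(r1, r2, c1, c2) > 0, "empty row or column"
    return n * (a * d - b * c) ** 2 / (r1 * r2 * c1 * c2)

# rows: with / without an SDQ return; columns: fostered / not fostered
stat = chi_square_2x2(6400, 3600, 4000, 6000)
print(round(stat, 1))
```

Values of the statistic above 3.84 indicate P < 0.05 at 1 degree of freedom; with samples in the tens of thousands, even small percentage differences (such as 59% versus 55% white, above) produce very large statistics.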

Examining attrition

For meaningful longitudinal analyses (based on a data set linking individual children over 2 or more years), a representative sample needs to retain sufficient numbers over time and the characteristics of those retained compared with those lost to follow-up should be known. We selected children with a 2009 SDQ return and examined the proportion retained and whether those retained over time differed from those lost. We did this by comparing the 2009 (baseline) characteristics of children with and without longitudinal data over 2, 3 and 4 consecutive years.
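The retention calculation described above amounts to intersecting the sets of child identifiers present in consecutive annual waves. The sketch below uses a handful of hypothetical IDs purely to show the mechanics; in the real data set the identifiers are the anonymised child IDs in the SSDA903 returns.

```python
# Sketch: computing retention across consecutive annual SDQ waves by
# linking child identifiers. The IDs below are hypothetical.

waves = {
    2009: {"a", "b", "c", "d", "e"},
    2010: {"a", "b", "c", "e"},
    2011: {"a", "c"},
    2012: {"a"},
}

def retained(baseline_year, n_years):
    """IDs present in every one of n_years consecutive waves from baseline."""
    kept = set(waves[baseline_year])
    for year in range(baseline_year + 1, baseline_year + n_years):
        kept &= waves[year]
    return kept

base = len(waves[2009])
for n in (2, 3, 4):
    pct = 100 * len(retained(2009, n)) / base
    print(n, sorted(retained(2009, n)), round(pct))
```

Comparing the baseline characteristics of `retained(2009, n)` against the baseline sample minus that set is then the attrition-bias check reported in the Results.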

Ethical approval

The study was approved by the University of Glasgow Medical Faculty Ethics Committee (2011/FM06009). Data were provided by the Department for Education Data Services Group in 2011 and 2014.

Results

Population trends

Figure 1 (based on Table 1) shows SDQ completion rates and scores from 2009 to 2017. Since the introduction of compulsory data collection, the mean SDQ score has remained consistently close to 14, with around half of all children screened falling within the abnormal or borderline score categories. Levels of missing data were around 30% each year from 2009 to 2015 and 25% in 2016–2017. Table 1 shows the range of local authority data-return rates from 2009 to 2016 (2017 data by local authority not available). In 6 of the 8 years, a small number of local authorities submitted no returns; however, over this period the percentage of local authorities submitting returns for 66% or fewer eligible children decreased from 34.4% in 2009 to 21% in 2016.

Fig. 1 Summary of Department for Education SDQ aggregated data 2009–2017. Percentage with ‘normal’, ‘borderline’ and ‘abnormal’ scores (left-hand axis); mean total difficulties score (right-hand axis); x-axis shows percent SDQ returns from those eligible in each year.

Table 1 Summary of Department for Education SDQ aggregated data over 9 years (2009–2017)

SDQ, Strengths and Difficulties Questionnaire.

a. 2009–11 sample stated as aged 4–16.

b. 2012–17 sample stated as aged 5–16.

c. Based on a total of 154 local authorities in 2009–2011 and 152 local authorities in 2012–2017 (from 2012 Cheshire and Bedfordshire ceased to exist as separate authorities).

d. 2017 data by local authority not available.

e. In 2009 there was an anomaly with the data returns and a small number of local authorities returned more than 100% of data.

f. SDQ range 0–40; categorised as 0–13 = normal, 14–16 = borderline and 17–40 = abnormal.3

Representativeness

Table 2 compares the characteristics of the CYP about whom data returns were and were not made in 2009 (results were similar for subsequent years). It shows that those with an SDQ were significantly (all P < 0.001) more likely to be white (59% compared with 55%), in the middle of the age range (67% of 11- to 15-year-olds compared with only 39% of 16- to 17-year-olds), to have no disability (59% compared with 45%) and to be fostered (64% compared with around 50% living in adoption, temporary or residential accommodation; 39% with parents; 17% living independently).

Table 2 Characteristics of those with and without an SDQ data return in 2009 (4–17 year olds)

SDQ, Strengths and Difficulties Questionnaire.

a. The data set we received included 22 681 children (58.3% of the total 38 887) aged 4–17 with an SDQ return. This included 21 669 (64.5% of 33 606) children aged 4–16 and 22 092 (58.7% of 37 620) aged 5–17 with a return. We assume the figure of 22 700 valid SDQ returns in the Department for Education 2009 summary figures shown in Table 1 is the result of rounding, but the 58.3% return rate in the data set does not tally with the 68% figure provided in the summary figures. However, 22 681 is 67.5% of the number of 4- to 16-year-olds in the data set. It is therefore possible that the 2009 Department for Education return rate is based on a numerator of SDQ returns from 4- to 17-year-olds and a denominator of total 4- to 16-year-olds. Government publications themselves are inconsistent in this respect, with one noting both that ‘This indicator [was] … completed for just 65 per cent of the eligible cohort’ and, later in the same document, that ‘SDQ scores were only submitted for 59% of eligible children’.14

b. Disability was defined as the reason for entry into care rather than whether or not the child has a disability. It is therefore likely to only identify children who have profound needs.

c. There were 81 cases with missing data on ‘placement’; these were excluded.
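The return-rate discrepancy discussed in footnote a of Table 2 can be checked directly from the counts given there; the sketch below simply tries the candidate numerator/denominator combinations.

```python
# Checking which numerator/denominator combination reproduces the
# published 2009 return rates (counts from Table 2, footnote a).
returns_4_17 = 22681   # children aged 4-17 with an SDQ return
total_4_17 = 38887     # all children aged 4-17 in the data set
total_4_16 = 33606     # all children aged 4-16 in the data set

rate_in_dataset = round(100 * returns_4_17 / total_4_17, 1)  # 58.3
candidate_rate = round(100 * returns_4_17 / total_4_16, 1)   # 67.5
print(rate_in_dataset, candidate_rate)
```

The second figure (67.5%) is close to the 68% headline rate, supporting the reading that the published denominator excluded 17-year-olds.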

Attrition

Table 3 compares the 2009 (baseline) characteristics of children with and without longitudinal data over 2, 3 and 4 consecutive years. Of those with an SDQ return in 2009, 64% were retained for 2 years (2009–2010), whereas only 28% were retained for 4 years (2009–2012). Those retained in the longitudinal data set were similar to those lost with respect to gender and ethnicity, but they were significantly (all P < 0.001) less likely to have had a disability, and more likely to have been in foster care and to have had an abnormal score in 2009. For example, those retained from 2009 to 2012 included 28% of those with no disability compared with 17% of those with a disability, and 30% of those whose 2009 SDQ scores were abnormal compared with 26% of those whose scores were normal. Children with SDQ returns over consecutive years are therefore not representative of those with an SDQ in any 1 year.

Table 3 Characteristics of those with and without SDQ data for 2, 3 and 4 consecutive years

SDQ, Strengths and Difficulties Questionnaire.

a. Disability was defined as the reason for entry into care rather than whether or not the child has a disability. It is therefore likely to only identify children who have profound needs.

b. There were 16 cases with missing data on ‘placement’; these were excluded.

Discussion

Examination of the English SSDA903 data set shows no change in levels of mental health problems in looked-after children since routine screening was introduced in 2009. We found significant levels of missing data and poorly representative cross-sectional and longitudinal samples.

Whether to screen for mental disorders in looked-after children

Given these findings, a first reaction might be to ask whether SDQ screening of looked-after children is justified. Screening programmes are ‘designed to detect early signs of disease in the population and then to provide a reliable method of referral for diagnostic testing and further treatment’.15 The following ten ‘influential principles’16 (first published in 1968 and described as ‘a public health classic’17) have been widely used to consider whether to screen populations for noninfectious diseases: (1) the condition should be an important health problem; (2) there should be an accepted treatment; (3) facilities for diagnosis and treatment should be available; (4) there should be a recognisable latent/early symptomatic stage; (5) there should be a suitable test/examination; (6) the test should be acceptable; (7) the natural history of the condition should be adequately understood; (8) there should be an agreed policy on whom to treat; (9) the economic costs of case-finding and of providing care should be considered; and (10) case-finding should be a continuing process.18 We suggest these criteria are largely fulfilled by using the SDQ to screen for mental health problems in looked-after children. In particular, prevalence studies show high rates of mental disorder within the looked-after population,10 indicating public health importance. Understanding of the natural history of child and adolescent mental disorders is increasing, with evidence that early symptoms can often be identified.19 Cost-effective, evidence-based programmes for particular groups, such as Multidimensional Treatment Foster Care20 and Attachment and Biobehavioral Catch-up for vulnerable infants,21 are available.
In addition, the SDQ is a cheaper, shorter alternative to longer measures, yet it has good sensitivity and specificity.22

More recently, it has been suggested that screening programmes should be evaluated in terms of the balance between their benefits (probability of an adverse health outcome without screening; degree to which screening identifies all those who suffer the adverse health outcome; health benefit of earlier versus later treatment) and harms (frequency and experience of those with false-positive tests or who are over-diagnosed; frequency and severity of harms of treatment).16 Weighing up this balance in the context of screening looked-after children requires acknowledgement of the potential stigma of a mental disorder diagnosis/label; in light of this, evidence-based interventions (e.g. enhanced foster care, enhanced sensitivity to foster infants, additional resources) may seem less likely to cause harm than treatments for screening-identified physical illnesses (e.g. surgery, radio- and chemotherapy for cancers). Again, screening for mental health problems in looked-after children appears justified.

Do we have an effective screening programme in England?

Identification of those with problems is only the first step; the next is to address those problems. However, English local authorities have inadequate referral routes to CAMHS once the SDQ has identified children with possible mental disorders.23 The current programme of compulsory SDQ returns comes at a time when financial pressures mean many specialist teams offering support to looked-after children have been cut.24 The scheme incurs financial costs of its own and, despite the Department for Education's desire to improve routes to CAMHS, there is no mechanism to ensure abnormal SDQ results routinely lead to referral and treatment of identified individuals. The absence of such a mechanism is a policy-implementation deficit and we recommend renewed consideration of the programme, especially of referral pathways. Annual mean SDQ scores have remained remarkably consistent since the screening was launched, suggesting its introduction has not been associated with any change in the mental health of English looked-after children at a population level.

The expectation from the Department for Education is that these data are gathered annually,9,25 but high levels of missing data undermine this, with considerable variance in local authority completion rates in England. These missing returns are likely to reflect both how those within local authorities involved in collecting the data understand its value and the data-collection process itself. The latter involves encouraging completion by the child's carer, followed by questionnaire collection, data entry and collation by local authority administrators, looked-after children specialist nurses or specialist looked-after children CAMHS practitioners. Although recent slight increases in rates suggest systems may be improving, we need to better understand why so many SDQ scores are missing.

There are ethical issues associated with continuing this policy in its current form if nothing is then done with these data to assess and support those CYP identified as having problems. Compulsory SDQ monitoring has enabled the scale of mental health problems to be identified among looked-after CYP and, as a public health intervention, there are benefits to regularly overseeing the mental health of a highly vulnerable group. Given the relative stability in these population-based data, there may be little benefit in continuing with the expense of data collection without first addressing the ethical and moral imperatives of the missing data and referral pathways to additional services for CYP who need support.

We argue that the current data collection is not achieving the screening programme aims and that some modifications of the existing system need to occur to improve the mental health of looked-after children.

At the time of writing (autumn 2018), baseline data on mental health are not routinely collected about CYP at entry to care. This could be construed as an oversight in the current system's design, one that could be remedied by incorporating the SDQ into the CYP's first medical. Investment in ten pilot sites that aim to improve mental health assessments for children entering the care system was announced in June 2018, as the Department for Education and Department of Health and Social Care accept that ‘looked-after children should undertake the SDQ as a starting point when they come into care, and then each year as part of compiling an accurate picture of their health needs’ (p. 6).26

What could the data tell us beyond screening?

This mass data collection exercise might be useful for examining geographical variations or time trends, or as a performance indicator for local authorities.4,23,25 Data derived from the SDQ screening programme are only available for about 70–75% of children in any 1 year, and approximately 40% of children move in and out of the care system each year,27 meaning that useful, representative, longitudinal analyses would be challenging – although not impossible if levels of missing data were reduced. This vast and annually increasing data set has great research potential: it is possible that lack of change at the population level masks real effects at an individual level, and careful consideration of how individual-level analyses could be achieved should be part of any revision of the system.

Suggestions for improvement of the current system

We suggest consideration of the further opportunities the annual SDQ data collection affords, both in terms of its analytic potential and as a screening programme. Currently the screening programme falls short, due to large amounts of missing data and no link to any ‘next steps’ for those children whose scores indicate likely disorder. As a data set, investment in better completion and more complex analyses may increase understandings of (likely reciprocal) associations between looked-after children's emotional/behavioural difficulties and both demographic and placement-related factors. Screening should not occur in isolation; investment in better systems would ensure SDQ scores for individual children are scrutinised, used in decision-making and – where they indicate likely psychiatric diagnosis – trigger clear referral pathways. These actions could result in improved placement and health outcomes for looked-after children, and this would be a worthwhile investment.

Limitations

Our use of publicly available data and simple statistical analyses aimed to demonstrate time trends and examine representativeness. Some might argue that it is impossible to evaluate the impact of screening using the SDQ without conducting longitudinal analyses (e.g. comparing outcomes for those with/without an SDQ, or those coming into the system at earlier/later time points) or by examining proxy data on service referral rates, access and/or effectiveness as outcome indicators. We contend that examining population-level trends in SDQ scores offers insight into the impact of screening looked-after children, and that there are flaws inherent in any longitudinal analyses of incomplete data. Our simple analyses are thus an important first step in examining the SDQ screening programme.

References

1 Goodman R, Ford T, Corbin T, Meltzer H. Using the Strengths and Difficulties Questionnaire (SDQ) multi-informant algorithm to screen looked-after children for psychiatric disorders. Eur Child Adolesc Psychiatry 2004; 13(Suppl 2): ii25–31.
2 Youth in Mind. Information for Researchers and Professionals about the Strengths & Difficulties Questionnaires (http://www.sdqinfo.com).
3 Goodman R. The Strengths and Difficulties Questionnaire: a research note. J Child Psychol Psychiatry 1997; 38: 581–6.
4 Goodman R, Ford T, Corbin T, Meltzer H. Using the Strengths and Difficulties Questionnaire (SDQ) multi-informant algorithm to screen looked-after children for psychiatric disorders. Eur Child Adolesc Psychiatry 2004; 13(Suppl 2): ii25–31.
5 Tarren-Sweeney M. The Brief Assessment Checklists (BAC-C, BAC-A): mental health screening measures for school-aged children and adolescents in foster, kinship, residential and adoptive care. Child Youth Serv Rev 2013; 35: 771–9.
6 Cheng S, Keyes KM, Bitfoi A, Carta MG, Koç C, Goelitz D, et al. Understanding parent–teacher agreement of the Strengths and Difficulties Questionnaire (SDQ): comparison across seven European countries. Int J Methods Psychiatr Res 2018; 27: e1589.
7 Aebi M, Kuhn C, Banaschewski T, Grimmer Y, Poustka L, Steinhausen H-C, et al. The contribution of parent and youth information to identify mental health disorders or problems in adolescents. Child Adolesc Psychiatry Ment Health 2017; 11: 112.
8 Goodman R, Ford T, Simmons H, Gatward R, Meltzer H. Using the Strengths and Difficulties Questionnaire (SDQ) to screen for child psychiatric disorders in a community sample. Br J Psychiatry 2000; 177: 534–9.
9 Department for Education. Guidance on Data Collection on the Emotional Health of Looked After Children. Department for Education, 2012 (https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/253566/ssda903_sdq_guidance_2012-13_version1-0.pdf).
10 Meltzer H, Gohuard R, Corbin T, Goodman R, Ford T. The Mental Health of Young People Looked After by Local Authorities in England: The Report of a Survey Carried out in 2002 by Social Survey Division of the Office for National Statistics on behalf of the Department of Health. Office for National Statistics, The Stationery Office, 2003.
11 Department for Education. Children Looked After by Local Authorities in England: Guide to the SSDA903 Collection 1 April 2015 to 31 March 2016. Department for Education, 2015 (https://www.gov.uk/government/publications/children-looked-after-return-2015-to-2016-guide).
12 Broeders M, Moss S, Nyström L, Njor S, Jonsson H, Paap E, et al. The impact of mammographic screening on breast cancer mortality in Europe: a review of observational studies. J Med Screen 2012; 19: 14–25.
13 Department for Education. Children Looked After in England (Including Adoption), Year Ending 31 March 2017: Additional Tables. Department for Education, 2017 (https://www.gov.uk/government/statistics/children-looked-after-in-england-including-adoption-2016-to-2017).
14 Department for Children, Schools and Families. Children Looked After in England (Including Adoption and Care Leavers) Year Ending 31 March 2009. Department for Children, Schools and Families, 2009 (http://webarchive.nationalarchives.gov.uk/20101008101451/http://www.dcsf.gov.uk/rsgateway/DB/SFR/s000878/SFR25-2009Version2.pdf).
15 National Health Service (NHS). Screening Programmes: National Services Division Scotland. National Health Service, 2016 (http://www.nsd.scot.nhs.uk/services/screening/).
16 Harris R, Sawaya G, Moyer V, Calonge N. Reconsidering the criteria for evaluating proposed screening programs: reflections from 4 current and former members of the U.S. Preventive Services Task Force. Epidemiol Rev 2011; 33: 20–35.
17 Andermann A, Blancquaert I, Beauchamp S, Dery V. Revisiting Wilson and Jungner in the genomic age: a review of screening criteria over the past 40 years. Bull World Health Organ 2008; 86: 317–9.
18 Wilson J, Jungner G. Principles and Practice of Screening for Disease. World Health Organization, 1968.
19 Costello E. Early detection and prevention of mental health problems: developmental epidemiology and systems of support. J Clin Child Adolesc Psychol 2016; 45: 710–7.
20 Fisher P, Chamberlain P, Leve L. Improving the lives of foster children through evidence-based interventions. Vulnerable Child Youth Stud 2009; 4: 122–7.
21 Dozier M, Bernard K. Attachment and Biobehavioral Catch-up: addressing the needs of infants and toddlers exposed to inadequate or problematic caregiving. Curr Opin Psychol 2017; 15: 111–7.
22 Mason W, Chmelka M, Thompson R. Responsiveness of the Strengths and Difficulties Questionnaire (SDQ) in a sample of high-risk youth in residential treatment. Child Youth Care Forum 2012; 41: 479–92.
23 Bazalgette L, Rahilly T, Trevelyan G. Achieving Emotional Wellbeing for Looked After Children. National Society for the Prevention of Cruelty to Children, 2015.
24 House of Commons Education Committee. Mental Health and Well-being of Looked-After Children: Fourth Report of Session 2015–16. House of Commons Education Committee, 2016.
25 Department for Education, Department of Health. Promoting the Health and Wellbeing of Looked After Children. Department for Education, Department of Health, 2015.
26 Department of Health, Department for Education. Mental Health and Wellbeing of Looked-After Children: Government Response to the Committee's Fourth Report of Session 2015–16. Department for Education, Department of Health, 2016.
27 Department for Education. Children Looked After in England (Including Adoption and Care Leavers) Year Ending 31 March 2015. Department for Education, 2015.