
Predicting the development of schizophrenia

Published online by Cambridge University Press:  02 January 2018

Alex J. Mitchell*
Affiliation:
University of Leicester, UK. Email: [email protected]

Type: Columns

Copyright © 2012 The Royal College of Psychiatrists

Chuma & Mahadun 1 report on a much needed and topical meta-analysis of prospective studies investigating the predictive validity of prodromal criteria in schizophrenia. The potential importance of early identification and treatment cannot be overstated. The authors should be congratulated for helping clarify whether the identification component is currently worthwhile. I have no doubt that this paper is generally well conducted and that, for the ‘ultra-high-risk’ strategy, the sample size is reasonable, but I am afraid I cannot agree with their interpretation of the results. In particular, they conclude that both ultra-high-risk and basic-symptoms criteria are valid and useful tools for predicting the future development of schizophrenia in the ‘at-risk population’, and that ultra-high-risk criteria were able to ‘correctly predict schizophrenia’ (citing a sensitivity of 81%) while being able to ‘exclude this condition with some certainty’ (citing a specificity of 67%). Taken at face value, clinicians would conclude that these methods both rule in those who are going to develop schizophrenia and rule out those who will not, with high certainty. A small point, but sensitivity relates more closely to the ability to rule out a condition (and is linked with negative predictive value) and specificity to the ability to rule in a condition; hence Sackett’s mnemonics SpPIn and SnNOut. In black-and-white terms, a specificity of 67% immediately suggests there will be a problem with false positives. But neither sensitivity nor specificity is a substitute for the positive and negative predictive values, which are the actual accuracy rates for every person identified as at high risk (screen positive) or low risk (screen negative) by these tools once the conversion rate is taken into account. I am uncertain why the authors present a clinically obscure statistic such as the diagnostic odds ratio (DOR) but omit the informative ones, namely the positive predictive value (PPV) and negative predictive value (NPV).

Using the pooled estimates of 81% sensitivity and 67% specificity and a conversion rate of 21% (402 of 1918 at baseline), the PPV of the ultra-high-risk method(s) would be 39.4%, meaning only about four out of ten people identified as ‘will progress to schizophrenia’ actually would do so, and six would not. Of course, we do not know whether others would progress if the follow-up period were extended, but this is currently speculation requiring re-examination of these tools over a longer period. Hypothetically, if 30% of people progressed, then the PPV of the ultra-high-risk method(s) could rise to about 50%, which is still disappointing in my opinion. More encouragingly perhaps, even at a 21% conversion rate the NPV would be 93.0%, meaning almost 19 out of 20 people thought to be at low risk would not progress. The numbers for the basic-symptoms criteria are similar but with an even better NPV (PPV = 38.6%; NPV = 98.7%). That said, it is not immediately obvious that only 60–70% of people who will not convert are placed in a low-risk category by the tool (i.e. it is redundant in about a third of cases), and the basic-symptoms data come from only one study with 160 participants.
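For readers who wish to verify these figures, the predictive values follow directly from the pooled sensitivity and specificity combined with the conversion rate. The short sketch below is a minimal illustration only (assuming the 81%/67% pooled estimates and the 402/1918 conversion rate quoted above; it is not code from the original analysis), and it reproduces the 39.4% and 93.0% figures.

def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) for a test applied at the given prevalence (here, the conversion rate)."""
    tp = sensitivity * prevalence              # true positives per unit of population
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    tn = specificity * (1 - prevalence)        # true negatives
    fn = (1 - sensitivity) * prevalence        # false negatives
    return tp / (tp + fp), tn / (tn + fn)

# Pooled ultra-high-risk estimates quoted above: 81% sensitivity, 67% specificity,
# conversion rate of 402/1918 (about 21%).
ppv, npv = predictive_values(0.81, 0.67, 402 / 1918)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")  # PPV = 39.4%, NPV = 93.0%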

I appreciate that many might find these statistical terms confusing. Previously, I have proposed a simple tally of false positives and false negatives per 100 patients seen, which I called the real-world interpretation (yield). So, for every 100 individuals thought to be at risk and assessed with the ultra-high-risk criteria, 17 would be correctly classified as converters to schizophrenia and 4 converters would be missed; 53 would be correctly classified as non-converters but 26 would be falsely identified as converters. In effect, there would be about six times as many false positives as false negatives. If each ‘positive’ were treated then, by the ratio of false positives to true positives, about 50% more patients with no prospect of psychosis would be treated than patients actually at risk of psychosis. I wonder whether these error rates are really acceptable when mental health resources are stretched and the long-term adverse effects of antipsychotics are regarded as more problematic than ever before. I therefore ask the authors to reconsider whether these approaches are truly valid for both rule-in and rule-out purposes when the data support mainly the latter. I also suggest a novel future study in which clinicians working with high-risk patients are randomised to predict risk with and without the tools, a method that would elucidate the ‘added value’ of the tools in clinical practice.
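To make the per-100 arithmetic explicit, the counts above can be reproduced with the same pooled estimates; again, this is only a worked sketch assuming 81% sensitivity, 67% specificity and a 21% conversion rate, with results rounded to whole patients.

# 'Real-world yield' per 100 people assessed with the ultra-high-risk criteria,
# using the pooled estimates quoted above (an illustration, not the authors' code).
sens, spec, prev, n = 0.81, 0.67, 0.21, 100

true_positives  = round(sens * prev * n)              # 17 converters correctly flagged
false_negatives = round((1 - sens) * prev * n)        # 4 converters missed
true_negatives  = round(spec * (1 - prev) * n)        # 53 non-converters correctly reassured
false_positives = round((1 - spec) * (1 - prev) * n)  # 26 non-converters wrongly flagged

print(true_positives, false_negatives, true_negatives, false_positives)  # 17 4 53 26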

References

1 Chuma, J, Mahadun, P. Predicting the development of schizophrenia in high-risk populations: systematic review of the predictive validity of prodromal criteria. Br J Psychiatry 2011; 199: 361–6.