Kruijt & Carlbring judiciously uncover significant methodological problems in the narrative re-analysis by Grafton and colleagues1 of our previous meta-analysis on the effectiveness of cognitive bias modification (CBM) interventions in anxiety and depression.2 The letter reinforces what we had previously noted in our invited comment,3 namely that our approach had been grossly misconstrued. In the meta-analysis, we pooled all anxiety outcomes measured on validated instruments at post-intervention, whether these assessed clinical symptoms, state anxiety or trait anxiety. We specifically excluded measures administered after a stressor induction task. If multiple measures in the same outcome category (for example, general anxiety) were reported, we averaged them at study level. Grafton and colleagues claim to have re-analysed the anxiety data so as to reflect ‘change in emotional vulnerability’ (p. 268). Not only is this construct vague and its application susceptible to bias, but, as Kruijt & Carlbring justly note, Grafton et al simply selected some of the already computed effect sizes and pooled them again. Essentially, this approach reflects the same mix comprising all anxiety outcomes, measured in the absence of a stressor induction task and averaged at study level, just drawn from a more restricted pool of studies. To implement their new set of criteria, Grafton and colleagues should have recalculated effect sizes from study-level data, excluding the measures and time points they did not deem appropriate for the elusive construct of emotional vulnerability. As it stands, their re-analysis remains an arbitrary post hoc selection of study effects.
Yet a larger and more crucial problem lies in the central claim of Grafton et al, echoed by many leading CBM advocates: that the effectiveness of these interventions should only be weighed in trials where they successfully modified bias. Kruijt & Carlbring adeptly liken this to familiar arguments for homeopathy. However, it also reflects a fundamental misunderstanding of how causal inference and confounding operate in a randomised design. Identifying the trials in which both bias and outcomes were successfully changed is only possible post hoc, as both are outcomes measured after randomisation; reverse engineering the connection between the two is subject to confounding. Bias and symptom outcomes are usually measured at the same time points in a trial, making it impossible to establish temporal precedence.4 Circularity of effects, reverse causality (i.e. bias change causes symptom change or vice versa) and the distinct possibility of third-variable effects (i.e. another variable causing both symptom and bias changes) further confound this relationship.4 For instance, trials in which both bias and symptom outcomes were successfully modified could also be the ones with a higher risk of bias, conducted by allegiant investigators, maximising demand characteristics, or differing in other, not immediately obvious, ways from trials in which neither bias nor symptoms changed. Randomised controlled studies can only show whether an intervention to which participants were randomised has any effects on outcomes measured post-randomisation.5 Disentangling the precise components causally responsible for such effects is speculative and subject to confounding. On this point, randomised studies show that CBM has a minute, unstable and mostly non-existent impact on any clinically relevant outcomes.