How do we cope with the stress of caring for loved ones, and grieving their loss? Cameron et al1 followed up 280 caregivers of patients who had spent at least a week on mechanical ventilation in an intensive care unit. In this, the largest and longest study of its kind, almost three-quarters of the caregivers were women, and over 60% were caring for a spouse; 67% reported initially high levels of depressive symptoms, persisting in 43% a year later. Patient demographic and clinical characteristics did not consistently affect outcomes, but those of the carer did: younger age, greater impact on other activities, reduced social support and personal growth, and loss of a sense of control over one's life were all significantly associated with worse outcomes. Unpaid caregivers are estimated to save the UK economy almost £100 billion per annum, and without them the NHS would be unsustainable. The toll their efforts take can be profound, and they need to be supported.
Complicated grief is under-recognised, occurring in about 7% of bereavements. Although it is associated with low mood, it differs from depression through its core symptoms of yearning and sorrow, difficulty accepting the loss, and preoccupation with thoughts of the deceased. Shear et al2 report the first placebo-controlled randomised clinical trial on the topic: 400 bereaved adults (75% female) with complicated grief were all given psychoeducation, grief monitoring, and encouragement to engage in activities; they were also randomised to receive either flexible-dose citalopram or placebo, and half were given complicated grief therapy (CGT). Those who received the 16-week manualised therapy showed significant improvement over those who did not, including a reduction in suicidal ideation. While citalopram reduced concomitant depressive symptoms when added to CGT, by itself it was no more effective than placebo; CGT would appear to be the treatment of choice at this time.
The excitement about ketamine's potential as an antidepressant is matched only by concern about its dissociative side-effects and risk of addiction, and disappointment at the brevity of its effects. Ketamine is an antagonist at glutamatergic NMDA receptors (NMDARs), channels that allow an influx of calcium and sodium once membrane depolarisation relieves their magnesium block. Zanos et al3 show that metabolism of the common racemic mixture (R,S)-ketamine to (2S,6S;2R,6R)-hydroxynorketamine (HNK), and in particular to the (2R,6R)-HNK enantiomer, is essential for its antidepressant effects in mice. Crucially, direct administration of (2R,6R)-HNK (instead of (R,S)-ketamine or the (2S,6S)-HNK enantiomer) lacks the undesirable dissociative, anaesthetic and misuse/dependence effects associated with ketamine. Further, in contrast to the originally proposed mechanism of action of ketamine, administration of (2R,6R)-HNK produced longer-term upregulation of AMPA (rather than NMDA) receptors as well as immediate antidepressant effects. Zanos and colleagues suggest that (2R,6R)-HNK may represent a new pharmacological derivative of ketamine that is effective and safe for clinical use. One learns caution with novel antidepressant data, but two of the major problems with the first generation of glutamatergic antidepressants appear to have been overcome in this animal-model experiment.
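For readers unfamiliar with the voltage dependence mentioned above, a minimal sketch of the standard Jahr–Stevens formulation of the NMDAR magnesium block may help; the constants are the commonly quoted values from that textbook model, not figures from Zanos et al.

```python
import math

def nmda_mg_block(v_mV, mg_mM=1.0):
    """Fraction of NMDA receptor conductance unblocked at membrane
    potential v_mV, using the Jahr & Stevens (1990) formulation.
    Near rest (~-70 mV) the channel is mostly blocked by Mg2+;
    depolarisation relieves the block and permits Ca2+/Na+ influx."""
    return 1.0 / (1.0 + (mg_mM / 3.57) * math.exp(-0.062 * v_mV))

for v in (-70, -40, 0):
    print(f"{v:>4} mV: {nmda_mg_block(v):.2f} of conductance available")
```

Running this shows the available conductance rising from around 4% at rest to nearly 80% at 0 mV, which is why NMDARs behave as coincidence detectors of glutamate binding and depolarisation.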
Work-based stress causes depression, and depression causes people to stop working (absenteeism) or stop working effectively ('presenteeism'). A good occupation is a positive predictor of social, financial and health outcomes, yet of course the more senior one's position, the greater the anticipated degree of related stress. It is a surprisingly under-studied area given the scale of the issue, but Mandelli and colleagues4 stratified over 650 patients with major depressive disorder by occupational level and measured their response to treatment. Perhaps unsurprisingly, those at a high occupational level had greater academic achievement; they also showed a significantly poorer response to treatment, with reduced remission rates and greater treatment resistance than the other groups. This group contained more men, fewer individuals on selective serotonin reuptake inhibitors, and more adjunctive psychotherapy, but the study was not designed to disentangle whether the high-level group were offered different interventions or requested them.
Bringing this closer to home, how does this translate to psychiatrists? Burnout is a syndrome with three major domains: overwhelming exhaustion; cynicism and detachment from work; and a sense of lack of accomplishment and ineffectiveness. In a special article in World Psychiatry, Maslach & Leiter5 write that some aspects of psychiatrists' burnout mirror the wider literature in terms of causes and outcomes: jobs with intense interpersonal contact (our patients) are rewarding but stressful, and engender a culture of selflessness, long hours, and giving one's all to work. However, psychiatry also has some unique aspects; notably, at times we have to deal with particularly difficult, angry and violent patients and relatives. This can prove emotionally draining, make us psychologically distance ourselves from our work, and lead to poorer patient care. Despite its prevalence, there is a lack of research on interventions to prevent or manage burnout; the authors argue that psychiatry is ideally positioned to take a lead on this. We face high levels of burnout – the paper notes that 89% of psychiatrists had thought about or experienced a clear threat of severe burnout – and, as with other high-level occupations, our prognosis would appear to be poorer; we need to take care of ourselves.
The UK's National Psychiatric Morbidity Survey estimates that one-fifth of days lost from employment are due to mixed anxiety and depressive disorders. It has remained a challenge to understand how the same brain circuits create both helpful, protective anxiety and toxic fear responses. One hypothesised mechanism for maladaptive anxiety is over-activity of neural 'survival circuits' responsible for defensive responses. Writing in Nature, Tovote et al6 use in vivo optogenetic and in vitro single-cell physiological experiments to explore the architecture of these behaviours in mice. The ventrolateral periaqueductal grey (vlPAG) plays a key role in expressing them. First, the authors examined the output pathway, showing that freezing responses are expressed via excitatory glutamatergic projections from the vlPAG to pre-motor cells of the magnocellular nucleus of the medulla (Mc): optogenetic activation and inhibition of these vlPAG neurons during conditioned and unconditioned fear stimuli induced and abolished freezing responses, respectively. The inputs to the vlPAG include threat-perception signals from the central nucleus of the amygdala (CEA). Cells in the output (vlPAG–Mc) pathway appear organised as micro-circuits that selectively produce either freezing or non-freezing behaviour and are disinhibited (e.g. to allow expression of freezing) by inhibitory projections from the CEA onto local GABAergic interneurons in the vlPAG. The authors propose that this mechanism enables very rapid switching between behaviours, and suggest that future research examining reciprocal higher cortical inputs to the same circuits might explain how maladaptive anxiety can emerge in humans from these primitive circuits.
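The disinhibitory logic is easier to see in a toy steady-state rate model. This is a deliberately crude sketch of the circuit motif described above, with invented weights; it is not a reimplementation of Tovote et al's analyses.

```python
def relu(x: float) -> float:
    """Threshold-linear firing-rate nonlinearity."""
    return max(x, 0.0)

def freezing_output(cea_drive: float) -> float:
    """Toy steady-state rate model of the vlPAG disinhibitory switch.

    cea_drive: strength of inhibitory input from the central amygdala
    (0 = no perceived threat). Local GABAergic interneurons tonically
    inhibit the glutamatergic vlPAG -> Mc projection neurons; CEA input
    silences the interneurons, releasing the brake and gating freezing.
    All weights are illustrative, not fitted to any data.
    """
    tonic_interneuron_rate = 1.0
    interneuron = relu(tonic_interneuron_rate - cea_drive)  # CEA inhibits interneuron
    projection = relu(1.0 - 2.0 * interneuron)              # interneuron inhibits output
    return projection  # > 0 means freezing is expressed

for threat in (0.0, 0.5, 0.75, 1.0):
    print(f"CEA drive {threat:.2f} -> freezing output {freezing_output(threat):.2f}")
```

The motif's appeal is its switch-like behaviour: the output stays silent until the CEA input sufficiently quietens the interneuron brake, consistent with the rapid behavioural switching the authors propose.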
The celebrated American engineer W. Edwards Deming remarked: 'In God we trust. All others must bring data.' The American Statistical Association recently reported on the use (and abuse) of statistical tests, in particular reliance on the P-value, amid the reproducibility crisis in modern science. Over 80 years ago, Ronald Fisher described how P-values could be used to aid inference about a 'significant' result. Importantly, he intended them to flag findings worthy of follow-up, with the null hypothesis to be rejected only if multiple replications of the same experiment yielded similarly significant results. According to Steven Goodman,7 scientists have somehow forgotten this, and he describes a phenomenon of 'bright line thinking': if a single experiment subjected to an analysis gives a P-value below a set level of significance (e.g. the 'classic' bright line of a 5% type I error rate; equivalently, a P-value of 0.05), it is anointed as proof that the hypothesis is correct and the null hypothesis can be rejected.
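A minimal simulation makes Fisher's point concrete: even when the null hypothesis is true by construction, a single experiment crosses the bright line about one time in twenty. The scenario (two-sample t-tests on identical populations) is our illustration, not Goodman's.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 10,000 two-sample t-tests in which the null is TRUE by construction:
# both groups are drawn from the same normal distribution.
n_experiments = 10_000
false_positives = 0
for _ in range(n_experiments):
    a = rng.normal(0.0, 1.0, size=30)
    b = rng.normal(0.0, 1.0, size=30)
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1

# Roughly 5% of null experiments are 'significant' despite no real effect,
# which is why Fisher demanded replication before rejecting the null.
print(f"{false_positives / n_experiments:.1%} of null experiments crossed P < 0.05")
```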
Goodman argues this is due to inferential statistical test procedures being taught as 'anonymised procedures, universally applicable seemingly without controversies or alternatives'. Publication in prestige journals only serves to compound the problem, alongside the array of 'common' statistical computing packages that allow black-box application of tests, effectively absolving the user of understanding the assumptions, the algorithms and the contingent interpretation of the results produced. To quote another statistical heavyweight, David Cox: 'There are no routine statistical questions, only questionable statistical routines.' Goodman continues by showing that the generally accepted threshold for significance differs between disciplines: physics uses thresholds far below the biological sciences' 'default' of 0.05 (our exception being genome-wide association studies, where 10⁻⁸ is generally deemed acceptable). The solution? Goodman offers alternative interpretations of how data 'speak' to the hypotheses on offer. For example, using 'Bayes factors' the inferential question is reframed as follows: start with an a priori assignment of beliefs about the hypotheses, which depends on contextual factors such as the plausibility of the proposed mechanism or the expected/desired response rate for the study; then collect the data and measure a ratio of likelihoods describing how the data shift those a priori beliefs towards or away from the available hypotheses. Good luck trying to get that past a standard review.
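As a worked illustration of that reframing, consider a toy single-arm trial. The numbers and the two point hypotheses are invented for the example; real Bayes-factor analyses would typically compare composite hypotheses with prior distributions over the response rate.

```python
from scipy import stats

# Hypothetical trial: 14 responders out of 20 patients.
k, n = 14, 20

# Two simple point hypotheses about the true response rate:
# H0: the treatment performs at a 'chance' rate of 0.5;
# H1: it achieves the hoped-for rate of 0.7 (an a priori design choice).
lik_h0 = stats.binom.pmf(k, n, 0.5)
lik_h1 = stats.binom.pmf(k, n, 0.7)

bayes_factor = lik_h1 / lik_h0   # how strongly the data favour H1 over H0
prior_odds = 1.0                 # sceptical start: both hypotheses equally plausible
posterior_odds = bayes_factor * prior_odds

print(f"Bayes factor (H1 vs H0): {bayes_factor:.2f}")
print(f"Posterior odds given even prior odds: {posterior_odds:.2f}")
```

Here the Bayes factor of about 5 means the data have made the optimistic hypothesis roughly five times more plausible relative to the sceptical one: the data 'speak' to the hypotheses, rather than a single bright line deciding the matter.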
Finally, single men and women typically drink alcohol more frequently and in greater quantity than those who are married, but why? Social status and personality traits could affect both alcohol use and the likelihood of getting married; a selection effect means that heavy earlier-life drinking could delay marriage; and the obligations of maintaining a marriage and health-monitoring by a spouse might reduce consumption. Kendler et al8 tried to unpick these factors in a prospective cohort study of over 3 000 000 people. Marriage was associated with a significant reduction in the risk of developing an alcohol use disorder (AUD), more so for women (a 71% reduction in risk, compared with a 60% reduction for men). The confounders of socioeconomic status, criminal behaviour and drug use, and a positive family history of AUD did not alter the findings, suggesting that the direction of causality is marriage protecting against AUD; indeed, the protective effect was strongest in those with a positive family history. However, marriage to a spouse with a lifetime AUD increased the risk of developing AUD, which is especially enlightening as it suggests that it is not the social obligations of marriage itself but the health-monitoring by one's partner that is protective. The data add further support for the psychological and social benefits of marriage: perhaps time to offer advice to unmarried colleagues?