
Risk, psychiatry and the military

Published online by Cambridge University Press:  02 January 2018

Simon Wessely*
Affiliation:
King's Centre for Military Health Research, Institute of Psychiatry, King's College London, Weston Education Centre, Cutcombe Road, London SE5 9RJ, UK. E-mail: [email protected]

Summary

The relationship between combat and psychiatric breakdown has been well recognised for decades. The change to smaller, professional armed forces has reduced the risk of large-scale acute psychiatric casualties, and should have led to a corresponding decrease in long-term ill health, but this expected reduction seems not to have happened. Likewise, attempts at preventing psychiatric injury, by screening before deployment or debriefing after, have been disappointing. Three reasons for this are proposed: a rethinking of the relationship between trauma and long-term outcome, catalysed by the attempts of US society to come to terms with the Vietnam conflict; a broadening of the scope of psychiatric injury as it moved to the civilian sector; and the increased prominence of unexplained syndromes and contested diagnoses such as Gulf War syndrome. Traditional psychiatric injury is predictable, proportionate and can, in theory, be managed. These newer forms of injury are in contrast unanticipated, paradoxical, ill understood and hard to manage. Traditional approaches to risk management by reducing exposure have not been successful, and may increase risk aversion and reduce resilience. However, the experiences of civilians in wartime or the military show that people are not intrinsically risk-averse, provided they can see purpose in accepting risk.

Type: Special Paper

Copyright © 2005 The Royal College of Psychiatrists

We are becoming obsessed with risk. Use of the word itself is increasing in epidemic proportions, not only in the mass media but in the medical journals (Skolbekken, 1995). Not for nothing has Beck's Risk Society become one of the most influential contemporary social texts (Beck, 1992). Reducing risk is increasingly the purpose of public health, and indeed politics. Whenever anything is identified as a ‘risk’, it is inevitable that this is closely followed by calls to remove it. However, there remains one section of society whose raison d'être is to take risks: the armed forces. That is the nature of the military contract (Dandeker, 2001). So when men (and increasingly women) go to war, it remains the case, now as then, that some do not come back, some come back physically injured, and some come back with invisible but often equally damaging psychiatric injuries. The notion that a military operation could ever be free of physical casualties is something devoutly to be wished for but unlikely to be achieved, and so it is with psychiatric casualties.

War provides an exaggerated, perhaps extreme, version of the entire range of human experience - not just fear, hate and guilt, but also excitement, love, friendship and achievement (Bourke, 1999). There is no single ‘experience of war’, for good or ill. There are some for whom active service remains the best thing that ever happened to them, and for whom life afterwards is dull and monochrome. For many, though, especially those who are not part of modern, professional, volunteer militaries, war is not the ‘best days of their lives’, and they return hale in body, but not in mind. It is these experiences that form the first part of this paper.

PSYCHIATRIC BREAKDOWN: ACUTE AND CHRONIC

The first of my two themes is risk and psychological breakdown - what it is, why it is so difficult to prevent, but easier to manage, and why the armed forces have little to fear from psychiatry.

We know a great deal about psychiatric breakdown in battle. If you read classic accounts of military psychiatry, you will learn much about the acute psychiatric casualties of war (Belenky, 1987). Military psychiatry is based on doctrines developed and tested in both World Wars. Modern textbooks have changed little in their descriptions of the acute breakdown, the combat stress reaction or the soldier frozen with fear. Careful statistical inquiries in the Second World War related this to the intensity of fighting - the greater the number of physical casualties, the greater the number of psychiatric casualties (Jones & Wessely, 2001). Over the next half-century our basic understanding of the immediate psychiatric consequences of combat did not change much (Belenky et al, 1985).

Acute psychiatric breakdown refers to the short-term consequences, but what about the long term? Once again, acceptance of the long-term psychiatric costs of war is nothing new. The hundreds of thousands of pensions paid under the labels of ‘shell shock’, ‘effort syndrome’, ‘war neurosis’ and ‘neurasthenia’ meant that the long-term consequences could hardly be denied by later generations, even before the advent of ‘post-traumatic stress disorder’ (Jones et al, 2002). It was because both the USA and the UK began the Second World War with asylums still full of ex-servicemen, and a staggering pensions bill left over from the Great War, that they were determined to do things better this time around (Shephard, 2000).

So, despite the occasional contemporary Whiggish view of the inexorable forward march of psychiatric knowledge, there is probably little we could now teach either the Regimental Medical Officers of the First World War or the psychiatrists of the Second about the psychological effects of war. Nevertheless, something has changed. Let us imagine for a moment what the medical authorities of the two World Wars might have predicted in the way of psychiatric casualties after recent operations of the UK armed forces. Our modern, professional, volunteer military could never sustain anything remotely like the high-intensity, prolonged attritional campaigns of the Western Front, the Pacific War or the Strategic Air Offensive, and we can be thankful for that. Instead, we can be confident that, on the basis of their knowledge of psychiatric casualties in either World War, the doctors of the Great War or the Second World War would not have anticipated many psychiatric casualties during most recent deployments, and judging by the paper by Turner et al (2005, this issue) they would have been right. Furthermore, on the basis of their own observations, confirmed by later careful long-term follow-up studies of war veterans from the USA and Israel, they would have predicted that those who stayed well in the short term were likely to stay well in the long term (Solomon, 1989; Lee et al, 1995). The best predictor of long-term ill health was acute ill health during conflict.

However, these assumptions would have been only half correct. Evidence from the Falklands conflict, the Persian Gulf War and the opening phase of the Iraq war suggests that classic psychiatric casualties - ‘combat stress reactions’ as we now call them - have indeed been relatively few, and have created little in the way of operational difficulties; but it is the apparent long-term consequences of recent operations that would have been both a surprise and a puzzle to our predecessors. For example, as I write this only a few weeks after President Bush declared active hostilities ‘over’, American newspapers are predicting that up to 25% of their military personnel in Iraq will become victims of post-traumatic stress disorder (PTSD), despite the fact that casualties during the invasion were remarkably few and the victory overwhelming. Perusal of some of the recent British media might lead to similar conclusions.

What has changed is the expected link between short-term and long-term outcomes. It no longer seems to be the case that the level of short-term acute psychiatric casualties is a good guide to long-term consequences. At the heart of this change has been a fundamental shift in contemporary formulations of why some people do not seem to recover from the acute psychiatric injuries of war. For the first half of the 20th century it was assumed that if you broke down in battle, and the cause was indeed the stress of war, then your illness would be short-lived - and if it was not, then the cause of your ill health was not really the war at all, but events before you went to war. At the risk of oversimplification, if you belonged to the dominant school of psychiatric thinking from the latter half of the 19th century to the latter half of the 20th century, then the reason was hereditary. This could be expressed in terms of ‘degeneration’, which gave way to genetic concepts, but it was your constitutional inheritance that determined most psychiatric disorders other than the transient. In apparent contrast, Freud and the founders of psychoanalysis said that the cause was your parents and the way they treated you in your first few months and years. Either way it was much the same - your cards were marked, and well marked, long before you joined the Services. In war eventually every man had his breaking point, but if you broke down and never recovered, then the real cause was not the war, but either your genetic inheritance or your upbringing. The war was merely the trigger. This general view held good for the first half of the century, began to be eroded by the literature on concentration camp survivors, but was not fundamentally challenged until the Vietnam War.

VIETNAM AND THE COMING OF POST-TRAUMATIC STRESS DISORDER

It is hard for us, knowing what we do now, to appreciate that for a short time the Vietnam War was regarded as a psychiatric success story. As Albert Glass, the most influential military psychiatrist of the post-1945 period, wrote:

‘according to authoritative reports, military psychiatry in the Vietnam conflict achieved its most impressive record in conserving the fighting strength’ (Glass, 1974).

Psychiatric casualties were ‘surprisingly low’ (Bey, 1970). Casualties were, reported another psychiatrist, ten times lower than in the Second World War and three times lower than in Korea; a third claimed they were lower than ‘any recorded in previous conflicts’ (Bey, 1970; Bourne, 1970). Likewise, the implementation of forward psychiatry created the ‘impression that psychiatric casualties were rarely produced by the unique nature of combat in Vietnam’ (Glass, 1974), while ‘psychiatric casualties need never again become a major cause of attrition in the United States military in a combat zone’ (Bourne, 1970). It is possible, as Ben Shephard argues, that these accounts were self-serving (Shephard, 2000). There is also evidence that substance misuse and behavioural problems were rife even in the early days of the conflict (De Groot, 2000). Nevertheless, standard psychiatric doctrine would have predicted that these problems would not be on the scale seen in previous wars, and should not have given rise to what was reported by Lifton, Shatan and others.

However, as the war drew to its unsatisfactory (for the USA at least) close, and the soldiers started to come home, the picture changed dramatically. By the 1970s the Vietnam veteran came increasingly to be seen as a major social problem - alienated, abandoned, disturbed by nightmares of atrocities seen and committed, out of control, violent, suicidal and a social time bomb. To explain this phenomenon psychiatrists rapidly introduced a new condition into the psychiatric lexicon - the diagnosis of post-traumatic stress disorder (PTSD).

So what was new about PTSD? That war could lead to large numbers of mentally ill soldiers was not news; but, first, the existing doctrines said confidently that it should not have happened after Vietnam, since standard teaching linked the number of acute psychiatric casualties to the number of chronic casualties: if you ended the war mentally unscathed, you were likely to stay that way. Second, doctrine taught that if you did develop long-term psychiatric disorder, then the war was only the trigger, not the real cause. However, the formulators of PTSD accepted neither proposition. They believed, for honourable reasons, that the war was unquestionably to blame: it was an insane, unpopular and unjust conflict, and the US Vietnam veterans were as much its victims as the Vietnamese civilians.

The cause of PTSD was the ‘T’, the trauma. Both the attraction and the danger of this concept lay in its simplicity - here at last was a psychiatric disorder with a simple cause: adult trauma. We could dispense with all the difficult business of heredity, upbringing and so on, and concentrate on the matter in hand - the experience of Vietnam. In fact it was too simple, and many soon realised that the individual's predisposition, the bag and baggage that one brought to military service, continued to have an important role, especially when rates and intensity of trauma were relatively low. Nevertheless, it would take many years before people began to accept that a major cause of the Vietnam veteran problem lay not solely in the jungles of Vietnam, but also in the social climate of an America that was turning against the war in particular, and the military in general (Scott, 1993; Wessely & Jones, 2004). Indeed, one reason for the modest (to put it kindly) successes of the vast and costly programme of psychological treatments for Vietnam veterans may be that it was rooted too much in the jungles of Vietnam, and paid too little attention either to contemporary American culture or to the iatrogenic role of the government's response (Shephard, 2000; Johnson, 2004).

THE RISE OF THE CULTURE OF TRAUMA

Moving on to the present, is the British military really now facing an epidemic of PTSD? The answer is probably not. Our studies, for example, showed a threefold increase in the rate of PTSD in sick veterans of the 1991 Gulf War, but only from 1% to 3% (Ismail et al, 2002). This is a significant increase, but it remains the case that 97% of the unwell group did not fulfil criteria for PTSD. Clearly this is nothing like enough to explain the substantial increase in subjective ill health that we and others have confirmed in the aftermath of that conflict (Unwin et al, 1999). Nor is PTSD even the main mental health problem facing the armed forces - depression and alcohol misuse are more common (Rona et al, 2004). I suspect that future research will suggest that overstretch and the increasing number of deployments, with their adverse effect on family life and well-being, are a more potent cause of mental health problems than conventional psychiatric injury. Likewise, alcohol culture and availability may pose more problems than PTSD.

Yet even if there has been no real epidemic of PTSD in the British armed forces, reading the media might suggest otherwise, and there has certainly been an epidemic of stories about PTSD. The Vietnam veteran story did play a significant part in one undeniable development - the reawakening of interest in trauma and its psychological consequences across Western society. However, Vietnam was not the only reason for this. As social commentators never tire of telling us, the 1960s were marked by major shifts in social values. One of the key changes relevant to our story is the shift from the community or group values that had shaped the war years to a society that increasingly valued the individual over the group. Views as to how one should deal emotionally with adversity also changed - from a belief in the importance of reticence and emotional restraint, to one that encouraged emotional expression.

There is no simple right or wrong answer as to how we should manage our emotions. Emotional responses, like everything else, are subject to fashion, and fashions change. During the 1960s and beyond, the ‘stiff upper lip’ was satirised by Beyond the Fringe and Monty Python, whereas more recently emotional expression has been encouraged and rewarded, until we reach the reductio ad absurdum of Jerry Springer and the talk-show culture. Talking about yourself, and the bad things that may have happened to you, is now the fashion (Furedi, 2003).

Some have claimed that trauma and its consequences have become more common because of the changing nature of modern life, but this seems unlikely. What has happened has been a widening of the boundaries of psychiatric injury. In its initial formulation PTSD could be diagnosed only after situations that were genuinely threatening to life and limb, but with every further iteration of the diagnostic criteria this has been broadened to include situations in which people felt that they were in peril, even if they were not, and, finally, to any adverse experience, which can include viewing the attack on the New York World Trade Center on television, receiving a medical diagnosis or even normal experiences such as childbirth. The diagnostic label of PTSD has become a shorthand for all distress, and as it has moved from its initial rigorous formulation in the military context into the civilian sector it has become inflated. In consequence we all have our favourite ‘stupid stress stories’, reported with glee by the right-wing media. Damages for post-traumatic stress have been awarded for the trauma of receiving a strippergram, spilling tea (Daily Mail, 4 November 1998), watching a stranger have an epileptic fit in the street (Daily Telegraph, 9 September 2002) or owning a ‘mentally stressed’ racehorse (Daily Mail, 6 July 2002) - and many more. These stories can be amusing, and serve as grist to the mill of the anti-political-correctness lobby. But they are also harmful, because these silly ‘I tripped over a paving stone and am now suing for PTSD’ stories trivialise the genuine narratives of psychiatric distress and disorder, such as that of Falklands veteran Simon Weston, who has movingly described his struggles to come to terms with not just his physical disability but his psychological scars as well. The inflation of PTSD has led to its increased acceptance by society, but as Chancellors of the Exchequer are always telling us, inflation leads to devaluation.

PTSD AND THE MYTHS OF PREVENTION

The seductions of screening

Even if it is not as common as some believe, PTSD (like all psychiatric disorders) is bad news if you develop it. Because it seems so obvious that prevention is better than cure, the cry for better prevention has gone up after every conflict of the past century. Perhaps the most appealing strategy involves screening those at risk before they are exposed to adversity. If we could know who was going to break down in battle, we could screen them out beforehand. This would give us a stronger military, and be better for the service men and women themselves, their families and the Chancellor. The historical record is full of pleas from those having to command men in battle, imploring those responsible for selection to do a better job (Jones, E., et al, 2003). My favourite is quoted in Ben Shephard's classic account of psychiatrists at war (Shephard, 2000): a signal sent back to the War Office by a senior officer in the Eighth Army in Egypt in 1942, begging them not to send him men who ‘can't stand the brothels of Cairo, let alone the Afrika Korps’.

One answer seems to be mass psychological screening. Back in the Second World War the Americans - as optimistic then as they are now - believed that they could identify those who were going to make bad soldiers and future psychiatric cases. They enlisted the enthusiastic help of the best psychiatrists in the land, led by Harry Stack Sullivan, one of the most famous psychiatrists of the mid-20th century. The psychiatrists gave their all for the war effort, removing over 2 million men from the draft on the basis of personality testing that predicted future breakdown (Jones, E., et al, 2003). However, the Americans nearly lost the war in consequence: by 1944, when no less a person than George C. Marshall called a halt, they were running out of men (Ginzberg, 1959). Many of those previously rejected on psychiatric grounds were then enlisted - a vast natural experiment. To everyone's surprise, studies showed that most made perfectly good soldiers. Some broke down, proportionately more than those who had passed the screening - the psychiatrists were not totally wrong - but up to 85% made perfectly adequate soldiers (Aita, 1949).

There were many reasons why screening for psychological vulnerability to breakdown before deployment failed then, reasons which remain fundamentally unchanged to the present day. A major risk factor for breakdown is experiencing a traumatic event - but at the point of screening that has not yet happened (and may never do so), so pre-deployment screening is deprived of the single best predictive factor. What remains is a collection of risk factors which, although statistically significant, are all relatively weak individual predictors of future breakdown (Brewin et al, 2000). Furthermore, excluding people who have these risk factors (coming from a single-parent family, having a family history of psychiatric disorder, a poor school record and so on) would have many untoward consequences. Denying military service to people with these risky backgrounds would clearly have a serious effect on recruitment, especially for the army, which traditionally recruits from areas of social disadvantage. It would also deny some of the social goals and benefits of military service - giving people from disadvantaged backgrounds a chance to learn a skill and gain self-respect.

Labelling people as potentially psychologically unstable, before anything has happened to prove that label correct, is also not without risks. It changes people's views of themselves in unpredictable ways, and exposes them to stigma. The American experience showed that some of those denied the opportunity to serve their country because of concerns about their psychological stability returned to their home communities to face shame and ridicule.

In conclusion, the case for psychological screening is difficult to make. It is hard to see how a psychological screening programme for the armed forces could ever fulfil the criteria that the National Health Service (NHS) insists upon before introducing any new screening programme, and indeed, in the recent seminal PTSD judgment in favour of the Ministry of Defence, Mr Justice Owen came to the same conclusion (Multiple Claimants v. The Ministry of Defence, 2003). Nevertheless, as I write, voices are again raised calling for psychological screening in the military. This time it is not to prevent breakdown in battle, but to prevent suicide during military service. However, the arguments against this are, if anything, even more compelling than the arguments against screening to prevent breakdown in battle. Suicide during military service is rare, and like all rare events, almost impossible to predict. Once again, it is loosely associated with variables indicating social disadvantage that are common in military recruits. A major risk factor that is not amenable to screening is the availability within the military of the means of suicide - firearms. Rather than concentrating on excluding people from risky backgrounds from joining the armed forces, a more sensible strategy might be to increase the support they receive in service.

The disappointments of debriefing

If screening does not work, there is still much that can be done to reduce the risk of psychiatric breakdown before people go into battle. Men fight for their friends, and the best protectors against breakdown in battle are group cohesion and bonding (Shils & Janowitz, 1948; Palmer, 2003). Morale, leadership, good equipment and training are all relevant. None of this is news, and little of it has much to do with psychiatry. But what about after deployment, after people have been exposed to unpleasant sights or dangerous situations? Just as with screening, the idea that immediate psychological intervention could prevent later breakdown sounds intuitively appealing, and has had numerous supporters over the years. However, just as the negative experience of psychological screening during the Second World War should give us pause for thought, the example of psychological debriefing provides another cautionary tale.

Most people will be familiar with the concept of single-session psychological debriefing. This is an intervention led by a mental health professional carried out with people (individually or in groups) shortly after they have been exposed to some form of adversity. The procedure involves some element of telling the story of the event, asking how people felt emotionally during the event and now, and teaching about likely further emotional reactions over time. Its purpose, enthusiastically proclaimed by its protagonists, is to prevent later psychiatric disorder such as PTSD.

In our contemporary culture, the arrival of what the media inevitably call ‘trained counsellors’ has become as much a part of the theatre of disaster as the arrival of the emergency services. It has become part of the social recognition of disaster, and of our collective desire that ‘something must be done’ (Gist, 2002). The problem is that to date research has failed to show any benefit from single-session psychological debriefing (Wessely & Deahl, 2003), and indeed there is evidence that it may increase the risk of subsequent psychological disorder (Emmerik et al, 2002). There are many reasons for the ineffectiveness and possible adverse effects of debriefing. I favour the view that it impedes the normal ways in which we deal with adversity - talking to our friends, family, general practitioner, the padre and so on - and instead professionalises distress.

So the debriefing saga is a warning against the naïve belief that we can prevent - and I emphasise the word ‘prevent’ - the psychological consequences of trauma. Prevention, as opposed to treatment, does not work.

So, to conclude about psychiatric injury and risk: the only certain way of preventing PTSD and psychiatric injury is not to send people to war. All else is speculative, uncertain or even erroneous. When people do develop psychiatric disorders, however, we can and should do better - I use the word ‘we’ advisedly, since, as shown by Iversen et al (2005, this issue), the main problems of care arise when veterans have left the armed forces and returned to NHS care.

Contrary to the views held in some quarters, it is wrong to say that the military know nothing and do nothing about psychiatric injury. The military have an enviable record of innovation in psychiatry - it was military psychiatry that initiated group psychotherapy (Harrison & Clarke, 1992). Likewise, modern community care and assertive outreach began with the military doctrine of ‘proximity, immediacy and expectancy’ that remains the standard management of combat stress, and which gave the intellectual stimulus to crisis intervention (Artiss, 1997). Psychiatric injury and its management are not new territory for the armed forces. They pose certain problems, but these are neither unfamiliar, unpredictable nor beyond comprehension.

THE SYNDROMES ARE COMING

If psychiatric injury is, to coin a phrase, nothing to be afraid of, the same is not true of my next examples. This is the area of risk that really does at times appear inexplicable and baffling. It is the world of unexplained symptoms and syndromes, exemplified in the military context by the story of the so-called Gulf War syndrome (Wessely, 2001). (The term ‘Gulf War syndrome’ is strictly speaking a misnomer, since there is no compelling evidence of a constellation of signs or symptoms uniquely associated with Gulf service. The correct term should be ‘Gulf War illness’ or ‘Gulf War illnesses’, but it is ‘Gulf War syndrome’ that has entered the lexicon.) Some time after the end of hostilities in the 1991 Gulf War, reports started to emerge in the USA, and subsequently the UK, of service men and women coming forward with inexplicable health complaints. These did not constitute any recognised medical condition, but were instead a collection of diverse symptoms such as overwhelming fatigue, concentration difficulties, generalised pain and malaise, and problems with memory, among many others. At the same time, Gulf veterans who had fathered children with congenital disabilities blamed this too on their military service. Numerous causes were advanced in the media, ranging from smoke from oil fires, use of pesticides, exposure to depleted uranium, new infections, reactions to the vaccination programmes used to protect against biological warfare and medications given to protect against chemical warfare, to exposure to nerve agents themselves.

This is not the place to analyse the growing literature on Gulf War illness (see Barrett et al, 2003). However, it is fair to say that no single cause, and no pathological process, has been found to explain the problem - and a problem it undoubtedly is. Up to 20% of the UK armed forces deployed to the Gulf have increased health complaints, and similar numbers believe themselves victims of this mysterious syndrome (Chalder et al, 2001; Cherry et al, 2001).

Gulf War syndrome is not, however, a problem unique to the military. Its symptoms overlap with those of numerous other similar syndromes, such as multiple chemical sensitivity, dental amalgam syndrome, repetitive strain injury, total allergy syndrome, sick building syndrome and many others. Many of these are likewise blamed on possible environmental hazards that are difficult to assess or quantify, such as low-level radiation, chemicals, food additives, pesticides and pollution (Aceves-Avila et al, 2004). It is these associations with controversial and unwelcome features of our environment and technology that have led to the proposal that these syndromes should be labelled ‘illnesses of modernity’ (Petrie & Wessely, 2002).

RISKS: PERCEPTIONS AND PARADOXES

New syndromes such as those described above make a little more sense if we consider contemporary health concerns, and the explanations that people give for illness. The health concerns of the public are not the same as the health concerns of doctors and scientists. As good doctors, we try hard to convince people not to smoke, to drink less, drive more slowly and eat more vegetables, but it is an uphill struggle. Public health physicians plod on, because they know these are the real risks to health and survival. Sadly, the public remains fairly unwilling to do much about them, and rather unconcerned when all is said and done. None of this is surprising, because the public does not rate risks in the statistical way that scientists do. For a scientist, something that kills 100 people a year is twice as risky as something that kills 50 people a year: twice as dangerous, twice as bad. This is simple, statistical, and almost completely misses the point. The public judge risk by other criteria, in which statistics play a relatively small part. For example, did I accept the risk voluntarily, when I chose to smoke or drive too fast, or was it outside my control? Invisible risks - viruses, chemicals, radiation - are scarier than visible ones, and are associated with particular dread. Unnatural risks rate higher than natural ones: although many people have died from floods in the UK - let alone the world - far more column inches and campaign hours are devoted to the threat from nuclear power stations, which have yet to cause a single death in the UK.

People are also more prepared to accept risks if they perceive some individual benefit to themselves from taking the risk. In Britain, the government has been unable to persuade the public that genetically modified foods offer any benefit to our society (as opposed to developing countries). In contrast, despite all the media attempts to generate mobile telephone scares, people still accept this risk (if there is one) because the benefits are so obvious. Hence we have the strange situation of the Stewart Committee, which, although it found no evidence that mobile phones were a health hazard, recommended restricting use by children ‘as a precaution’ (Independent Expert Group on Mobile Phones, 2000). As anyone with adolescent children will know, never was government advice so openly ignored.

People worry about risks because of factors other than statistics. In the UK, it is not smoking, obesity, poor diet, speeding and lack of exercise that are associated with popular concerns and outrage. It is issues such as landfill sites, chemicals, food additives, silicone breast implants, dental amalgam, low-level radiation, childhood inoculations and so on. These are the risks, some of them more virtual than real, that make the media excited, the public worried and the politicians perplexed.

All of this matters. People's appraisals of risks, their concerns, directly affect their health. We know that the greater the degree of worry shown by a person about the potential effects of, for example, living near a landfill site, the greater the number of symptoms reported (Roht et al, 1985). There is also compelling evidence from a prospective New Zealand study led by the psychologist Keith Petrie (Petrie et al, 2005). He had advance warning of a plan to eliminate a particular pest, the painted apple moth, by spraying some Auckland suburbs with pesticide. Before the spraying took place, he asked a large sample of residents about their particular concerns about health and the environment. The spraying then went ahead, and he repeated the study, looking at how people had been affected by the spray. What he found was that the more concerns people had registered about, for example, genetically modified food, mobile phone masts or food additives before the spray, the more symptoms they reported afterwards. They even reported more health problems in their pets. So what we think of our environment, and the explanations we give for our symptoms, matter, and affect how we will react when exposed to these agents. Remember, if the effects of the pesticide were solely toxicological, then beliefs should make no difference. Once you have taken the decision to smoke, your risk of developing cancer is unaffected by your views on the link between smoking and cancer, or by the fact that your Uncle Albert smoked 60 a day and still reached his 100th birthday.

None of this is surprising. Much of the public shares concerns about the quality of our food, water and air. Many support the efforts of organisations, especially non-governmental organisations, to improve our environment, and share those organisations' views about the links between our environment and health. But taken overall, and in historical context, this seems baffling and paradoxical. In Westernised countries we now live longer and are healthier than in any other period of human history. Our environment - the air we breathe, the food we eat, the water we drink - bears little relationship to that of a hundred years ago, testament to a century of extraordinary successes in public health. Yet this is not reflected in self-rated health: we complain of more symptoms, spend more days in bed and rate our health as worse than we did 40 or even 80 years ago (Verbrugge, 1984; Shorter, 1992). This has been aptly described as the paradox of health (Barsky, 1988).

Our current concerns about the quality of our food or water seem to have become disconnected from the real advances that have been made. Some idealists look back nostalgically to a period when our food was ‘natural’ and free from contamination, before the rise of the food industry and mass farming; but any reading of classic descriptions of working-class life in London or industrial Salford in the 19th century would serve as an antidote to over-romantic readings of history. Back then our food, air and water really were toxic. Victorian food was grossly contaminated - strychnine in rum, copper sulphate in pickles and preserves, lead in mustard, ferrous sulphate in tea and beer, lead and mercury in sugar and chocolate. A Punch cartoon of 1855 shows a little girl approaching a grocer and saying, ‘If you please, sir, mother would like a pound of tea to kill the rats with, and an ounce of chocolate to get rid of the beetles’ (Dalrymple, 1998).

So the undeniable changes in all objective indices of health do not seem to have been mirrored in a collective increase in subjective health and well-being - rather the opposite. The increased tempo of regulation exemplified by the ‘precautionary principle’ has not been reflected in increased public well-being, confidence or reassurance. Instead, as numerous commentators have noted, excessive regulation, coupled with a media that seems to thrive on a diet of health-scare stories, leads to the danger that we are worrying ourselves sick.

THE MILITARY: ACCEPTABLE AND NON-ACCEPTABLE RISKS

So far I have been considering the position of civilian society, but there is little reason to suspect that things are different for the military. We know that the military accept certain risks and hazards in which they see a purpose - serving members of the armed forces make it clear that they accept the risks of war that go with the job, and hence the chance of physical and even psychological injury. Like civilians, the military seem accepting of other risks over which they feel they have a choice - such as driving or sports injuries, a perennial cause of serious injury and staffing difficulties. These types of risk are clear, and associated with a greater burden of morbidity and mortality than any of the hazards that have been linked with (for example) Gulf War syndrome, yet it is the latter that dominate the media columns.

I suggest four possible reasons for this. First, these risks are similar to those that are already known from the civilian literature to score high on the measures of risk perception already considered. Second, these apparently new risks are not seen as part of the traditional military contract. Third, there are questions about fairness and equity. Finally, we cannot ignore the growing problem of mistrust of all institutions, particularly those with military connections.

The first reason that might help us to understand the emergence of ‘Gulf War syndrome’ is the link between the potential hazards blamed for the syndrome and the health concerns of non-military populations. Concerns about the effects of smoke from the oil fires burning in Kuwait, even though these have not been substantiated, mirror civilian concerns about air pollution and quality. Concerns about the use of organophosphate insecticides during the Gulf campaign have direct civilian counterparts, going back to Rachel Carson's seminal book Silent Spring (Carson, 1962) and the beginnings of the ecology movement. Given the continuing crisis in the UK over the measles, mumps and rubella (MMR) vaccine, one does not need to labour the overlap between civilian and military concerns about vaccination. Another source of anxiety and column inches is the use of depleted uranium munitions. The main hazard of exposure (assuming that one survives the actual impact) comes not from its modest radioactive properties but from its being a heavy metal: the risks from depleted uranium fragments are closer to those from lead than from plutonium (Fulco et al, 2000). The high level of public and media concern may therefore stem not from its properties as a heavy metal, but from its lexical links to radiation, conjuring up images of Hiroshima and Chernobyl, and thus scoring as high as one can get on measures of risk perception.

There is a second reason why the military find these hazards so problematic. These ‘toxic’ risks are not what service men and women signed up for; and it is worse if the risks appear to be self-inflicted - hence the anxiety and distrust over the use of medical countermeasures such as pyridostigmine or biological warfare vaccines, or over the side-effects of our own use of depleted uranium munitions. These are the medical equivalents of ‘friendly fire’, itself an emotive issue with great resonance for the armed forces.

Third, we already know that risk perception and tolerance are linked to questions of equity. Risks that are equally distributed across the population are seen as less problematic than those that affect a small group, especially if that group is seen as disadvantaged. During the 2001 anthrax crisis in Washington, DC, there was a perception that officials reacted more vigorously to the threat to Congress than to the threat to the postal workers, who were more likely to come from disadvantaged ethnic minorities. The consequences of that misjudgement are still being felt. Turning to the military, the UK and the USA no longer have citizen armies based on national service or conscription. Consequently, both the British and American militaries contain an over-representation of people from disadvantaged backgrounds and regions. This is in contrast to the Second World War, when one could argue that all social classes were equally exposed to danger, both in the military and in the civilian sector. What is striking about the seminal long-term studies of the outcome of combat performed by George Vaillant on the Harvard class of 1942 (Lee et al, 1995) is that nearly all of that undergraduate class, drawn from the most privileged in American society, joined the armed services, and two-thirds of them served overseas, most seeing combat. The lack of parallels with the present is clear: exposure to risk is no longer equitable.

Finally, all of these narratives take place in a society that has become less accepting of authority or expertise, and less deferential. The legacy of episodes perceived as examples of official denial or less than full disclosure, such as Agent Orange or the side-effects of the nuclear test programmes of the 1950s, is that the public and the rank and file of the armed forces are less likely to accept official reassurance, and more likely to believe information obtained from the internet, irrespective of its scientific merit. This general loss of trust in institutions amplifies risk concerns and risk awareness across society (Slovic, 1999).

RISKS: PROPORTIONAL AND NON-PROPORTIONAL

The military have little to fear from acknowledging the reality of psychiatric injury. Understanding it better, and accepting it more sympathetically, poses no danger to them, provided that it is managed within the context of military culture, and that they do not heed the siren voices who claim that stress can be avoided or prevented, as opposed to managed. The Ministry of Defence fought and won the massive PTSD legal case on the basis that it is utopian to believe that stress can ever be eliminated from a military organisation. Indeed, elimination would be undesirable: the military deliberately stretch and test people because war is a stressful business, and it is best to come prepared.

However, things are not perfect, and one thing the armed forces can do better is to promote a climate in which people will come forward and declare that they are having problems - stigma remains a serious issue. The current initiative launched within the Royal Marines to encourage peer-group support (Trauma Risk Management, TRIM) might have a role here (Jones, N., et al, 2003), provided we remember the cautionary tale of debriefing: no matter how intuitively appealing an intervention seems, there is no substitute for sound evidence of efficacy. In the meantime, we need to improve the availability and acceptability of services for those with psychiatric problems after they leave the armed forces.

I believe that none of this will weaken the fundamental purpose of the armed forces: fighting and winning wars. What the military should be worried about, however, and what may reduce their operational effectiveness, is the wider risk-averse culture that is now so entrenched in the civilian world. We have as a society become too risk-averse, terrified of our shadows, able to contemplate a measles epidemic that will kill children because of fears of a vaccine that does not. If the armed forces embrace a similar risk-averse culture, fuelled by rumour and anecdote, the consequences could be severe. This is because there are fundamental differences between the psychiatric and non-psychiatric risks that I have been considering. Psychiatric injuries are proportionate to risk, since there is some relationship between exposure and outcome. Furthermore, we have a reasonable, if not perfect, understanding of why psychiatric injury occurs, and some idea of what to do when it does. But the new ‘modern’ risks outlined above are more difficult: there are few simple links between exposure and outcome, the mechanisms involved are either obscure or occasionally non-existent, and we have little idea of what to do about them. Indeed, because we do not understand these new risks, our approach tends to be based on precaution, which may only further increase our anxieties.

The precautionary approach, currently the accepted doctrine for managing these small risks, seems to be failing. People do not appear to be reassured by ever more draconian measures to reduce ever smaller risks. The consequence seems to be increased, not reduced, anxiety. There are always more things that might cause cancer and more things to scare us, rendering us blind to the real situation: that we have never lived longer, or been safer. Clinical psychology has established that reassuring an excessively anxious person not only fails, but is counterproductive (Warwick & Salkovskis, 1985). Perhaps the same applies to populations as well (Durodie & Wessely, 2002).

FROM RISK AVERSION TO RESILIENCE

Is this precautionary trend unstoppable? Not necessarily, because one piece of the jigsaw is missing. A glance at history will confirm that people are not intrinsically risk-averse, provided that they are given reasons why they should accept the risk. The record of populations under extreme stress provides numerous examples of resilience in the face of adversity. Our own work on psychological reactions to the London Blitz and the absence of widespread public panic confirms one well-known example (Jones et al, 2004); Thomas Glass's appraisal of the evacuation of the World Trade Center in New York is another (Glass & Schoch-Spana, 2002). It seems clear that people can behave with great resilience, even heroism, in circumstances in which experts beforehand had predicted mass panic and civil breakdown. One reason may be that people can see a wider purpose to accepting these risks, and also become active participants in the process. During the Second World War the vast majority of the British public participated voluntarily in the war effort in some shape or form (Jones et al, 2004).

In contrast, if all the authorities can offer is safety for its own sake, in which the only purpose of risk management is to reduce risk, then such measures will not only fail but may generate greater anxiety rather than greater reassurance. Maintaining population resilience is not simply a matter of reducing risk: safety first is not enough. People need to know that there is a wider purpose to accepting risk. Public health measures that are based solely on fear, on alarming the public, rarely work, and even if they remove one source of anxiety, seem merely to store up trouble for the next. The challenge is to find a positive agenda of engagement that is based on more than simply reducing risk. The goal of a risk-free society, let alone risk-free armed forces, is unachievable and probably unpalatable; but at present that seems to be the only purpose of policy, which lacks any vision other than precaution. ‘Better safe than sorry’ may seem sensible, but the danger is that we will end up no safer, and a lot sorrier.

Acknowledgements

Numerous people have shaped my views on risk, psychiatry and the military over the years. A few of them might be appalled to learn this. Others may detect traces of their own better-articulated views in this paper - imitation is the sincerest form of flattery. My thanks therefore to Christopher Brewin, Christopher Dandeker, Martin Deahl, William Durodie, Craig Hyams, Edgar Jones, Leigh Neal, Rick McNally, Ian Palmer, Keith Petrie, Sally Satel, Ariel Shalev and Ben Shephard. Most of all I am fortunate to have worked with, and to continue to work with, a remarkably talented group of researchers at King's College London.

References

Aceves-Avila, F., Ferrari, R. & Ramos-Remus, C. (2004) New insights into culture driven disorders. Best Practice and Research in Clinical Rheumatology, 18, 155–171.
Aita, J. (1949) Efficacy of brief clinical interview method in predicting adjustment: 5 year follow-up study of 304 Army inductees. Archives of Neurology and Psychiatry, 61, 170–178.
Artiss, K. L. (1997) Combat psychiatry: from history to theory. Military Medicine, 162, 605–609.
Barrett, D., Gray, G., Doebbeling, B., et al (2003) Prevalence of symptoms and symptom-based conditions among Gulf War veterans: current status of research findings. Epidemiologic Reviews, 24, 218–227.
Barsky, A. (1988) The paradox of health. New England Journal of Medicine, 318, 414–418.
Beck, U. (1992) Risk Society: Towards a New Modernity (trans. Ritter, M.). London: Sage.
Belenky, G. L. (1987) Varieties of reaction and adaptation to combat experience. Bulletin of the Menninger Clinic, 51, 64–79.
Belenky, G. L., Noy, S. & Solomon, Z. (1985) Battle stress. Military Review, 29–37.
Bey, W. (1970) Division psychiatry in Vietnam. American Journal of Psychiatry, 127, 146–150.
Bourke, J. (1999) An Intimate History of Killing. London: Granta.
Bourne, P. (1970) Military psychiatry and the Vietnam experience. American Journal of Psychiatry, 127, 481–488.
Brewin, C., Andrews, B. & Valentine, J. (2000) Meta-analysis of risk factors for posttraumatic stress disorder in trauma-exposed adults. Journal of Consulting and Clinical Psychology, 68, 748–766.
Carson, R. (1962) Silent Spring. Reprinted 2000. Harmondsworth: Penguin.
Chalder, T., Hotopf, M., Hull, L., et al (2001) Prevalence of Gulf war veterans who believe they have Gulf war syndrome: questionnaire study. BMJ, 323, 473–476.
Cherry, N., Creed, F., Silman, A., et al (2001) Health and exposures of United Kingdom Gulf war veterans. Part 1: The pattern and extent of ill health. Occupational and Environmental Medicine, 58, 291–298.
Dalrymple, T. (1998) Mass Listeria: The Meaning of Health Scares. London: Deutsch.
Dandeker, C. (2001) On the need to be different: military uniqueness and civil–military relations in modern society. RUSI Journal, 146, 4–9.
De Groot, G. (2000) A Noble Cause? America and the Vietnam War. Harlow: Longman.
Durodie, W. & Wessely, S. (2002) Resilience or panic: the public's response to a terrorist attack. Lancet, 360, 1901–1902.
Emmerik, A., Kamphuis, J., Hulsbosch, A., et al (2002) Single session debriefing after psychological trauma: a meta-analysis. Lancet, 360, 736–741.
Fulco, C., Liverman, C. & Sox, H. (eds) (2000) Gulf War and Health. Vol. 1: Depleted Uranium, Sarin, Pyridostigmine Bromide, Vaccines. Washington, DC: Institute of Medicine.
Furedi, F. (2003) Therapy Culture: Cultivating Vulnerability in an Anxious Age. London: Routledge.
Ginzberg, E. (1959) The Lost Divisions. New York: Columbia University Press.
Gist, R. (2002) What have they done to my song? Social science, social movements and the debriefing debates. Cognitive and Behavioral Practice, 9, 273–279.
Glass, A. (1974) Mental health programs in the Armed Forces. In American Handbook of Psychiatry (ed. Caplan, G.), pp. 800–809. New York: Basic Books.
Glass, T. & Schoch-Spana, M. (2002) Bioterrorism and the people: how to vaccinate a city against panic. Clinical Infectious Diseases, 34, 217–223.
Harrison, T. & Clarke, D. (1992) The Northfield experiments. British Journal of Psychiatry, 160, 698–708.
Independent Expert Group on Mobile Phones (2000) Mobile Phones and Health. London: IEGMP.
Ismail, K., Kent, K., Brugha, T., et al (2002) The mental health of UK Gulf war veterans: phase 2 of a two-phase cohort study. BMJ, 325, 576–579.
Iversen, A., Dyson, C., Smith, N., et al (2005) ‘Goodbye and good luck’: the mental health needs and treatment experiences of British ex-service personnel. British Journal of Psychiatry, 186, 480–486.
Johnson, A. (2004) Long-term course of treatment-seeking veterans with posttraumatic stress disorder. Journal of Nervous and Mental Disease, 192, 35–41.
Jones, E. & Wessely, S. (2001) Psychiatric battle casualties: an intra- and inter-war comparison. British Journal of Psychiatry, 178, 242–247.
Jones, E., Palmer, I. & Wessely, S. (2002) War pensions (1900–1945): changing models of psychological understanding. British Journal of Psychiatry, 180, 374–379.
Jones, E., Hyams, K. & Wessely, S. (2003) Screening for vulnerability to psychological disorders in the military: an historical inquiry. Journal of Medical Screening, 10, 40–46.
Jones, E., Woolven, R., Durodie, W., et al (2004) Public panic and morale: a reassessment of civilian reactions during the Blitz and World War 2. Journal of Social History, 17, 463–479.
Jones, N., Roberts, P. & Greenberg, N. (2003) Peer-group risk assessment: a post-traumatic management strategy for hierarchical organizations. Occupational Medicine, 53, 469–475.
Lee, K., Vaillant, G., Torrey, W., et al (1995) A 50-year prospective study of the psychological sequelae of World War II combat. American Journal of Psychiatry, 152, 516–522.
Palmer, I. (2003) The emotion that dare not speak its name? British Army Review, 132, 31–37.
Petrie, K. & Wessely, S. (2002) Modern worries and medicine. BMJ, 324, 690–691.
Petrie, K., Broadbent, E., Kley, N., et al (2005) Worries about modernity predict symptom complaints following environmental spraying. Psychosomatic Medicine, in press.
Roht, L., Vernon, S., Weir, F., et al (1985) Community exposure to hazardous waste disposal sites: assessing reporting bias. American Journal of Epidemiology, 122, 418–433.
Rona, R., Hooper, R., Jones, M., et al (2004) Screening for physical and psychological illness in the British Armed Forces: III. The value of a questionnaire to assist a Medical Officer to decide who needs help. Journal of Medical Screening, 11, 158–163.
Scott, J. (1993) The Politics of Readjustment: Vietnam Veterans Since the War. New York: DeGruyter.
Shephard, B. (2000) A War of Nerves: Soldiers and Psychiatrists 1914–1994. London: Cape.
Shils, E. & Janowitz, M. (1948) Cohesion and disintegration in the Wehrmacht in World War II. Public Opinion Quarterly, 12, 280–315.
Shorter, E. (1992) From Paralysis to Fatigue: A History of Psychosomatic Illness in the Modern Era. New York: Free Press.
Skolbekken, J. (1995) The risk epidemic in medical journals. Social Science and Medicine, 40, 291–305.
Slovic, P. (1999) Trust, emotion, sex, politics, and science: surveying the risk-assessment battlefield. Risk Analysis, 19, 689–702.
Solomon, Z. (1989) A 3-year prospective study of post-traumatic stress disorder in Israeli combat veterans. Journal of Traumatic Stress, 2, 59–73.
Turner, M. A., Kiernan, M. D., McKechanie, A., et al (2005) Acute military psychiatric casualties from the war in Iraq. British Journal of Psychiatry, 186, 476–479.
Unwin, C., Blatchley, N., Coker, W., et al (1999) The health of United Kingdom servicemen who served in the Persian Gulf War. Lancet, 353, 169–178.
Verbrugge, L. (1984) Longer life but worsening health? Trends in health and mortality of middle-aged and older persons. Milbank Memorial Fund Quarterly, 62, 475–519.
Warwick, H. M. & Salkovskis, P. M. (1985) Reassurance. BMJ, 290, 1028.
Wessely, S. (2001) Ten years on, what do we know about the Gulf War syndrome? Clinical Medicine (JRCPL), 1, 28–37.
Wessely, S. & Deahl, M. (2003) Psychological debriefing is a waste of time. British Journal of Psychiatry, 183, 12–14.
Wessely, S. & Jones, E. (2004) Psychiatry and the lessons of Vietnam: what were they and are they still relevant? War and Society, 22, 89–103.
Multiple Claimants v. The Ministry of Defence [2003] EWHC 1134 (QB).