
Chapter 17 - The Pharmaceutical Industry and the Standardisation of Psychiatric Practice

from Part II - The Cogwheels of Change

Published online by Cambridge University Press: 17 June 2021

George Ikkos, Royal National Orthopaedic Hospital
Nick Bouras, King's College London

Summary

As of 1960, British academic psychiatry was ‘social’. Social meant epidemiological rather than committed to the idea that mental illness was social rather than biological in origin. The designation ‘biological psychiatry’ became current in the 1990s. In the 1960s, after an astonishing flood of new drugs, a nascent pharmaceutical industry, previously run by chemists and clinicians, brought in management consultants to ensure the breakthroughs continued, but drug discovery in psychiatry has since dried up. The pharmaceutical industry has colonised medical research, education and clinical practice. Evidence-based medicine (EBM) and clinical guidelines have served to extend rather than contain the influence of the industry. Antidepressants are now the second most commonly prescribed drugs to teenage girls after contraceptives, in the face of thirty randomised controlled trials (RCTs) of antidepressants given to depressed minors – all negative. While they came with drawbacks, through to 1990 the psychotropic drugs introduced from the late 1950s onwards extended the range of clinical capabilities and likely did more good than harm. It is difficult to make the same claims about developments since 1990.

Type: Chapter
Information: Mind, State and Society: Social History of Psychiatry and Mental Health in Britain 1960–2010, pp. 163–170
Publisher: Cambridge University Press
Print publication year: 2021
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC BY-NC-ND 4.0 (https://creativecommons.org/cclicenses/).

Introduction

The 1950s saw the largely serendipitous discovery in clinical settings of a series of psychotropic drugs, produced primarily in the pharmaceutical divisions of European chemical companies. This accompanied the discovery of antibiotics and medicines for other clinical conditions. While there were large chemical companies and a proprietary medicines industry, there was then no pharmaceutical industry as we know it. The nascent companies producing psychotropics, however, were quick to set up international meetings that brought basic scientists and clinical delegates together from all continents.

1960–80

In the wake of the Second World War, German psychiatry and medicine lost ground, opening a door for English to become the lingua franca of the medical world. This and the detour American psychiatry took into psychoanalysis fostered the reputations of British psychiatrists.

As of 1960, British academic psychiatry was ‘social’. Social meant epidemiological rather than committed to the idea that mental illness was social rather than biological in origin. Social psychiatrists began thinking in terms of the operational criteria and other procedures that would enable research on the incidence and prevalence of nervous problems. Among the leading figures were Michael Shepherd, John Wing and others from the Institute of Psychiatry, who worked to establish methods that laid the basis for everything from an international pilot study of schizophrenia on the one hand to studies on the incidence of nervous disorders in primary care, not then part of psychiatry, on the other. These latter studies provided a template for subsequent research that, perhaps more in mental health than in any other branch of medicine, created markets for pharmaceuticals.1

This group of psychiatrists played a central role in incorporating mental disorders into the International Classification of Diseases, which in turn influenced the third edition of the American Diagnostic and Statistical Manual of Mental Disorders (DSM). Published in 1980, this edition, along with controlled trials, laid a basis for an industrialisation of psychiatry.

Randomised controlled trials (RCTs) originated in Britain. The first RCT had been run by Austin Bradford (‘Tony’) Hill on streptomycin in 1948. Michael Shepherd at the Institute of Psychiatry became a coordinator of the Medical Research Council (MRC) clinical trials committee soon after and, working with Hill, fostered the development of RCTs within mental health. Shepherd also ran the first placebo-controlled, parallel-group RCT, comparing reserpine to placebo in anxious depression; it reported in 1955.

RCTs have since invaded medicine. This was not because they are a good way to evaluate a drug but because of the thalidomide crisis. The birth defects thalidomide caused led, in 1962, to a set of amendments to the US Food, Drug and Cosmetic Act that made RCTs the primary method to evaluate treatments. It was not clear then that the focus on a primary end point that RCTs require made them, almost by definition, a poor method to evaluate a treatment. Food and Drug Administration (FDA) regulations made RCTs the standard through which industry made gold, and trials done across medicine ever since have been industry-related.2

Before 1945, there were few academic psychiatry posts outside Germany. After the war, university posts were created in the United States, the UK and elsewhere. Through the 1960s, however, there were comparatively few academic psychiatrists globally, and this scarcity, along with the involvement of British academics in early epidemiological research, the development of protocols for RCTs and their facility in English, gave them a magisterial status at international meetings.

Among the notable figures was Martin Roth, whose concepts of endogenous and neurotic depression were influential. Linford Rees was a prolific clinical trialist. Michael Shepherd brought scepticism to the enthusiasm for new drugs, most clearly demonstrated in high-profile arguments about the role of lithium (see also Chapters 2 and 16). Max Hamilton is perhaps the best-known figure now, by virtue of having his name on the standard scale for assessing the efficacy of antidepressant drugs – the Hamilton Rating Scale for Depression.

Hamilton’s 1974 view about this scale seems prescient:

It may be that we are witnessing a change as revolutionary as was the introduction of standardization and mass production in manufacture. Both have their positive and negative sides.3

Rating scales, along with operational criteria and RCTs, have made for a standardisation of clinical practice and a development of managed services that have latterly left US and UK psychiatry, and health services more generally, increasingly similar – where, in 1960, they could not have been more different.

In the years between 1960 and 1980, clinicians such as George Ashcroft, Alex Coppen, Michael Pare, Donald Eccleston and others played a part in formulating monoamine,4 especially serotonergic, hypotheses for mood disorders and in bringing the notion of a receptor into psychopharmacology, along with researchers in pharmacology such as Merton Sandler, Gerald Curzon, Geoff Watkins, John Hughes and others.5 These hypotheses were discarded by the 1970s, but in the 1990s, in the hands of pharmaceutical marketing departments, they provided a basis for a bio-babble that has profoundly shaped public culture.6

In 1974, the British Association for Psychopharmacology (BAP) formed. The BAP became a forum for lively interdisciplinary exchanges for twenty years, after which, as in other forums, the divide between clinicians and neuroscientists became increasingly hard to bridge.7 The BAP provided a template for the European College of Neuropsychopharmacology; at the same time a European Psychiatric Association was forming, which, along with a World Psychiatric Association, was largely underpinned by industry funding.

In addition to new tricyclic antidepressants and neuroleptics for traditional illnesses, LSD, benzodiazepines and contraceptives were at the centre of vigorous debates in the 1960s, playing a key role in stimulating revolutionary ferment in 1968. Did these drugs enhance or diminish us? In the case of the deinstitutionalisation that followed the introduction of the psychotropic drugs, who was being deinstitutionalised – patients or mental health staff?8

In a symbol of how medicine, and psychiatry in particular, was viewed at this time, protesting students in 1968 occupied and ransacked the Paris office of Jean Delay, who with Pierre Deniker had discovered the antipsychotic effects of chlorpromazine, while students in Tokyo occupied the Department of Psychiatry for ten years in protest against the biological experiments being undertaken there. The biochemical psychopharmacology being undertaken by clinicians like Coppen in England and Van Praag in Holland appeared dehumanising.9

While psychotropic drugs were a focus for concern, it was the prospect of Big Medicine and medical arrogance, rather than Big Pharma, that was alarming at the time. Physicians working in industry, such as Alan Broadhurst at Geigy, did a great deal to ‘market’ depression, along with the tricyclic antidepressants and the Hamilton Rating Scale. This was considered respectable medical education then, rather than the disease mongering it might be seen as now.10 George Beaumont, also at Geigy, took a lead in promoting clomipramine as a treatment for obsessive-compulsive disorder (OCD).11

1980–2000

While anti-psychiatry seemed largely contained in the 1970s, from 1980 mainstream figures in Britain like Peter Tyrer, Malcolm Lader and Heather Ashton raised concerns about the risks of dependence on benzodiazepines that seemed continuous with anti-psychiatric concerns about psychotropic drugs in general.12 It was at this point that the pharmaceutical industry slipped into the line of fire,13 symbolised by a set-piece engagement when Ian Oswald accused Upjohn of fraud in clinical trials of their hypnotic Halcion.14

Benzodiazepine dependence fed into a growing public debate about both mental health and health issues more generally since, from 1980 onwards, more people were encountering psychiatrists and physicians than ever before, as both psychiatry and medicine deinstitutionalised, moving from acute inpatient care into the management of chronic diseases (e.g. hypertension, osteoporosis, type 2 diabetes) in outpatient and primary care.

Prior to 1980, other than for major events such as heart transplants, health rarely featured in the media. In the 1980s, routine stories about benzodiazepine dependence in TV programmes like That's Life marked a change, and health stories now figure in every issue of almost every newspaper and regularly in the headlines of news bulletins.

The media focus then was on breakthroughs and risks. As of 1970, the word ‘risk’ had featured in the headlines and abstracts of medical articles on 200 occasions.15 By 1990, prior to any mention of Risk Societies, the figure had risen to more than 20,000 articles. Within health, a new numbers-based operationalism pitched medicines as a way to manage risks.

The benzodiazepine controversies opened a door for the Royal College of Psychiatrists to promote a Defeat Depression Campaign in 1990, whose message was that, rather than treat the superficial symptoms of anxiety, it would be better to diagnose and treat underlying depressions with non-dependence–inducing antidepressants.

This campaign coincided with the development and launch of the selective serotonin reuptake inhibitors (SSRIs). Eli Lilly and other companies ensured the Defeat Depression message was heard. The campaign helped make Prozac and later SSRIs into blockbuster drugs – drugs earning a billion or more dollars per year – something unheard of before 1990, even though, within three years of its launch, there were more reports to regulators of dependence on another SSRI, paroxetine, than on all benzodiazepines over a twenty-year period.

The triumph of Prozac, with its message of becoming Better than Well in books like Listening to Prozac, accompanied by a race to complete the Human Genome Project, seemed around 1990 to be ushering in a new biomedical era that left historians from Roy Porter to Roger Cooter wondering whether it was any longer possible to write a history of medicine. The term ‘biological psychiatry’ became current at this time.16

The years around 1990 also saw the emergence of evidence-based medicine (EBM). EBM pitched RCTs as offering gold-standard knowledge of what drugs did and argued that this kind of knowledge should replace knowledge born of clinical experience. The supposed validity of the RCT process meant that even trials funded by pharmaceutical companies would offer valid knowledge, although physicians needed to remain alert to tricks companies might get up to on the margins of trials.

As with RCTs, EBM largely began and took shape in Britain, symbolised by the establishment of the Cochrane Collaboration in 1992 (see also Chapter 16). Cochrane's mission was to review trials systematically, weed out duplicate publications and take a critical view of efficacy. Around 1990, the pharmaceutical industry seemed increasingly powerful, leading to the establishment of organisations like No Free Lunch that encouraged physicians to beware of Pharma bearing gifts. For many, Cochrane and EBM, with their focus on scientific procedures rather than morality, seemed the best tools with which to rein in the pharmaceutical industry.

Until 1980, clinical trials had been run in single universities or hospitals by academics who knew their patients. By 1990, they were multicentred and run by clinical research companies that collected the trial data in a central repository to which no academics or physicians had access. The reporting of trial results was contracted out to medical writing agencies, so that the articles were mostly ghostwritten, with academic names chosen for the authorship lines primarily for their value to marketing rather than their knowledge of the issues. British psychiatrists were no longer magisterial figures who could make or break a drug; they had become ciphers in an industrial process, taking second place to Americans, who had now discovered biological psychiatry. The appearances of scientific process remained the same, so few physicians or psychiatrists, and no one outside the profession, had any sense of the changes.

The first medical guidelines appeared in the mid-1980s, aimed at stopping clearly unhelpful practices like the stripping of varicose veins. From the early 1990s, a series of bodies like the BAP began to develop guidelines based on RCTs that made recommendations about what to do rather than what not to do. Industry also began to support guideline development but stopped when companies realised that their control of publications meant they already controlled the guidelines others created.

Industry control of the evidence became almost complete with the establishment by a Labour government of the National Institute for Clinical Excellence (NICE; now the National Institute for Health and Care Excellence) guideline apparatus in 1999. We appeared to have an independent body sifting the evidence, without anyone realising that the evidence being sifted had been mostly ghostwritten and that there was no access to the underlying data.

Events following a 1990 paper in the American Journal of Psychiatry brought home the change. This paper carried accounts of six cases of patients becoming suicidal on Prozac, which offered compelling evidence of causality as traditionally established in medicine.17 Eli Lilly, the makers of Prozac, claimed their trials did not show that Prozac caused suicidality and that, while individual cases might be harrowing, the plural of anecdote was not data.18 Lilly's defence ran in the British Medical Journal (BMJ), whose editor, Richard Smith, was a proponent of EBM. The defence hinged on a meta-analysis, seeming to show industry playing by EBM rules.

Prozac survived. The BMJ missed the fact that the small print of the meta-analysis showed a significant excess of suicidal acts on Prozac compared to placebo. The tie-up between the BMJ and Lilly fuelled support for EBM and transformed medical journals and clinical practice. Up till then, clinicians had received regular drugs bulletins outlining the hazards of treatments, but these were replaced by guidelines that mentioned only benefits. Journals preferentially published RCTs and meta-analyses, which companies paid for, and it became close to impossible to publish case reports or anything on the hazards of treatment.19 Few clinicians noted that the party most consistently exhorting them to practise EBM was the pharmaceutical industry. Industry profits, meanwhile, grew twentyfold in the thirty years from 1980 to 2010. EBM did not rein in industry.

Through to the 1990s, many significant problems with treatment, such as the acute sexual effects of antidepressants or tardive dyskinesia on antipsychotics, were recognised within a year or two of a drug's launch. After 1990, significant treatment hazards such as impulse-control disorders on dopamine agonists, enduring sexual dysfunction following finasteride, isotretinoin and antidepressants, and the mental state changes linked to asthma drugs like montelukast might wait twenty to thirty years – for the expiration of a patent or for company efforts to market new drugs – to come to light.20 If treatment hazards cannot be formally recognised, they are unlikely to be registered in clinical practice. As a result, an increasing part of patients' experience no longer registered on the eyes or ears of clinicians.

2000–2010 and Beyond

In 2002, a Labour government made NICE guidelines central to a new National Health Service (NHS) plan, which aimed to level up health provision, supposedly in accordance with best practice. Guidelines would also enable managers to ensure clinicians delivered services rather than exercised discretion, and would allow nurses and other staff to replace doctors in carrying out defined tasks. Health services began to replace health care, and in the new services the exercise of medical discretion was a problem rather than something to be celebrated.

The transition from care to services became clear in 2004, when NICE began drawing up guidelines for the treatment of childhood depression just as a crisis developed over the efficacy and safety of antidepressants given to children. Investigative journalists, rather than scientists, academics or clinicians, scrutinised what was happening and found that the clinical trial literature was entirely ghostwritten or company-written and that publications claiming treatments were effective and safe were at odds with what the RCT data showed. A Lancet article and accompanying editorial, ‘Depressing Research’, suggested no guidelines should be written unless there was access to the data.21

The crisis was raised in a House of Commons Health Select Committee meeting later that year, but in response both the editor of the Lancet and a founder of the Cochrane Collaboration assured the committee that the ghostwriting of clinical trials and the lack of access to trial data were not a significant problem.22

There was a brief halt in the rising rate of antidepressant prescriptions to children in Britain just after this, but antidepressants are now the second most commonly prescribed drugs to teenage girls after contraceptives, in the face of thirty RCTs of antidepressants given to depressed minors – all negative.23

In 1960, RCTs were expected to temper the enthusiasm for new treatments generated by open studies claiming astonishing benefits. A negative RCT would stop therapeutic bandwagons – as with the demise of the monoamine oxidase inhibitor (MAOI) antidepressants following a negative MRC trial in 1965. Now psychiatry leads the world in having the greatest concentration of negative trials ever done for any indication in any age group, but this has had no effect, other than a paradoxical one, on rates of treatment utilisation.

When concerns first arose around 2004 about the use of antidepressants in children, the problem could be seen as one of a rotten apple in the barrel, something that could be put right by professional and media attention. There was some professional attention at the time, with the then president of the Royal College of Psychiatrists, Mike Shooter, instituting a review of conflict of interest policies. Industry warned the College to back off.24

We now appear to have a rotten barrel, with politicians, health bureaucrats, academics and the media unable to grapple with a problem that extends to both the efficacy and the safety of all drugs across medicine. There are limp discussions about the need to rein in conflicts of interest – transparency – predicated on the idea that we are still dealing with rotten apples. At a time when it would be helpful to have some magisterial clinicians, it is difficult to see any psychiatrist the industry might be worried about.

Almost all industries have an interest in standardising methods and processes. This standardisation and operationalism are at the heart of what is called neoliberalism but have arguably been more apparent in medicine (neo-medicalism) than in any other domain of life since 1980.25 Just as in 1976, according to the then-prevalent dogmas of the Chicago School of Economics, the money supply in Chile became a thermostat dictating how the Chilean economy would operate, so in medicine numbers, such as those for blood pressure, peak flow rates, bone densities, rating scale scores or the five of nine criteria needed to make a diagnosis of depression in DSM, now dictate what happens.

The room for discretion vanished with the development of guidelines, which were embraced by governments of both right and left, in particular the Labour government in the UK, which saw in them a means to level up care. Instead, guidelines provided a vehicle for expanding the role of management in clinical practice, transforming what had been health care into health services and making health part of the wider service sector.

Qualitative assessments of a patient, judicial in nature and best exemplified by the effort to establish whether or not a treatment is causing an adverse effect, were replaced by quantitative processes against which clinical practice would be evaluated.26

In 2016, the pharmaceutical industry declared it was pulling out of mental health because it could make more money elsewhere. There was no apparent fiduciary duty to physicians or patients; companies' primary fiduciary duty, to their shareholders, required a maximising of revenues.

Industry's intention was to turn to anti-inflammatory drugs, among others. This turn did not mean that mental health would be neglected completely but that anti-inflammatory drugs would be developed at high cost and then sold for a variety of indications, including mental health disorders. It is no surprise that in the last decade we have heard a lot more about a possible inflammatory basis to mood disorders – an inflammo-babble. It is unlikely this move will lead to cures for nervous problems since, as Goldman Sachs recently noted, curing patients is not a good business model.27

Conclusion

In the 1960s, after an astonishing flood of new drugs, a nascent pharmaceutical industry, previously run by chemists and clinicians, brought in management consultants to ensure the breakthroughs continued. The consultants installed professional managers and recommended process changes involving the outsourcing first of clinical trials and medical writing, then of drug discovery itself as drug pipelines dried up. Latterly, public relations have been outsourced, so that pharmaceutical industry personnel rarely defend industry in public. Debate about the role of drugs, or about hazards linked to drugs, has been silenced as media organisations adopt policies to avoid False Balance – a strategy introduced by industry think tanks and the mirror image of Doubt Is Our Product. If drugs have been approved by regulators and endorsed in guidelines, the reasoning runs, dissenting viewpoints should not be aired, in order to avoid alarming the public.

The standardisation of processes extended to clinical services in the 1990s and to professional bodies like the Royal College of Psychiatrists from around 2010. An installation of managers is one of the headline features of these changes, but these are not managers in the sense of people who manage conflict or who are entrepreneurial. They are rather bureaucrats ticking operational boxes. This is bad for drug discovery, inimical to health care and may sound the death knell for psychiatry as a profession. Psychiatrists, on current trends, are more likely to end up as middle-grade managers, ensuring that nurses and others meeting with patients adhere to guidelines and minimise risks to the organisation, than as clinicians who might exercise discretion or academics who might follow a serendipitous observation.

Key Summary Points

  • As of 1960, British academic psychiatry was ‘social’. Social meant epidemiological rather than committed to the idea that mental illness was social rather than biological in origin. The designation ‘biological psychiatry’ became current in the 1990s.

  • In the 1960s, after an astonishing flood of new drugs, a nascent pharmaceutical industry, previously run by chemists and clinicians, brought in management consultants to ensure the breakthroughs continued, but drug discovery in psychiatry has since dried up.

  • The pharmaceutical industry has colonised medical research, education and clinical practice. EBM and clinical guidelines have served to extend rather than contain the influence of the industry.

  • Antidepressants are now the second most commonly prescribed drugs to teenage girls after contraceptives, in the face of thirty RCTs of antidepressants given to depressed minors – all negative.

  • While they came with drawbacks, through to 1990 the psychotropic drugs introduced from the late 1950s onwards extended the range of clinical capabilities and likely did more good than harm. It is difficult to make the same claims about developments since 1990.
