
Learning from critical incidents

Published online by Cambridge University Press:  02 January 2018


Abstract

Critical incident reviews are an integral part of modern psychiatric practice. They are central to the clinical governance agenda in the UK, yet there is widespread debate about their usefulness. There is a lack of systematic research into their impact on clinical outcomes, with most authors commenting on their form, their political implications, and whether they should exist at all. This article explores the historical basis of incident investigation, outlines an ‘ideal’ method of review and discusses the concepts of the learning organisation and root cause analysis. Further discussion focuses on what the objectives of critical incident review might be and whether organisations as a whole can learn from them.

Copyright © The Royal College of Psychiatrists 2008

Critical incidents in mental health generate many emotive issues. Although the morbidity and mortality associated with mental illness are well recognised, there is often a sense of failure associated with an outcome such as suicide or homicide. The National Confidential Inquiry into Suicide and Homicide by People with Mental Illness (Appleby et al, 2006) found that in over a quarter of suicides reported in the UK, the individuals had been involved with mental health services in the year prior to their death. At final contact, the risk of suicide had been deemed to be low or absent for 86% of these patients. Szmukler (2000) asks of homicide inquiries: ‘What sense do they make?’ He questions the logic behind them and raises the issue of the financial cost, the harm to staff, the distorting influence of the media and the reinforcing of stigma.

The aims of this article

It would be too simplistic in this article merely to try to answer Szmukler's question; rather, my aim is to raise awareness of the issues. As I will explain, by focusing on investigating critical incidents purely to improve clinical outcomes, we may miss many opportunities. The evidence base provides no clear instructions, but perhaps we can learn from other organisations and find a way forward.

What is clear is that there will always be a need for public accountability. The frequent public inquiries into perceived ‘failures’ in care have not satisfied anyone, and have arguably generated more debate over their form than their content. In a similar vein to the revalidation issue, the psychiatric profession has the choice of generating its own solutions or having solutions imposed on it. Given the many factors involved in adverse events, definite solutions may not exist. Rather, the principle of learning through mistakes requires serious and systematic study, as this article will demonstrate. The following quotations perhaps best encompass the boundaries of the discussion and debate:

Risk management should be recognised within an organisation as an integral part of good management practice and should become part of the organisation's culture. It should be integrated into its philosophy, practices and business plans rather than be viewed or practiced as a separate programme. When this is achieved, risk management becomes the business of everyone in the organisation [Department of Health, Social Services and Public Safety, 2003: p. 1].

Serious Incident Inquiries are unhelpful as they all reach similar conclusions, add nothing to our current knowledge and do more harm than good in terms of adverse publicity for mental health services [Salter, 2003].

The history

Significant government papers and inquiries in the UK from the 1960s to the early 1990s have been described as critical of mental health practice, perhaps focusing unfairly on perceived failures of community care. The major lesson that emerged was that better communication and coordinated care were needed between disciplines, and this led to the care programme approach (CPA). However, the effectiveness of the CPA was limited because some services placed all of their patients on it and others placed very few. As a result, there was little evidence that clinical outcomes were substantially improving or that previous ‘failures’ would not be repeated. There was perhaps more debate over the style of reviews and inquiries than over what could be learned.

Publications

The National Confidential Inquiry (key publications: Safer Services, Appleby et al, 1999, and Avoidable Deaths, Appleby et al, 2006) continues to do much to improve our understanding of who among people with mental disorder takes their own life or commits homicide, as well as examining when and where. However, the Inquiry's finding that 86% of these people were deemed to be at low or absent risk at final contact raises the question of whether critical incident review is futile.

Over the past decade multiple publications have emerged on the subject. To Err is Human: Building a Safer Health System (Kohn et al, 1999), published in the USA, introduced a wide audience in healthcare to the concepts of systems error, changing safety culture, and learning theory. The Department of Health's papers An Organisation with a Memory (2000), Building a Safer NHS for Patients (2001a) and Doing Less Harm (2001b) continued the same themes in the UK. More recent papers from the National Patient Safety Agency (NPSA) in England and Wales, and National Health Service Quality Improvement Scotland (NHSQIS), maintain the focus on a system-centred approach to error investigation, which is underpinned by an evolving organisational culture.

The key drivers for developing critical incident review can be seen in the phases of its history: the perceived failure of community care and widespread criticism of mental health services; the repeated public inquiries; the call for a more ‘sensible’ approach; and the search for a ‘scientific’ approach through root cause analysis and centralised think tanks (NPSA, NHSQIS). Such organisations focus much of their attention on tackling what they perceive to be the ‘blame culture’ inherent in review systems. Swinson et al (2007) look forward to future work of the National Confidential Inquiry into Suicide and Homicide and reaffirm its aim ‘to improve services, not to blame them’. ‘Patient safety’ has now been adopted as the agreed terminology to encompass critical incident review.

All of the above papers have drawn considerably from learning theory, specifically from organisational learning and the concept of the learning organisation. Safe Today – Safer Tomorrow (National Health Service Quality Improvement Scotland, 2006) is intended to take the patient safety agenda forward. The report has ten recommendations, of which two key ones are to change the culture of healthcare professionals concerning the reporting of incidents, including near-misses and closing the feedback loop, and ‘to encourage systematic local action and ownership of incident reporting’ (p. 35). The document also focuses considerable attention on mapping out a preferred system for incident reporting.

An ‘ideal’ method

Arguably the starting point in developing a process of incident investigation that can be subjected to scientific study lies in the adopted methodology. Box 1 outlines key elements of an ‘ideal’ method of critical incident review.

Box 1 Key elements of an ‘ideal’ method of critical incident review

  1. A clear process of reporting through senior management

  2. Clear, unambiguous guidance on what merits full incident review

  3. A standardised process of reviewing and reporting (e.g. root cause analysis)

  4. A clear statement of the ‘philosophy’ of review (separating learning from establishing blame, identifying failure and deciding disciplinary action)

  5. Quality assurance of reviews/reports

  6. Comparison with national data (National Confidential Inquiry)

  7. Incorporation of lessons back into the organisation as a whole

  8. Transparency and user/carer involvement

Note in particular the final element: transparency and user/carer involvement. A frequent issue highlighted during critical incident review is anonymity, and the question of who has access to review information. A possible barrier to ‘honest’ reporting is fear of the consequences, from disciplinary action to litigation. This issue has generated specific legislation in other high-income nations (Patient Safety and Quality Improvement Act 2005, USA), and has prompted much debate and further exploration beyond the scope of this article. However, the key to success is achieving a balance between learning as much as possible from the process, and establishing and maintaining transparency and public trust in critical incident review.

This ‘ideal’ method is fine in theory, but can it be achieved in practice, bearing in mind the competing demands on clinical time and the expected dividends? Over the past 4 years the Patient Safety Group in Glasgow has assembled a package of documentation on critical incident review. It provides comprehensive guidance for managers dealing with incident reporting, the methodology of reviews, the composition of review teams, and the involvement of relatives. Box 2 outlines the contents of the package, which is available from the author on request.

Box 2 Guidance on critical incident review reporting

The package developed by the NHS Greater Glasgow and Clyde Mental Health Partnership Patient Safety Group includes the following.

  1. Critical incident management matrix: describes the various tasks that have to be carried out to manage a critical incident and identifies who is responsible for each

  2. Critical incident review process time line: gives review teams a feel for where in the review process they should be during the 3-month time frame for carrying out the review

  3. Information leaflet: designed to give information to relatives of patients who have been involved in a critical clinical incident

  4. Guidance for dealing with requests for critical incident review reports: clarifies the actions that should be taken when a request is received for a copy of a critical clinical incident review report

  5. Root cause analysis training: information on the courses available and contact details for training providers

  6. NHS Greater Glasgow & Clyde Management of Significant Clinical Incidents Policy, December 2006 (currently under review)

  7. Guidance for critical incident review report writing: how to set out the report

  8. Critical Clinical Incident Review Report template: the required format for reports

  9. Critical Clinical Incident Briefing Note: a form to alert senior clinicians and managers that a serious incident has occurred

  10. NHSQIS Core Risk Assessment Matrix: the national risk assessment matrix developed by NHS Quality Improvement Scotland (currently under review)

Is there any evidence that adopting this type of approach can help healthcare professionals? A significant proportion of critical incident reports in Glasgow 5 years ago were unstructured narratives written solely by senior clinicians. Currently, the Patient Safety Group evaluates all of the incident reports that it receives to ensure that correct procedure has been followed and that the conclusions and recommendations generated are backed up by the body of the report. If not, the review team receives further guidance. This is all in addition to established local processes in which the relevant clinical director and senior manager are responsible for ascertaining the need for a review, selecting the review team, ensuring the review is completed within the recommended time (3 months) and acting on the findings of the review. Local management teams are also responsible for training staff in critical incident review.

Over time, the Patient Safety Group has used this process to identify recurring ‘themes’ in incidents. Not surprisingly, they include communication (8%), risk management (11%), training (5.5%), practice (12%), environment (6.5%), administration (4.5%) and medication (4.5%); the most common theme is records (48%). These figures are based on critical incidents reported in Glasgow for the year 2006–7 (the total number of incidents was 99). Critical incident reviews highlight issues that may not have directly led to the outcome, but which can lead to learning, as is commonly the case with records. Often, the reviews generate recommendations that may not address a ‘cause’ of the incident. Reviews can also uncover latent errors and possible near-misses. Helmreich (2000) describes latent errors as ‘existing conditions that may interact with ongoing activities to precipitate error’ (p. 783). It could be argued that similar lessons could be learned by subjecting a random set of case notes to review.

To date, the above work has been done manually (although at the time of writing a new computer-based system is being developed). Furthermore, the above themes are arbitrarily named and follow no recognised guidance. Generating standardised terms may provide evidence over time of whether learning has occurred. Clearly, if the same themes recur, then learning has not taken place.
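To make the idea concrete, the sketch below (mine, not part of the Glasgow system; the theme codes and figures are invented) shows how agreed theme codes might be tallied per review period and compared year on year. A theme that continues to dominate the tally suggests that learning has not yet been embedded.

```python
# Illustrative sketch only: tallying standardised theme codes across incident
# reviews. The codes and figures below are invented for the example.
from collections import Counter

def theme_frequencies(reviews: list[list[str]]) -> Counter:
    """Count how often each standardised theme code appears across a set of reviews."""
    return Counter(theme for review in reviews for theme in review)

# One review = the list of theme codes identified in its report.
year_1 = [["records", "communication"], ["records", "medication"], ["records"]]
year_2 = [["records", "training"], ["records"], ["records", "risk management"]]

print(theme_frequencies(year_1).most_common())  # 'records' dominates in year 1...
print(theme_frequencies(year_2).most_common())  # ...and again in year 2: on this
                                                # measure, learning has not occurred
```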

Incorporating lessons back into the organisation as a whole seems to be one of the most difficult objectives to achieve. In addressing this area, most of the recent policy documents have drawn on the concept of the learning organisation.

The learning organisation

The notion that a collection of individuals and resources working to a common set of objectives can act as a single entity may seem alien to a predominantly scientific profession. The concept grew out of the study of learning, frequently quoting such ‘gurus’ as Chris Argyris and Donald Schon (Argyris & Schon, 1978) and Peter Senge (1990). Handy (1990: p. 199) defines learning organisations as: ‘[o]rganisations which encourage the wheel of learning, which relish curiosity, questions and ideas, which allow space for experiment and for reflection, which forgive mistakes and promote self-confidence’.

Although the original concept of the learning organisation was described in the 1970s, it was not embraced by the business world until the 1990s. This was fuelled by a consequence of the ‘de-layering’ that many companies implemented to improve efficiency: the companies lost staff and, with them, their knowledge and expertise. Another significant driver was the increasing rapidity of change in the business environment, mainly because of technological advances. The theory was that a company or organisation that could continually adapt and innovate would be more successful. Furthermore, if the organisation could capture the knowledge of individuals, that knowledge would remain when the individuals left.

The learning organisation explained

Using the simple example of a domestic heating thermostat, Huczynski & Buchanan (2001) explain Argyris & Schon's (1978) concept of single-loop and double-loop learning. In single-loop learning the thermostat is simply given a predetermined setting at which to respond to changes in temperature. It accepts the setting without question. In double-loop learning, the thermostat functions in the same way but challenges and perhaps refines the setting according to changes in its environment. In essence, the thermostat develops a form of ‘intelligence’. Similarly, the learning organisation as a whole, as opposed to individuals within it, becomes ‘aware’ and can adapt rapidly to changing circumstances. It can even anticipate change.
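The distinction can be made concrete in a few lines of code. The sketch below is purely illustrative (the class and method names are invented, not drawn from the literature): the single-loop device merely corrects deviations from a fixed set point, whereas the double-loop device also questions and revises the set point itself.

```python
# Illustrative sketch of single- vs double-loop learning using the thermostat
# analogy. All names are invented for the example.

class SingleLoopThermostat:
    """Corrects deviations from a fixed set point; never questions the set point."""
    def __init__(self, set_point: float):
        self.set_point = set_point

    def act(self, room_temp: float) -> str:
        return "heat on" if room_temp < self.set_point else "heat off"


class DoubleLoopThermostat(SingleLoopThermostat):
    """Also challenges the set point when feedback suggests it is wrong."""
    def act(self, room_temp: float, occupants_cold: bool = False) -> str:
        # Second loop: question the governing value, not just the behaviour.
        if occupants_cold and room_temp >= self.set_point:
            self.set_point += 1.0  # the 'correct' setting is itself revised
        return super().act(room_temp)
```

The organisational analogy: single-loop review corrects the immediate deviation; double-loop review also asks whether the underlying policies, assumptions and targets are themselves right.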

A responsive service

Rose & Lawton (1999) discuss the concept of the learning organisation in the context of ‘responsive’ public services. They highlight the need for public services to move away from bureaucratic cultures to more creative, adaptive and continuously changing cultures. In 1970, Alvin Toffler used the term ‘adhocracy’ to describe this new kind of organisation. The learning organisation does not simply react continually to problems and external pressures. Rather, it sets up multiple feedback loops to facilitate continuous learning. In turn, this learning enables the organisation to become more proactive in response to the changing environment.

The generally agreed characteristics of a learning organisation, based mainly on Senge's (1990) work, are described in Box 3.

Box 3 Characteristics of a learning organisation

  1. The sharing of information within the organisation by creating a ‘boundary-less’ environment

  2. A strong sense of teamwork to facilitate collaboration

  3. ‘Empowered’ teams and employees, with little need for direction or control by managers

  4. Managers functioning as facilitators, supporters and advocates of working teams

  5. Leaders facilitating the formation of a clear vision for the organisation's future

  6. An open culture of trust, with employees able to communicate, experiment and learn, without fear of criticism or punishment

  7. A strong sense of ‘community’ and caring within the organisation

  8. An organisational structure that helps, not hinders, employees in carrying out the organisation's business.

Management science

At this point it would be useful to reflect briefly on the development of management ‘science’, as this appears to underpin much of the policy generated by critical incident review. Management as a science has grown rapidly over the past 100 years, with theory developing from practice in a similar fashion to medicine. Frederick Taylor introduced the theory of scientific management in his pursuit of efficiency and the ‘one best way’. However, it gradually became apparent that this could not be realised in all cases. Interest turned to people and organisational behaviour, and management science came to be dominated by psychology and sociology rather than economics. In the 1950s, a group of American experts described the concept of total quality management (TQM), which combined economic efficiency with empowered employees and customer focus. The approach was largely ignored in the USA but was adopted wholeheartedly in Japan, leading to a period of superiority in Japanese manufacturing.

The public sector increasingly drew from this ‘management evidence base’. The need to manage the rapidly expanding public sector resulted in widespread adoption of private-sector management practices. This was arguably driven in part by politicians needing to reduce costs without appearing to reduce public services. However, Peters (cited in Rose & Lawton, 1999: p. 348) eloquently states:

All these paeans of praise were being raised to the private sector despite evidence that the private sector was not performing particularly well in many of the industrialised countries. The same governments that were telling their own employees to emulate the private sector were bailing out banks, auto manufacturers, steel makers and a host of other financially failing enterprises.

Root cause analysis

Root cause analysis (commonly abbreviated as RCA) has been introduced to critical incident review to try to facilitate no-blame learning and provide consistent methodology in investigations. Neal et al (2004) state that ‘root cause analysis is a component of the broader field of total quality management, which has arisen from the world of business management’. In simple terms, root cause analysis is a tool that might be useful in reviewing critical incidents. It is not a cure-all. Neither is it a tool to establish the right answer, nor an alternative to good judgement, clear thinking and sound clinical knowledge applied to incident investigation.

The aim of root cause analysis in medical practice was to steer critical incident review away from establishing blame. The key to this is seeing care delivery as taking place in systems. Bentley (2001) discusses two national inquiries in the UK: the Bristol Royal Infirmary inquiry into abnormally high death rates during children's heart surgery at the hospital, and the case of Rodney Ledward, a gynaecologist struck off the Medical Register for serious failures in clinical practice. He argues that in Bristol the focus was on individual surgeons, and not on the political and managerial aspects that allowed situations to develop and continue. Similarly, had systematic audit and robust clinical governance been in place in the hospital where Ledward practised, the problem may have arisen but might not have continued.

The tools of analysis

Root cause analysis uses various tools to enable a systematic examination of what is happening. It also draws on the theory of human error, on which James Reason (1990, 2000) is the authority. Box 4 gives a brief description of the process (further information can be found on the NPSA website at www.npsa.nhs.uk); a toy illustration of one of its tools follows the box.

Box 4 The process of root cause analysis

  1. Define the terms and boundaries of the investigation

  2. Obtain all available documents relating to the period of the critical event

  3. Collect and tabulate data as a chronology of events

  4. Conduct interviews and site visits

  5. Refine the chronology

  6. Use tools (e.g. the ‘five whys’, fishbone) to aid analysis

  7. Formulate findings and make suggestions for review

(After Neal et al, 2004)
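As a deliberately toy illustration of step 6, the ‘five whys’ can be thought of as walking a recorded chain of cause-and-effect statements until it runs out or a depth limit is reached. Everything below is hypothetical (the incident and the chain of answers are invented); in practice this is a facilitated group exercise, not a script.

```python
# Toy illustration of the 'five whys' (Box 4, step 6). The incident and the
# chain of answers are hypothetical; real use is a facilitated group exercise.

def five_whys(problem: str, causes: dict[str, str], max_depth: int = 5) -> list[str]:
    """Follow a recorded chain of 'why?' answers, up to max_depth levels deep."""
    chain: list[str] = []
    current = problem
    for _ in range(max_depth):
        answer = causes.get(current)
        if answer is None:
            break
        chain.append(answer)
        current = answer  # each answer becomes the subject of the next 'why?'
    return chain

causes = {
    "patient absconded from the ward": "the exit door alarm was disabled",
    "the exit door alarm was disabled": "the alarm had been triggering falsely",
    "the alarm had been triggering falsely": "no maintenance contract was in place",
}
print(five_whys("patient absconded from the ward", causes))
# ['the exit door alarm was disabled', 'the alarm had been triggering falsely',
#  'no maintenance contract was in place']
```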

However, as with the CPA and, perhaps, risk assessment tools, it is not so much the tools themselves that are crucial but the context in which they are used and how they are applied.

Root cause analysis arrived in the wake of the much-maligned public inquiries and was seen as the key to no-blame investigations, perhaps placing the tool under too much pressure to deliver. It also necessitates training (in Glasgow it is recommended that all lead investigators have the full 3-day training). Managers and busy clinicians from all disciplines rightly question whether this training is a priority, especially in the light of such a small evidence base in the literature pertaining to medical practice.

Root cause analysis has its critics. For example, Dekker (2002) argues that it can be biased by subjectivity and hindsight. He further contends that it is an oversimplification to believe that there is always a single cause for an error, stating that ‘multiple factors – each necessary and only jointly sufficient – are needed to push a complex system over the edge of breakdown’ (p. 34).

In Safe Today – Safer Tomorrow (National Health Service Quality Improvement Scotland, 2006) the NHSQIS points out that causal analysis can be based on the opinions of investigators, and that it generates predominantly qualitative data that are difficult to aggregate and turn into meaningful learning.

In general, experience of incorporating tools into clinical practice suggests that an ideal tool without critics is as elusive as the Holy Grail. Again, it must be remembered that how the tool is applied is of key importance.

The aim of critical incident review

The primary objective of critical incident review is to improve patient safety, although a well-designed system can serve several other objectives (Box 5).

Box 5 Objectives of critical incident review

  1. Improved patient safety

  2. Continuous learning

  3. Improved practice (for clinical staff)

  4. Improved outcomes (for patients)

  5. No-blame culture

  6. Minimal cost (diverting negligence costs to clinical care)

  7. Demonstrable clinical governance

  8. Achievement of central/government targets

It could be argued, given the plethora of reports from the government on this topic, that the process described so far is wholly centrally driven. Very little of the documentation seems to have stemmed from ‘real-world’ clinical practice. This raises the question of whether critical incident review is designed to demonstrate ‘governance’, public accountability and, perhaps, the pursuit of central targets, or to genuinely improve clinical care. Can the process do both, or are these tasks mutually incompatible?

I believe that a major challenge in adopting a standardised and comprehensive process is to encourage key individuals, particularly medical staff and senior managers, to invest time, effort and money in a process that struggles to demonstrate tangible results. For any initiative for change to be successful, all stakeholders must believe that change is required and that it will generate outcomes that are meaningful to them. In reflecting on the suggested objectives of critical incident review, individual professional groups may focus on only one. For example, management may see the process purely as a method of demonstrating governance, and clinicians as a method of reducing suicides. This narrow focus among professional groups would not lead to successful collaboration and subsequent change.

The inadequacy of clinical evidence to support the immediate value of critical incident review has meant that evidence, rightly or wrongly, has been drawn from other organisations, in particular the aviation industry.

Lessons from elsewhere

Let me pose an interesting question. Would you feel safer checking in for an international flight or for elective surgery?

In the 1980s, research into the aviation industry concluded that human factors played a major part in 73% of accidents. A tragic illustration is the Tenerife disaster of 1977, in which 583 people died (www.panamair.org/accidents/victor.htm). The KLM captain took off when he knew the runway might not be clear. His apparently inexplicable action resulted from a number of factors, including his flight engineer's unwillingness to challenge him, the use of ambiguous and non-standardised language, and a set of circumstances driven by a bomb threat that led to the diversion of the two aircraft involved to an unfamiliar airport.

The learning that followed included adopting standardised language and challenging existing team behaviours. Regarding the latter, the existing culture of pilots was very similar to that within medical teams: the captain was at the top of a hierarchical system, communication was top-down, and the questioning of systems and practice was uncommon. The learning led to the creation of crew resource management (Box 6), which focuses on team coordination, communication, leadership and behavioural awareness. Crew resource management was widely adopted by the industry and has become integral to pilot culture. The Civil Aviation Authority has made it a mandatory requirement of pilot training.

Box 6 The main components of crew resource management

  1. Situation awareness

  2. Risk management

  3. Communication

  4. Choosing behaviour (self-awareness)

  5. Feedback

  6. Leadership and motivation

The aviation industry claims that this has greatly improved safety. However, Helmreich et al (1999: p. 23) discuss the validation of crew resource management and highlight difficulties in relation to proving better outcomes: ‘because the overall accident rate is so low (one death in eight million passenger flights in 2001) and training programmes so variable, it will never be possible to draw strong conclusions about the impact of training (crew resource management) during a finite period of time’.

Helmreich et al conclude that crew resource management is not a mechanism to eliminate error but is one of a number of tools that organisations can use to manage error. In a similar vein, critical incident review in itself probably cannot demonstrate clear outcomes. However, perhaps the process as a whole can be used to manage error.

How much do better systems improve clinical outcomes?

Perhaps the most compelling argument, backed up by the National Confidential Inquiry statistics (Appleby et al, 2006), concerns how much effort is required to reduce, for example, local suicide rates. What other work is left undone while the clinician is attempting to prevent perhaps one or two suicides a year?

As an illustration of the pitfalls of building a system on local statistics, Table 1 shows the figures for all incidents and suicides that went through critical incident review in Glasgow in 2004 and 2005. What can we conclude by analysing these figures? Overall, the number of suicides reviewed in Glasgow increased by 57% from 2004 to 2005 (a breakdown of the figures is available from the author on request). Coincidentally, the work of the Critical Incident Review Group had highlighted the importance of central reporting, so that more incidents went through the system.

Table 1 Incidents reported in Glasgow in 2004 and 2005

Year | All incidents, n | Suicides, n
2004 | 67               | 21
2005 | 81               | 33
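As a simple check on the arithmetic, and on how volatile year-on-year percentages become at counts this small, consider the minimal sketch below (the figures are those of Table 1; the helper function is illustrative only):

```python
# Year-on-year percentage change on small counts. Figures are from Table 1;
# the helper function is illustrative only.

def pct_change(before: int, after: int) -> float:
    """Change between two counts, as a percentage of the earlier count."""
    return 100.0 * (after - before) / before

print(round(pct_change(21, 33), 1))  # 57.1 -> the reported 57% rise in suicides
print(round(pct_change(21, 26), 1))  # 23.8 -> five fewer cases more than halves
                                     #         the apparent 'rise'
```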

The figures provide only a snapshot of what is going on, and so must be viewed with caution. If the data cannot be safely interpreted, can we draw any conclusions? Should critical incident review continue? Do we accept that the patient safety agenda may be purely an exercise in public relations?

If the process is discarded, clinicians risk both the loss of public trust and having an unwelcome and unhelpful alternative imposed on them; the lessons learned from the public inquiry process will have been lost. Furthermore, as stated previously, if the process is seen as an exercise in investigating purely to reduce the number of incidents, then potential benefits from the process itself may be lost.

Can organisations learn from critical incidents?

From a purely pragmatic point of view it makes sense to reflect on significant incidents, particularly when the outcome is suicide. It is good practice for the treatment team to review all suicides. At the local level, all of those involved need support and understanding to continue their clinical practice in the wake of tragic outcomes.

As chair of Glasgow's Critical Incident Review Group, I have read all of the reports of the past 4 years and feel that the reading has been invaluable, as has been my role in the process. Am I, as a result, a ‘safer’ doctor? Again, the raw figures would probably be inconclusive and open to interpretation. However, on the basis of our learning so far, we have begun to stage regular learning events. These have multiple objectives:

  1. to provide feedback to participants on themes generated by incident reviews

  2. to generate discussion of the processes adopted

  3. to facilitate learning through interactive workshops

  4. to allow participants to focus and reflect on incident review

  5. to demonstrate the commitment of senior management to learning from incidents, and crucially the commitment of those with responsibility for deploying resources and for performance management

  6. to encourage participants' involvement in planning future learning events and the further development of policies.

This approach is not novel (Rose, 2000). What is different is the desire to be as comprehensive and inclusive as possible. It has the backing of senior management, yet the process is driven on a clinical basis by clinicians. The aim is to begin to close the loop between investigations, the feedback of meaningful information, and resultant clinical improvements.

One further issue that this process needs to address is that of evaluation. The learning events take place on a regular basis. Each participant is asked to complete an evaluation and this is used, in turn, to inform future learning events. The key to evaluation is to facilitate continuous learning and for participants, rather than a predetermined agenda, to drive the evolution of the learning events. After each event, learning is fed back through established clinical governance structures. The Patient Safety Group is responsible for collating information from learning events and reporting this information to the senior management team, which in turn expects evidence that learning is being translated into actions.

The Department of Health (1998) defines clinical governance as ‘a system through which NHS organisations are accountable for continuously improving the quality of their services and safeguarding high standards of care, by creating an environment in which clinical excellence will flourish’.

The desired end-state is a strong focus on patient safety and an embedded culture of learning. This is the vision for the organisation's future. It is hoped that some of the characteristics of a learning organisation will emerge in time. The process will need to generate some indication of benefit or it will cease. However, key to this is the question of how success is measured. As demonstrated above, suicide rates themselves are unhelpful. Perhaps the best route forward is to bear in mind the multiple possible objectives of critical incident review, and to keep an open mind regarding what should be used as indicators of success.

In focusing solely on outcomes (i.e. a reduction in adverse incidents), learning from the process itself is missed. In Glasgow to date, our adopted methodology has:

  1. generated themes that could be used to track evidence of learning

  2. standardised the process of incident review

  3. exposed staff to learning about the method of incident analysis

  4. encouraged an inclusive approach

  5. given guidance on involving relatives.

It is too soon to demonstrate improved patient outcomes. However, if the components of an ideal incident review system are successfully incorporated into an organisation, coupled with the principles of a true learning organisation, then improved clinical care should result.

Discussion

As shown above, much has been done in relation to NHS policy. However, there is much debate over whether, in general, health policy is backed up by a clinical evidence base or is driven by vested interests and politics. Many recommendations exist in relation to the management of critical incidents. Mandatory inquiries were unpopular. The introduction of root cause analysis as an alternative may have put unfair pressure on the tool: the belief was that it could somehow magic away the emotive, or ‘blame’, element of incident investigation. As described, raw figures at a local level will do little to support the existence of rigorous critical incident evaluation. Any discrepancy between local and national figures (given the small numbers involved) must also be viewed with extreme caution, especially if major changes in clinical practice are suggested on the basis of these figures.

It makes sense to investigate serious critical incidents and pay heed to the findings. In doing this in a rigorous and systematic manner, what are the opportunity costs and what is the measurable improvement generated in clinical care? The apparent risk is that the process remains an exercise in public relations and is never embraced by clinicians and managers as a means of improving clinical practice. Furthermore, if the process of critical incident investigation is solely about identifying when things go wrong and fixing them to prevent recurrence, then other opportunities are missed.

Answering the question

Finally, returning to Szmukler's question about critical incident reviews: ‘What sense do they make?’ The answer is unknown. It is unlikely that critical incident review will ever unambiguously demonstrate measurable clinical improvements. Despite this, critical incident review is high on the clinical governance agenda within the NHS and is likely to remain there. A return to mandatory inquiries is undesirable, as is allowing the process to be used solely to generate targets and performance figures. The process will become a clinical priority only if it generates meaningful clinical benefit.

To put it more simply, can healthcare organisations learn from their mistakes? Surely the answer must be yes. However, the ability to learn from mistakes is linked with maturity. Mature individuals are able to accept their failings, recognise the desirability of continuous learning and are openly receptive to criticism. These are the key qualities embedded in crew resource management in the aviation industry. They are also key qualities of a true learning organisation. Advances in clinical practice do not always arise from clear and unambiguous evidence – often the clinician starts only with an intuition. There is no overwhelming evidence base in support of critical incident review, but equally, all clinicians would surely agree on the benefit of reflecting on when things go wrong.

Declaration of interest

None.

MCQs

  1. Characteristics of a learning organisation:

    a sharing information within the organisation by creating a ‘boundary-less’ environment

    b a strong sense of individual working

    c teams and employees working within rigid managerial control

    d multiple objectives with no one single vision

    e a strict hierarchical organisational structure.

  2. Root cause analysis:

    a is a method of establishing blame for an incident

    b is best carried out by one person

    c requires no training

    d originated in the field of total quality management

    e focuses on individuals, not systems.

  3. In the National Confidential Inquiry's Avoidable Deaths, the percentage of patients reported to be at no or low suicide risk at final contact with services was:

    a 5

    b 25

    c 86

    d 50

    e 96.

  4. Key elements of an ideal method of critical incident review are:

    a a clear process of reporting through senior management

    b individuals choosing what merits full incident review

    c a flexible process of review/report

    d a process linked closely with disciplinary procedures

    e the lessons learned are fed back solely to senior management.

  5. Clinical governance is:

    a a system for continuously improving quality of services

    b a process to identify poorly performing doctors

    c the domain of senior management only

    d an audit tool

    e concerned with financial matters only.

MCQ answers

1  a T  b F  c F  d F  e F
2  a F  b F  c F  d T  e F
3  a F  b F  c T  d F  e F
4  a T  b F  c F  d F  e F
5  a T  b F  c F  d F  e F

References

Appleby, L., Shaw, J., Amos, T. et al (1999) Safer Services: Report of the National Confidential Inquiry into Suicide and Homicide by People with Mental Illness. TSO (The Stationery Office).
Appleby, L., Shaw, J., Kapur, N. et al (2006) Avoidable Deaths: Five Year Report by the National Confidential Inquiry into Suicide and Homicide by People with Mental Illness. University of Manchester.
Argyris, C. & Schon, D. (eds) (1978) Organisational Learning. Addison-Wesley.
Bentley, D. (2001) Clinical or crew resource management? Medico-Legal Journal, 68(4), 113–114.
Dekker, S. (2002) The Field Guide to Human Error Investigations. Ashgate Publishing.
Department of Health (1998) A First Class Service: Quality in the New NHS. TSO (The Stationery Office).
Department of Health (2000) An Organisation with a Memory (Report of an Expert Group on Learning from Adverse Events in the NHS). TSO (The Stationery Office).
Department of Health (2001a) Building a Safer NHS for Patients: Implementing an Organisation with a Memory. Department of Health.
Department of Health (2001b) Doing Less Harm. Department of Health.
Department of Health, Social Services and Public Safety (2003) Risk Management. DHSSPS (http://www.dhsspsni.gov.uk/risk_03.doc).
Handy, C. (1990) Inside Organisations. BBC Books.
Helmreich, R. L. (2000) On error management: lessons from aviation. BMJ, 320, 781–785.
Helmreich, R. L., Merritt, A. C. & Wilhelm, J. A. (1999) The evolution of crew resource management training in commercial aviation. International Journal of Aviation Psychology, 9(1), 19–32.
Huczynski, A. & Buchanan, D. (2001) Organizational Behaviour: An Introductory Text. Financial Times/Prentice Hall.
Kohn, L. T., Corrigan, J. M. & Donaldson, M. S. (eds) (1999) To Err Is Human: Building a Safer Health System. Committee on Quality of Health Care in America, Institute of Medicine, National Academy Press.
National Health Service Quality Improvement Scotland (2006) Safe Today – Safer Tomorrow: Patient Safety – Review of Incident and Near-Miss Reporting. Scottish Executive.
Neal, L. A., Watson, D., Hicks, T. et al (2004) Root cause analysis applied to the investigation of serious untoward incidents in mental health services. Psychiatric Bulletin, 28, 75–77.
Reason, J. (1990) Human Error. Cambridge University Press.
Reason, J. (2000) Human error: models and management. BMJ, 320, 768–770.
Rose, A. & Lawton, A. (1999) Public Services Management. Financial Times/Prentice Hall.
Rose, N. (2000) Six years' experience in Oxford: review of serious incidents. Psychiatric Bulletin, 24, 243–246.
Salter, M. (2003) Serious incident inquiries: a survival kit for psychiatrists. Psychiatric Bulletin, 27, 245–247.
Senge, P. (1990) The Fifth Discipline: The Art and Practice of the Learning Organisation. Doubleday Currency.
Swinson, N., Ashim, B., Windfuhr, K. et al (2007) National Confidential Inquiry into Suicide and Homicide by People with Mental Illness: new directions. Psychiatric Bulletin, 31, 161–163.
Szmukler, G. (2000) Homicide inquiries. What sense do they make? Psychiatric Bulletin, 24, 6–10.