
Digital literacy in contemporary mental healthcare: electronic patient records, outcome measurements and social media

Published online by Cambridge University Press:  04 November 2022

Derek K. Tracy*
Affiliation:
Medical Director of West London NHS Trust, London, UK and a senior lecturer at King's College London and University College London, UK.
Romayne Gadelrab
Affiliation:
Clinical Research Fellow at King's College London, an honorary consultant psychiatrist at South London and Maudsley NHS Foundation Trust, London, UK and co-chair of the Royal College of Psychiatrists’ (RCPsych's) Digital Special Interest Group, London, UK.
Ayesha Rahim
Affiliation:
Consultant psychiatrist and Chief Medical Information Officer, Surrey and Borders Partnership NHS Foundation Trust, Leatherhead, UK and Clinical Lead for Digital Mental Health at NHS England, UK.
Gabrielle Pendlebury
Affiliation:
Clinical Director of Psychiatric Services, Onebright, York, UK.
Hashim Reza
Affiliation:
Consultant psychiatrist at Oxleas NHS Foundation Trust, London, UK.
Rahul Bhattacharya
Affiliation:
Associate Clinical Director at East London NHS Foundation Trust, London, UK, Clinical Lead for New Models of Care at London Clinical Networks (NHS England), London, UK and an honorary senior clinical lecturer at Barts and the London School of Medicine and Dentistry, London, UK.
Asif Bachlani
Affiliation:
Consultant psychiatrist and Clinical Director for the Priory’s Acute and PICU Service Networks, the Hospital Medical Director for the Priory Hospital, Woking, UK, the RCPsych’s General Adult Faculty Finance Officer and a committee member of the RCPsych Digital Special Interest Group, London, UK.
Katherine Worlley
Affiliation:
Consultant psychiatrist and local college tutor at Broadmoor Hospital, Crowthorne, UK and electronic prescribing and medicines administration (ePMA) lead at West London NHS Trust, London, UK.
David Rigby
Affiliation:
Consultant psychiatrist for the Dunstable Community Mental Health Team, East London NHS Foundation Trust, Dunstable, UK and the co-chair of the RCPsych's Digital Special Interest Group, London, UK.
Jonathan Scott
Affiliation:
Consultant psychiatrist and Chief Clinical Information Officer, West London NHS Trust, London, UK.
Subodh Dave
Affiliation:
Consultant psychiatrist and Deputy Director of Undergraduate Medical Education at Derbyshire Healthcare Foundation Trust, Derby, UK, Professor of Psychiatry at the University of Bolton, UK and Dean of the RCPsych, London, UK.
*Correspondence: Derek Tracy. Email: [email protected]

Summary

Digital psychiatry has become increasingly important and understanding of certain aspects is essential for practising clinicians. This article discusses electronic patient records (EPRs), from their origins to current and future use, the growth and embedding of outcome measurements, the use of social media, and learning and research in virtual arenas.

Type
Article
Copyright
Copyright © The Author(s), 2022. Published by Cambridge University Press on behalf of the Royal College of Psychiatrists

LEARNING OBJECTIVES

After reading this article you will be able to:

  • recognise the problems emerging from electronic patient records and identify potential means of minimising them

  • appreciate the various benefits of greater collection and integrated use of clinical outcome measurements

  • recognise the opportunities and risks from the use of social media.

This is the second of two articles exploring aspects of digital psychiatry that, we argue, are essential for contemporary healthcare practice. The first article (Tracy 2022a) discussed online assessments and mobile health apps. This second article focuses on electronic patient records (EPRs), outcome measures and social media.

Electronic patient records (EPRs)

Origins, current limitations, future goals

Although we might occasionally curse our computers for not working, few of us who remember and worked with paper records would wish for their return. The frustration of doing an out-patient clinic but finding that somebody's notes were ‘tracked to the other hospital’ and therefore having to review a patient ‘blind’ is not yet forgotten. The ability to securely record information, accessible almost anywhere to appropriate staff, has been an enormous boon to healthcare, in terms of both safety and quality of decision-making.

However, the way in which mental health services in many countries, including the UK, have transitioned from paper records to electronic patient records (EPRs) means that we have not yet fully realised the benefits of digital working. Too often, EPRs have become little more than electronic filing cabinets, in which the relevant notes for a patient are hard to find among vast swathes of data. Too many clicks are often needed to reach the relevant page, and multiple entries about the same patient must be made in different parts of the care record. Much of the information held in EPRs is what is often called ‘unstructured data’ – a term that originated, in this context, within a specific database known as CRIS (Clinical Record Interactive Search) – or ‘free text’: information written freely, without, for example, tick boxes or drop-down menus. There is greater complexity to this than we can cover here, as one might distinguish ‘unstructured’, ‘semi-structured’ and ‘fully structured’ data. Although many clinicians find free text intuitively easier, to optimise the potential of EPRs we need to move towards generally more structured data, such as standardised templates. In discussing these challenges we recognise that experiences will vary between organisations and between EPR systems: at present RiO and SystmOne are the two most commonly used in National Health Service (NHS) mental health services, but other systems are in use and local configurations vary. The following is our experiential overview; the area of EPRs more generally is poorly evidenced in the scientific literature.
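To make the distinction above concrete, the following minimal Python sketch contrasts a free-text note with a structured, templated entry and shows why the latter can be searched and aggregated reliably. It is not drawn from any real EPR; the field names and example values are entirely hypothetical.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class SmokingStatus(Enum):
    """A constrained, drop-down-style field (hypothetical)."""
    NEVER = "never smoked"
    EX = "ex-smoker"
    CURRENT = "current smoker"

@dataclass
class StructuredReviewEntry:
    """A structured template: each field has a defined type and meaning,
    so entries can be queried, audited and aggregated without free-text parsing."""
    patient_id: str
    phq9_total: Optional[int]                 # symptom score, 0-27
    smoking_status: Optional[SmokingStatus]
    medication_reviewed: bool

# The same clinical information as free text: quick to write, hard to query.
free_text_entry = ("Reviewed in clinic. PHQ-9 today 14. Still smoking ~10/day. "
                   "Went through medication, no changes.")

structured_entry = StructuredReviewEntry(
    patient_id="example-001",
    phq9_total=14,
    smoking_status=SmokingStatus.CURRENT,
    medication_reviewed=True,
)

# Aggregation over structured entries is trivial...
entries = [structured_entry]
current_smokers = sum(e.smoking_status is SmokingStatus.CURRENT for e in entries)
print(f"Current smokers in this sample: {current_smokers}")
# ...whereas extracting the same facts from free text needs natural-language
# processing, which is what tools such as CRIS are built to do.
```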

Other barriers, unrelated to the EPR itself, include outdated hardware and devices, poor internet connectivity, gaps in training on how to use the EPR most effectively, and clinical processes and recording procedures that are based on paper notes rather than digital workflows. To confuse things further, the same EPR product can look and feel entirely different in different organisations, depending on how it has been configured locally. The good news is that there are practical steps organisations can take to minimise the burden of clinical record-keeping in a digital system, whether or not the problem is directly related to the EPR (Box 1).

BOX 1 Practical tips to enhance the usability of electronic patient records (EPRs)

Redesign your clinical workflow – Although it may feel convenient simply to replicate existing paper-based processes when moving to an electronic system, the change is an opportunity to transform not just the ‘what’ but the ‘how’. For example, could the sequence of making written notes in an out-patient clinic, then adding a progress note and then dictating a clinic letter be replaced by entering notes directly into the EPR and generating the clinic letter from that single entry?

Get involved in developing/improving your EPR – ‘Design thinking’ is predicated on the idea that the best products are designed with the end user in mind. Therefore it is vital that clinicians and administrative staff are fully involved in the development cycle from discovery (what do you need?) to delivery (what do you get?). Although this might not seem like a high priority for busy clinicians, without this engagement they risk ending up with products that do not fit around the tasks that they need to do. The first step might be to contact your organisation's chief clinical information officer, usually a senior clinician who works alongside the digital team to deliver digital transformation.

Invest the time to revisit your EPR training – International research conducted by KLAS Research, an organisation specialising in the evaluation of digital health solutions (predominantly EPRs), has demonstrated that a robust training programme to support the deployment of a new EPR system is the strongest predictor of overall clinician satisfaction with the system (Duda 2020). This step is often skimped on in the interests of minimising disruption to clinical work, but experience shows that doing so simply exacerbates the difficulties associated with large-scale change. Instead, consider revisiting training modules or rebooking a place on a training session: EPRs change over time, and you may discover new shortcuts that make your life easier. KLAS notes that shared ownership (i.e. digital leadership) and personalisation of EPRs are critical, and that these factors are one explanation for why different organisations get very different staff feedback on the same system (Duda 2020).

Check whether your device is up to date – Old laptops or desktops do not always perform well with the heavy demands of daily use. Of course, devices cannot be replaced every couple of years, but if yours (or the shared device you have access to) is looking particularly old, it may be worth contacting your IT service desk to see if a replacement is due.

Report connectivity problems on trust sites – Many of the ‘performance’ problems that people experience when using EPRs may in fact be attributable to connectivity blackspots. Not all of these problems are brought to the attention of the digital team, particularly in large shared spaces, as the assumption is that someone else has reported it. If you do notice a problem, logging the job formally with your IT service desk can flag up an issue that was not previously known about.

Consider what additional tools might help you – Several organisations have additional tools that might help reduce the documentation burden further. For example, speech-to-text solutions exist that allow you to dictate your clinical entries rather than type them. We can generally speak quicker than we type, and using tools to support this can be efficient for longer clinical entries.

To move from data collection for ‘data's sake’ to making data relevant to front-line clinicians and patients, there needs to be an increased focus on collecting clinically relevant data. This will become increasingly important for mental health organisations as they seek to demonstrate the value that they provide for their patients and local populations within the NHS's integrated care systems (ICSs): population care provided across such geographical ‘footprints’ will include acute secondary and tertiary care, mental health, primary care and social care. Although ICSs are predicated on better cooperation and cross-organisational treatment, services that can better evidence their outcomes are likely to be in stronger positions when it comes to securing funding. Without such routine data collection, the health inequalities facing our patients are likely only to worsen, not improve (see ‘Clinical outcome and experience data’ below).

At a systems level it is undoubtedly problematic that multiple EPR systems exist in the first place, and at a national level most NHS trusts’ systems do not adequately ‘talk’ to each other, to social care or to primary care. We recognise that this is not the universal experience: SystmOne, for example, when used by overlapping mental health and primary care services, can provide such sharing. It is ironic that some EPRs do have the functionality to share information across organisations but, for various contractual and cultural reasons, it has not been implemented: in other words, the interoperability is technically feasible, but so far suppliers have not appeared to prioritise it. This is beginning to change with the emergence of ‘shared care records’, which allow healthcare providers in primary and secondary care in one or more integrated care systems to share clinical information. There is still a long way to go in this space: unless shared information is copied and coded into the receiving organisation's clinical information system, it is not accessible to clinical decision support systems. The most sophisticated type of information sharing consists of mutually accessible ‘structured’ information that can be reliably coded by the machine with minimal human intervention. EPRs currently in use vary in their ability to capture coded data from clinical information in the care record, and in everyday practice it has been our experience that this often amounts to rather basic and limited coding of diagnoses and of other parameters determined by statutory reporting requirements, imposed through the data-sets used for measuring and monitoring activity in clinical services. Although these statutory data reports can provide insights into care needs and service delivery for populations at local and regional level, they offer far less in supporting the delivery of high-quality care to individual patients.

We need our EPRs to share information with a single click from one information system to another, so that a complete clinical picture is available in each care setting, avoiding time wasted gathering information that has already been provided or repeating interventions that have previously been ineffective. Mental health services could then start using the clinical decision support tools that have long been in routine use in primary care and many other specialty areas to monitor and manage patients according to National Institute for Health and Care Excellence (NICE) guidelines. This might include prompts regarding drug interactions or alerts suggesting baseline physical health investigations when a particular medication is prescribed. With improved gathering of structured or ‘coded’ data, such prompts would appear automatically. Better structured data can also become part of the care record and be appropriately analysed, for example for audit purposes, allowing a much richer understanding of an individual patient and of the service as a whole. Finally, and perhaps most excitingly, aggregating these data into a much wider pool allows analysis of populations. By ensuring that we collect data in as standardised a way as possible, we create the conditions to develop actionable insights with far greater implications for whole populations.
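As an illustration of the kind of rule such a decision support prompt might encode once the relevant data are coded, here is a minimal Python sketch. The drug-to-investigation mapping is purely illustrative (it is not a prescribing guideline), and the data structures are hypothetical rather than taken from any particular EPR; a real system would draw on maintained knowledge bases and locally agreed, NICE-aligned protocols.

```python
from dataclasses import dataclass, field

# Illustrative mapping from a newly prescribed drug to suggested baseline
# checks - for the sketch only, not clinical guidance.
BASELINE_CHECKS = {
    "lithium": ["U&E/eGFR", "thyroid function", "calcium", "weight", "ECG if indicated"],
    "clozapine": ["FBC", "weight", "lipids", "HbA1c", "ECG"],
}

@dataclass
class CodedRecord:
    """Minimal stand-in for coded (structured) data the EPR already holds."""
    prescriptions: list[str] = field(default_factory=list)
    completed_investigations: set[str] = field(default_factory=set)

def baseline_prompts(record: CodedRecord, new_drug: str) -> list[str]:
    """Return suggested checks not yet recorded for a newly prescribed drug."""
    required = BASELINE_CHECKS.get(new_drug.lower(), [])
    return [check for check in required if check not in record.completed_investigations]

record = CodedRecord(prescriptions=["sertraline"],
                     completed_investigations={"U&E/eGFR", "thyroid function"})
for prompt in baseline_prompts(record, "lithium"):
    print(f"Prompt: consider {prompt} at initiation of lithium")
```

The point of the sketch is that the prompt logic only works because the prescriptions and completed investigations are held as coded data; the same rule cannot be applied reliably to free-text entries.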

Risk of cyber-attacks

Advances in technology have brought risks. In 2021, following a massive cyber-attack, the Health Service Executive in Ireland (HSE.ie) secured injunctions from the High Court restraining any sharing, processing, selling or publishing of data stolen from its computer systems (Carolan 2021). Cyber risk management is the process of identifying, analysing, evaluating and addressing an organisation's cyber security threats. Here the NHS is similar to many other organisations: its data are hosted on virtual ‘clouds’ and held by outsourced companies, typically the EPR providers, who store them in various ways. There is guidance on how such information can be provided to service providers for commercial purposes, as well as to researchers in academia, unless a patient explicitly opts out, and a register is available of those who can access such information. Clearly, as in any organisation, these data are vulnerable to malevolent hacking.

Although there is no formally agreed or mandated approach, a sensible first step in any cyber risk management programme is a cyber risk assessment. This gives a snapshot of the threats that might compromise the organisation's cyber security and how severe they are. Based on the organisation's risk appetite, the programme then determines how to prioritise and respond to those risks. Although specific methodologies vary, a risk management programme typically follows the approach laid out in Box 2.

BOX 2 The authors' summary of a common approach to managing cybersecurity

Treat – Modify the likelihood and/or impact of the risk, typically by implementing security controls.

Tolerate – Make an active decision to retain the risk (for example, because it falls within the established risk acceptance criteria).

Terminate – Avoid the risk entirely by ending or completely changing the activity causing the risk.

Transfer – Share the risk with another party, usually by outsourcing or taking out insurance.

Since cyber risk management is a continual process, monitor your risks to make sure they are still acceptable, review your controls to make sure they are still fit for purpose and make changes as required. Remember that your risks are continually changing as the cyber threat landscape evolves and systems and activities change. An unanswered question on many minds, and one not unique to healthcare, is whether, as technologies progress, the NHS or other organisations will be able to adequately repel all such threats.
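The four responses in Box 2 can be pictured as entries in a simple risk register. The Python sketch below is our illustrative rendering only: the 1–5 likelihood and impact scales, the scores and the example risks are hypothetical, not a mandated NHS methodology.

```python
from dataclasses import dataclass
from enum import Enum

class Response(Enum):
    TREAT = "treat"          # add controls to reduce likelihood and/or impact
    TOLERATE = "tolerate"    # accept the risk within the agreed risk appetite
    TERMINATE = "terminate"  # stop or change the activity creating the risk
    TRANSFER = "transfer"    # outsource the risk or take out insurance

@dataclass
class Risk:
    description: str
    likelihood: int   # 1 (rare) to 5 (almost certain) - illustrative scale
    impact: int       # 1 (negligible) to 5 (catastrophic)
    response: Response

    @property
    def score(self) -> int:
        """Simple likelihood x impact score used here to prioritise review."""
        return self.likelihood * self.impact

register = [
    Risk("Phishing leading to credential theft", 4, 4, Response.TREAT),
    Risk("Unpatchable legacy device on the network", 3, 5, Response.TERMINATE),
    Risk("Ransomware affecting the hosted EPR supplier", 2, 5, Response.TRANSFER),
    Risk("Brief loss of guest Wi-Fi", 3, 1, Response.TOLERATE),
]

# Review the highest-scoring risks first; the register itself should be
# revisited regularly as the threat landscape and services change.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"[{risk.score:2d}] {risk.description} -> {risk.response.value}")
```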

Clinical outcome and experience data

Although there are many articulations of value in healthcare, the underlying concept is of a balance between health outcomes and the cost of services (Jabbal 2018). The best value in healthcare is obtained when the best outcomes are achieved at the lowest cost (Porter 2010). We recognise that value-based commissioning is an umbrella concept, with examples from the USA, The Netherlands and the UK. In the UK, NHS reform proposals suggest a move away from a system based on payment by results or activity-based payment towards more capitated payment structures (a fixed amount, at least in part, sometimes referred to as a ‘block contract’) based on populations. This offers an opportunity for value-based solutions to form part of a strategy to improve population health outcomes. Historically, NHS secondary care mental health services have focused on the activities undertaken (the number of patients seen) rather than on patient recovery or outcomes (McGough 2021). Following the publication of the Five Year Forward View for Mental Health (FYFVMH) (Mental Health Taskforce 2016) there is now an increased focus on the quality and effectiveness of care. The FYFVMH sets out a vision for evidence-based treatment pathways, with transparency regarding quality and outcomes. It recommends a framework approach that includes outcomes that:

  • are clinically relevant and add value for clinicians

  • reflect what people and carers who use the service want

  • are culturally appropriate, reliable and aligned to the system-wide objectives

  • are established and are known to be reliable and valid.

It is expected that clinical services will implement collection of mental health outcome measures in routine day-to-day practice with the aim of achieving a range of benefits, including:

  • helping clinicians working with people with mental illnesses to achieve their patients’ recovery goals and improve their well-being by using these quality indicators

  • providing valuable feedback on patients’ recovery and progress to clinicians within the team at a patient and service level

  • supporting team and individual clinician development using systematic feedback of these quality and outcome measures, thereby promoting reflective practice for the clinicians within their team

  • providing leadership roles for patients who can co-develop services by being empowered to self-monitor and ensure services use their feedback and outcomes to improve quality

  • supporting mental health services to better use quality improvement methods and to be transparent about their outcomes and quality measures, thus enabling services to benchmark themselves against other similar services.

This direction of travel has continued with the NHS Long Term Plan (NHS England 2019), further linking outcomes and resources in the context of population mental health and the ICSs mentioned above.

The Institute for Healthcare Improvement offers a helpful framework for using data gathered by organisations: such data can be reported to commissioners or central monitoring systems to allow benchmarking, payment or resource allocation, or safety and quality assurance (Institute for Healthcare Improvement 2022). To measure value we rely on digitalised data-sets, often stored in EPRs. In mental healthcare we often depend on questionnaires to assess the impact of services or to measure outcomes, typically patient- and clinician-reported outcome and experience measures (PROMs, PREMs and CROMs). Researchers at Columbia University developed the nine-item Patient Health Questionnaire (PHQ-9) to assess depression and the seven-item Generalised Anxiety Disorder (GAD-7) scale to assess generalised anxiety disorder (Kroenke 2001; Spitzer 2006); both are examples of diagnosis-specific, symptom-focused PROMs (a minimal scoring sketch follows the list below). The NHS's Improving Access to Psychological Therapies (IAPT) programme, which began in 2008, offers brief psychological therapy in England for common mental health conditions such as adult anxiety disorders and depression, and uses GAD-7 and PHQ-9 scores. Although individual outcome measures inform treatment goals, anonymised pooled outcome data captured digitally at a central level allow performance management of services and payments linked to improvement in patient questionnaire scores over a course of therapy. Standardisation of the data gathered allows national benchmarking and offers various advantages. For example:

  • demographic information on statutorily protected characteristics and socioeconomic status can be used to monitor and actively address any barriers to service provision, thereby allowing monitoring of how equitable the uptake of services is

  • as the intervention is a therapy linked to a specific complaint or diagnosis, delivery of evidence-based care, for example NICE-recommended care, can also be monitored and compared.
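As the scoring sketch promised above, the following minimal Python example shows how a diagnosis-specific, symptom-focused PROM such as the PHQ-9 is scored and tracked over a course of therapy: the nine items (each rated 0–3) are summed to a total of 0–27 and mapped to the conventional severity bands (Kroenke 2001); GAD-7 is scored analogously over seven items (total 0–21). The patient scores here are invented for illustration.

```python
def phq9_total(item_scores: list[int]) -> int:
    """Sum the nine PHQ-9 items (each scored 0-3), giving a total of 0-27."""
    assert len(item_scores) == 9 and all(0 <= s <= 3 for s in item_scores)
    return sum(item_scores)

def phq9_severity(total: int) -> str:
    """Conventional PHQ-9 severity bands."""
    if total <= 4:
        return "minimal"
    if total <= 9:
        return "mild"
    if total <= 14:
        return "moderate"
    if total <= 19:
        return "moderately severe"
    return "severe"

# Hypothetical scores for one patient at assessment and at discharge.
baseline = phq9_total([2, 2, 3, 2, 1, 2, 2, 1, 1])    # total 16
discharge = phq9_total([1, 1, 1, 0, 1, 0, 1, 0, 0])   # total 5

print(f"Baseline: {baseline} ({phq9_severity(baseline)})")
print(f"Discharge: {discharge} ({phq9_severity(discharge)})")
print(f"Improvement over the course of therapy: {baseline - discharge} points")
```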

However, routine outcome data gathering has not always met with similar success across the board. In the 1990s the Royal College of Psychiatrists (RCPsych) worked on developing a generic clinician-rated outcome measure, the Health of the Nation Outcome Scales (HoNOS), to measure the health and social functioning of people with severe mental illness (Wing 1998). HoNOS was subsequently linked with further questions to group patients into ‘mental health clusters’ in preparation for payment by activity or result; however, after a decade of testing, these efforts are under further review (NHS England 2020). Moreover, it remains the case that psychiatrists have struggled to adapt to routine outcome data gathering (Gilbody 2002; Zimmerman 2008). It is undoubtedly true that in longer-term chronic relapsing–remitting conditions there are challenges in identifying the start and end of an intervention or condition, and therefore outcome data gathered at two time points might not convey a consistent narrative. This is potentially complicated by seemingly myopic and non-aligned service reporting requirements (for example quarterly or annual reporting) when the journey to recovery sometimes takes years.

In the absence of clear knowledge, the first step might be to steer behaviour towards gathering outcome measures routinely: routine outcome data gathered over time would provide intelligence on what can be considered a ‘good’ outcome. To enable routine data gathering one needs to be mindful of the clinical burden it imposes. Electronic patient records and data systems aimed at outcome data need to be devised with this in mind, for example by minimising the number of clicks, using touch screens to capture data such as visual analogue scales, and training and investing in the digital literacy of the workforce. Interoperability between different EPRs can offer further opportunity in future, moving from service-based outcome data collection to outcome data that are wrapped around the person and travel with them across different services. Clever adaptations of outcome scales that are built into clinical work, for example the co-production of care plans, have been used to increase outcome data gathering. The DIALOG scale has been used as a routine patient outcome and experience measure in a mental health trust in London since 2017 (Mosler 2020). DIALOG is a generic quality-of-life measure; DIALOG+ is derived from it, taking the scores to inform a care plan that is co-produced with the patient. Once embedded into routine clinical practice, pooled anonymised data allowed analyses demonstrating improvement in domains of quality of life (PROM) and patient experience (PREM) over a 3-year period (Mosler 2020).

Patient-facing portals that capture patient-reported outcomes and experience and interface with EPRs offer great opportunity for routine data gathering while minimising clinical burden. Examples in use in the UK include Patients Know Best, which has been operating since 2008 (patientsknowbest.com), and Beth, launched by South London and Maudsley NHS Foundation Trust in 2020, during the COVID-19 pandemic (slam.nhs.uk/beth).

Digital literacy initiatives for both the workforce and patients are therefore critical to promote engagement and to minimise ‘them and us’ attitudes between staff and management. Visibility of outcome measures can inform and steer patient care, and team-level data and feedback on the work done with patients who have shown improvement can do much for staff morale and well-being. Using a patient-focused holistic outcome measure may steer a patient-led conversation, and the mere fact of its use may improve quality of life (Bullinger 2014), something demonstrated with the use of DIALOG+ in people with severe mental illness (Priebe 2015). Box 3 gives examples of sources of patient and service data and information for clinicians and others.

BOX 3 Some sources of patient and service data and information for clinicians in the UK

Your trust/organisation:

  • team manager

  • business manager

Your borough/local authority:

Regional:

National:

Social media

Social media offer professionals both opportunities and challenges (Tracy 2022b). Doctors’ use of social media can benefit patient care by engaging people in public health and policy discussions, establishing national and international professional networks and facilitating patients’ access to information about health and services (General Medical Council 2013). Platforms such as Twitter are widely used by psychiatrists, professionals in mental health and neuroscience, and patients and carers. They are a good means of ‘following’, being informed by and having conversations with all these groups, and are a common way for academics, learned journals and institutions to disseminate new knowledge and events (Harrison 2019). There are also online forums for patients to share stories and offer each other peer support, and text-based services for individuals with mental health problems.

It can be argued that social media use carries more risk in our personal than our professional lives, as we tend to let our guard down, a tendency sometimes called the ‘online disinhibition effect’. Clinicians should remember that patients and colleagues may see postings from their personal accounts, and indeed some will actively search for their healthcare professionals online. Before posting, it is worth considering the following (Pendlebury 2021):

  • Would I say this out loud to a group of patients/peers (or my grandmother)?

  • Am I about to make an offensive comment about another person or colleague?

  • Am I about to make a comment that could be perceived as prejudiced against a person's ethnicity, sexuality, gender, religion or other protected characteristic?

  • Would what I am about to say put the reputation of my profession at risk?

Box 4 gives a summary of advice for doctors on social media use from the General Medical Council.

BOX 4 A summary of General Medical Council guidelines on doctors’ use of social media

  • If you are using social media in your professional capacity be clear about your role, title and name

  • Remember you are being watched and you are representing your profession; you should therefore behave in a way that does not bring the profession into disrepute

  • Maintain patient confidentiality at all times

  • It is fine to contribute your expertise, insights and experience but avoid giving definitive advice

  • Make sure you check the facts before posting and, wherever possible, quote your sources

  • Remain polite and respectful to all

  • When interacting with or commenting about individuals or organisations online, you should be aware that postings online are subject to the same laws of copyright and defamation as written or verbal communications, whether they are made in a personal or professional capacity

  • Be honest about any conflicts of interest or financial dealings

    (General Medical Council 2013)

Many value the ‘democratisation’ of the online space, where all voices get to be heard ‘equally’, but there can also be challenges. As in all parts of life, differences of opinion are common, and these can sometimes be particularly marked in mental health. Consider, for example, the various viewpoints on electroconvulsive therapy (ECT) and, further, how the algorithmic, interactive nature of social media platforms risks ‘group think’, reinforcing rather than nuancing or challenging opinions (Tracy 2022b). Social media are commonly used as forums for political debate; like the rest of society, doctors will have their opinions on politics, government and policy, especially as they relate to healthcare. There is no bar on being politically involved, and indeed some will strongly encourage doctors, given their roles and experiences, to be actively so. However, one should be cognisant of how this might alienate patients or peers who hold considered but contrary views. The short, ‘telegraphed’ nature of Twitter can make nuanced conversation more difficult and risks facilitating arguments rather than discussion or polite debate, as well as echo chambers in which individuals reinforce each other's views. At its worst it can lead to frank bullying and harassment, and what is known as ‘trolling’. Some patients have said that they do not believe that the online space is truly ‘equal’ and that professionals retain, by virtue of their position, more ‘power’ in what they say (Harrison 2019).

The effects of technology, in particular social media, on young people have attracted much media attention. In January 2020 the RCPsych released a report on this topic (Royal College of Psychiatrists 2020), calling for regulators to ‘urgently review and establish a protocol for the sharing of data from social media companies with universities for research into benefits and harms on children and young people’. It has been suggested that questions about technology use, including social media, should become a core part of biopsychosocial assessments and formulations, particularly when assessing children and adolescents.

Learning and research in virtual arenas

Most of us will now be familiar with virtual learning: attending talks and conferences online. There are gains to this, not least less travel and associated expense, which often brings the further benefit of lower registration fees. Indeed, some events now offer the opportunity to join for single talks only, and virtual attendance opens up a global network of events, many of which are free.

There are parallel opportunities in organising events. The 2020 National Psychiatry Summer School (NPSS) was, at short notice during the early days of the pandemic, changed to a virtual event, one of the first of its kind. This opened many opportunities: students are particularly affected by event costs and travel and, by being virtual, the 2020 NPSS turned into a global event, with over 300 medical students from across the world attending, several times more than at previous in-person events. The organisers were also able to tap into a wider group of speakers, who could address the conference from their own home or workplace (Vinchenzo 2022). Noteworthy in our opinion is that this event was arranged and run by two medical students, with some support from the RCPsych, and they have described their learning for the benefit of others (Nabavi 2020). No doubt many of us nevertheless miss the face-to-face aspects of such events and the networking, which is harder to replicate virtually. There are also some as yet unresolved information governance challenges, such as recording and sharing content, particularly if it contains patient-related information. It seems likely that conferences will adopt hybrid models of attendance.

Other new learning opportunities include those via external organisations and bodies. Academic journals and other reputable bodies, such as the RCPsych and Royal Society of Medicine, host regular events, and the BJPsych offers a free ‘virtual journal club’ to trainees, supporting them by bringing in paper authors and relevant experts to these events. There has been a significant growth in blogging as a means of communicating online, including in the mental health sphere. In the UK the influential Mental Elf website is probably the best example (www.nationalelfservice.net). This can also be an opportunity, particularly for trainees, to build up their own communication and writing skills by contributing via blogs.

As mentioned, many events are now openly available across the globe, although their suitability for continuing professional development (CPD) might need to be confirmed in advance. Readers should also remain mindful that blogs are not peer reviewed.

Conclusions

The COVID-19 pandemic accelerated and emphasised changes in the use of digital technology that had in any case been growing. These changes are here to stay, and post-pandemic we are entering a hybrid future. There are clearly opportunities and challenges here, and psychiatrists practising in 2022 and beyond need to be cognisant of these: embracing the good and being mindful of the risks. National and specialty training standards are catching up, sometimes lagging behind the technology itself, and complex ethical and practical issues, and questions about the evidence base, remain. Although the various technologies will affect different clinicians and patients in different ways, we would argue that digital competency is a core skill for all psychiatrists, even if these are new tools with which they did not train. We have previously noted the lack of curriculum-specific requirements related to digital psychiatry in the UK (Dave 2021). This is beginning to change, and we note the positive development of the RCPsych's Digital Psychiatry Special Interest Group (DPSIG).

Author contributions

All authors meet all four ICMJE criteria for authorship and approve the submitted manuscript. D.K.T. conceptualised the article. All authors were involved in writing the article and approved the final submission.

Funding

This research received no specific grant from any funding agency, commercial or not-for-profit sectors.

Declaration of interest

None.

MCQs

Select the single best option for each question stem

  1 Which of the following statements is not considered one of the challenges with contemporary electronic patient records?

    a EPRs often will not ‘talk’ to each other across different organisations

    b Information is often entered by clinicians in ways that make it difficult to extract

    c Emerging integrated care systems (ICSs) are limiting information flow between organisations

    d There is inadequate follow-on training for staff on EPR use

    e Older computer hardware limits performance.

  2 Regarding potential cyber-attacks, which of the following is not part of a standard response management plan?

    a Terminate: avoid the risk by ending the causing activity

    b Treat: modify the likelihood and impact by implementing security controls

    c Transfer: share the risk by outsourcing or taking out insurance

    d Traffic: limit email correspondence outside of the host organisation

    e Tolerate: actively retain the risk if it is deemed to fall within accepted risk criteria.

  3 Which of the following is not a proposed benefit of collecting outcome measurements, according to the Five Year Forward View for Mental Health?

    a It allows underperforming clinicians to be identified

    b It helps individuals achieve their recovery goals through feedback

    c It helps services to improve via benchmarking and quality improvement

    d It provides leadership for patients to help more empowered co-design of services

    e It promotes reflective practice within teams in considering their performance.

  4 Which of the following is part of General Medical Council guidance on doctors’ use of social media?

    a Doctors need not disclose their name when using social media in a professional capacity

    b Describing clinical vignettes is fine if no patient-identifiable details are used

    c Online postings on international platforms such as Twitter are not covered by UK defamation laws

    d Doctors should not post about their expertise, insights or experience

    e Doctors should behave in a way that does not bring the profession into disrepute.

  5 Which of the following is not considered a limiting factor in the growth in use of outcome measures in mental health?

    a Clinician perceptions that mental illness is difficult to capture using ‘simple’ scales

    b A lack of validity and reliability in the main scales in use

    c Historical lack of direct utility for clinicians collecting the information

    d A general use more for service funding than service improvement

    e A general lack of integration with electronic patient records.

MCQ answers

1 c 2 d 3 a 4 e 5 b

References

Bullinger, M, Quitmann, J (2014) Quality of life as patient-reported outcomes: principles of assessment. Dialogues in Clinical Neuroscience, 16: 137–45.
Carolan, M (2021) HSE secures injunctions restraining sharing of hacked data. The Irish Times, 20 May.
Dave, S, Abraham, S, Ramkisson, R, et al (2021) Digital psychiatry and COVID-19: the Big Bang effect for the NHS? BJPsych Bulletin, 45: 259–63.
Duda, C (2020) Impact report: EHR satisfaction for recent go-lives. KLAS Research (https://klasresearch.com/article/impact-report-ehr-satisfaction-for-recent-go-lives/727). Accessed 3 Aug 2022.
General Medical Council (2013) Doctors’ Use of Social Media. GMC.
Gilbody, SM, House, AO, Sheldon, TA (2002) Outcomes research in mental health: systematic review. British Journal of Psychiatry, 181: 8–16.
Harrison, JR, Hayes, JF, Woollard, J, et al (2019) #BJPsych and social media – likes, followers and leading? British Journal of Psychiatry, 214: 245–7.
Institute for Healthcare Improvement (2022) Science of Improvement: Establishing Measures (https://www.ihi.org/resources/Pages/HowtoImprove/ScienceofImprovementEstablishingMeasures.aspx). Accessed 3 Aug 2022.
Jabbal, J, Lewis, M (2018) Approaches to Better Value in the NHS: Improving Quality and Cost. King's Fund.
Kroenke, K, Spitzer, RL, Williams, JB (2001) The PHQ-9: validity of a brief depression severity measure. Journal of General Internal Medicine, 16: 606–13.
McGough, R (2021) Finding value: emerging value-based trends in healthcare. NHS Confederation News and Comment, 8 Oct (https://www.nhsconfed.org/articles/finding-value-emerging-value-based-trends-healthcare).
Mental Health Taskforce (2016) The Five Year Forward View for Mental Health. Mental Health Taskforce.
Mosler, F, Priebe, S, Bird, V (2020) Routine measurement of satisfaction with life and treatment aspects in mental health patients – the DIALOG scale in East London. BMC Health Services Research, 20: 1020.
Nabavi, N, Vinchenzo, P, Tracy, DK (2020) “You're on mute:” how to organise a virtual medical conference. BMJ, 371: m4942.
NHS England (2019) The NHS Long Term Plan. NHS England.
NHS England, NHS Improvement (2020) Mental Health Currency Review: A Proposed New Approach to Counting Mental Health Activity. NHS England, NHS Improvement.
Pendlebury, G, Simsek, C (2021) Social media and messaging apps – medico-legal advice for GPs. GPonline, 24 May.
Porter, ME (2010) What is value in health care? New England Journal of Medicine, 363: 2477–81.
Priebe, S, Kelley, L, Omer, S, et al (2015) The effectiveness of a patient-centred assessment with a solution-focused approach (DIALOG+) for patients with psychosis: a pragmatic cluster-randomised controlled trial in community care. Psychotherapy and Psychosomatics, 84: 304–13.
Royal College of Psychiatrists (2020) Technology Use and the Mental Health of Children and Young People (College Report CR225). RCPsych.
Spitzer, RL, Kroenke, K, Williams, JB, et al (2006) A brief measure for assessing generalized anxiety disorder: the GAD-7. Archives of Internal Medicine, 166: 1092–7.
Tracy, D, Gadelrab, R, Rahim, A, Pendlebury, G, et al (2022a) Digital literacy in contemporary mental healthcare: online assessments and mobile health apps. BJPsych Advances [Epub ahead of print] 7 Sep. Available from: https://doi.org/10.1192/bja.2022.60.
Tracy, DK, Vinchenzo, P, Nabavi, N (2022b) Use of technology and social media. In Teaching Psychiatry to Undergraduates (eds Langan Martin, J, Hughes, P): 138–43. Cambridge University Press.
Vinchenzo, P, Nabavi, N, Tracy, DK (2022) ‘Choose psychiatry’ goes virtual: experiences and learning from the online 2020 National Psychiatry Summer School. BJPsych Bulletin, 46: 181–7.
Wing, JK, Beevor, AS, Curtis, RH, et al (1998) Health of the Nation Outcome Scales (HoNOS): research and development. British Journal of Psychiatry, 172: 11–8.
Zimmerman, M, McGlinchey, JB (2008) Why don't psychiatrists use scales to measure outcome when treating depressed patients? Journal of Clinical Psychiatry, 69: 1916–9.