Although there is widespread excitement about the creative successes and new opportunities resulting from recent transformative advances in artificial intelligence (AI), one consequence is increasing patient exposure to medical misinformation. We now live in an era of synthetic media. Text, images, audio and video can be created or altered by generative AI models based on the data used to train them. The commercial use of automated content produced by generative AI models, including large language models (LLMs) such as ChatGPT and GPT-3 as well as image generation models, is expanding rapidly. Private industry, not academia, is dominating the development of this new AI technology.[1] The potential business applications of generative AI are wide-ranging: creating marketing and sales copy, product guides and social media posts; sales support chatbots for customers; software development; and human resources support. But generative AI models such as ChatGPT can be unreliable, making errors of both fact and reasoning that can spread on an unprecedented scale.[2] The general public can easily obtain incorrect information from generative AI on any topic, including medicine and psychiatry. The spread of misinformation created by generative AI can be accelerated by uncritical acceptance of its accuracy. Medical misinformation carries serious potential harms for individual care as well as public health. Psychiatrists need to be aware of the rapid spread of misinformation online.
Introduction to generative AI
The focus of traditional AI is on predictive models that perform a specific task, such as estimating a number, classifying data or selecting between a set of options. In contrast, the focus of generative AI is to create original content. For a given input, rather than returning a single correct answer based on the model's decision boundaries, generative AI models produce text, audio and visual outputs that can easily be mistaken for the work of human authors.
Generative AI models are based on large neural networks that are trained using an immense amount of raw data.[3] Three major factors have contributed to the recent advances in generative models: the explosion of training data now available on the internet, improvements in training algorithms and increases in the computing power available for training.[3] For example, GPT-3 was trained using an estimated 45 terabytes of text data, or about 1 million feet of bookshelf space.[4] The training process broke the text into pieces of words called tokens and produced 175 billion parameters that generate new text by statistically identifying the most probable next token in a sequence of tokens.[5] The newer GPT-4 is a multimodal LLM that responds to both text and images.
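The next-token mechanism can be illustrated with a deliberately simplified sketch, which is not drawn from the article or from any real GPT implementation: a toy bigram model that counts which word most often follows each word in a tiny corpus and then greedily extends a sequence. Real LLMs use deep neural networks, subword tokens and billions of parameters, but the underlying idea of statistically predicting the most probable next token is the same.

```python
# Toy illustration of next-token prediction: a bigram model that picks the
# statistically most probable next word. Real LLMs such as GPT-3 use deep
# neural networks with billions of parameters and subword tokens, but the
# core idea of predicting the next token from preceding context is the same.
from collections import Counter, defaultdict

corpus = (
    "generative models predict the next token . "
    "generative models create new text . "
    "the next token is chosen by probability ."
)

tokens = corpus.split()  # crude whitespace 'tokenisation' for illustration only

# Count how often each token follows each preceding token.
next_counts = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    next_counts[prev][nxt] += 1

def generate(start: str, length: int = 6) -> str:
    """Greedily extend a sequence by repeatedly taking the most probable next token."""
    sequence = [start]
    for _ in range(length):
        candidates = next_counts.get(sequence[-1])
        if not candidates:
            break
        sequence.append(candidates.most_common(1)[0][0])
    return " ".join(sequence)

print(generate("generative"))  # e.g. 'generative models predict the next token .'
```

The sketch also hints at why such systems can mislead: the output is fluent because it follows the statistics of the training text, not because the model has verified any facts.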
Generative AI can create the illusion of intelligence. Although at times the output of generative AI models can seem astonishingly human-like, the models do not understand the meaning of words and frequently make errors of reasoning and fact.[2,5] The statistical patterns determine the word sequences without any understanding of their meaning or of context in the real world.[5] Researchers in the generative AI field often use the word ‘hallucination' to describe output generated by an LLM that is nonsensical, not factual, unfaithful to the underlying content, misleading, or partially or totally incorrect. The many types of error from generative AI models include factual errors, inappropriate or dangerous advice, nonsense, fabricated sources and arithmetical errors. Other issues include outdated responses reflecting the cut-off date of the LLM's training data, and inconsistent answers to repeated versions of the same question. One example of inappropriate or dangerous advice is a chatbot recommending calorie restriction and dieting after being told the user has an eating disorder.[6]
The output of generative AI models may contain toxic language, including hate speech, insults, profanity and threats, despite some efforts at filtering. The fundamental problem is the prevalence of biases related to race/ethnicity, gender and disability status in the internet data used to train generative AI models. Although human feedback is being used to score responses and improve the safety of generative AI models, biases remain. A further concern is that the output of generative AI models may contain manipulative language, since internet data also contain a vast amount of manipulative content.
Attitudes to generative AI
In addition to widespread commercial expansion, generative AI, and ChatGPT in particular, is extremely popular with the general public. AI products, including generative AI, are routinely anthropomorphised, that is, described and characterised as having human traits, by the general public, the media and AI researchers. It is easy for the general public to anthropomorphise LLMs, given the simplicity of conversing with them and their authoritative-sounding responses. The media routinely describe LLMs using words suggestive of human intelligence, such as ‘thinks', ‘believes' and ‘understands'. These portrayals generate public interest and trust, but also downplay the limitations of LLMs, which statistically predict word sequences based on patterns learned from the training data. Researchers also anthropomorphise generative AI, referring to undesirable LLM text errors as ‘hallucinations'. Because the general public associate hallucinations with unreal human sensory perceptions, this word may imply a false equivalence between LLMs and the human mind.
Incorrect output from generative AI models often seems plausible, especially to those unfamiliar with the topic. A major problem with generative AI is that people who do not know the correct answer to a question cannot tell when an answer is wrong.[7] Human intelligence is needed to evaluate the accuracy of generative AI output.[7] Although generative AI products are improving, so is their ability to create outputs that sound convincing but are incorrect.[7] Many people do not realise how often generative AI models are wrong, and are unaware that unless they are experts in the field they must carefully check the answers to their questions, however convincing the text sounds.
Intentional spread of misinformation
Generative AI models enable the automation and rapid dissemination of intentional misinformation campaigns.[3] LLM products can automate the intentional creation and spread of misinformation on an extraordinary scale.[2,3] Because it does not rely on human labour, automated generation drives down the cost of creating and disseminating misinformation. Misinformation created by generative AI models may be better written and more compelling than that produced by human propagandists. The spread of online misinformation in all areas of medicine is particularly dangerous.
In addition to knowledge of the subject area, an individual's understanding of technology and their online habits will affect their acceptance and spreading of misinformation. People may be in the habit of sharing news on social media or may be overly accepting of online claims. Some people with mental illness may be especially vulnerable to online misinformation. Generative AI products will further increase the volume of information shared, including on medical topics. The rise of generative AI underlines the need for, and importance of, increased digital literacy training for the general public from validated sources.
Unique ethical issues
In addition to accuracy, reliability, bias and toxicity, there are many unsettled ethical and legal issues related to generative AI. There are privacy issues related to the collection and use of personal and proprietary data to train models without permission or compensation. Legal issues include plagiarism, copyright infringement and responsibility for errors and false accusations in generative AI output.
Conclusions
The use of generative AI products in commerce, in healthcare and by the general public is growing rapidly. Alongside beneficial uses, there are serious potential negative impacts from AI-generated and widely spread misinformation. The misinformation created by generative AI about mental illness may include factual errors, nonsense, fabricated sources and dangerous advice. Measures to mitigate the dangers of misinformation from generative AI need to be explored. Psychiatrists should realise that patients may be obtaining misinformation from generative AI about medicine and many other topics that affect their lives, and may be making decisions based on those responses.
Data availability
Data availability is not applicable to this article as no new data were created or analysed in this study.
Author contributions
S.M. and T.G. wrote the initial draft. All authors reviewed and approved the final manuscript.
Funding
This work received no specific grant from any funding agency, commercial or not-for-profit sectors.
Declaration of interest
J.R.G., Director of the NIHR Oxford Health Biomedical Research Centre, is a member of the BJPsych editorial board and did not take part in the review or decision-making process of this paper.