
Ways to make artificial intelligence work for healthcare professionals: correspondence

Published online by Cambridge University Press:  04 June 2024

Hinpetch Daungsupawong*
Affiliation:
Private Academic Consultant, Phonhong, Vientiane Province, Lao People's Democratic Republic
Viroj Wiwanitkit
Affiliation:
Department of Research Analytics, Saveetha Dental College and Hospitals, Saveetha Institute of Medical and Technical Sciences, Saveetha University, Kanchipuram, Tamil Nadu, India
*
Corresponding author: Hinpetch Daungsupawong; Email: [email protected]

Type: Letter to the Editor
Creative Commons: CC-BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of The Society for Healthcare Epidemiology of America

Dear Editor, we write to discuss the publication "All aboard the ChatGPT steamroller: Top 10 ways to make artificial intelligence work for healthcare professionals."[1] Non has already discussed several of the limitations identified. The purpose of this letter is to highlight some additional limitations of large language models (LLMs) that the original author did not mention. While ChatGPT integration in medicine offers a number of possible advantages, there are also disadvantages and issues that need to be resolved. Although ChatGPT is a language model that has been extensively trained on data, it may lack the medical background or context required to deliver accurate and trustworthy results. Medical practitioners depend on evidence-based practice; there is therefore a risk that ChatGPT will give inaccurate or misleading information, which could result in medical errors.[2] Because ChatGPT relies on its training data to function, it also raises ethical concerns about privacy and bias. If it is not built on a broad and representative dataset, the artificial intelligence (AI) chatbot can unintentionally reinforce prejudice or discrimination in healthcare encounters. Furthermore, the protection of patient data must be a top concern, and stringent privacy regulations must be followed when using AI systems.[3]

To guarantee improved patient outcomes, it is critical to strike a balance between technology and interpersonal communication. AI chatbots such as ChatGPT should in future be subject to human oversight by medical practitioners. This would entail a human-in-the-loop system in which medical professionals review and curate the chatbot's outputs to guarantee correct and trustworthy responses.[4]
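The human-in-the-loop workflow described above can be sketched as a simple gating pattern: a chatbot produces only a draft, a clinician must review (and may edit) it, and nothing reaches the patient without approval. This is an illustrative sketch only; the function and field names (`chatbot_draft`, `clinician_review`, `release_to_patient`, `DraftAnswer`) are hypothetical, and the chatbot call is a placeholder rather than any real LLM API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DraftAnswer:
    """An LLM-generated draft that is unapproved by default."""
    text: str
    approved: bool = False

def chatbot_draft(question: str) -> DraftAnswer:
    # Placeholder for an LLM call; a real system would query a model API here.
    return DraftAnswer(text=f"Draft response to: {question}")

def clinician_review(draft: DraftAnswer, approve: bool,
                     edits: Optional[str] = None) -> DraftAnswer:
    # A medical professional inspects the draft, optionally rewrites it,
    # and records the approval decision.
    if edits is not None:
        draft.text = edits
    draft.approved = approve
    return draft

def release_to_patient(draft: DraftAnswer) -> str:
    # The gate: unapproved drafts can never reach the patient.
    if not draft.approved:
        raise PermissionError("Response withheld: pending clinician approval")
    return draft.text
```

The design choice is that approval is an explicit state on the draft rather than a convention, so the release step can enforce oversight mechanically instead of relying on user discipline.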

The chatbot's capacity to provide precise medical advice and expertise can be improved by creating dedicated AI models that have undergone thorough training on clinical guidelines, medical literature, and expert viewpoints. Training a model solely on healthcare-related data can enhance its domain-specific competency. AI chatbots in the medical field should also be built to learn from interactions and to take patient and professional feedback into account. In addition to the well-known general-purpose LLMs, numerous healthcare-focused LLMs are in development that have received little attention, such as ClinicalBERT and Med-PaLM 2.[5,6] ClinicalBERT was recently presented to pre-train contextualized word representation models using bidirectional transformers, significantly improving the accuracy of several natural language processing tasks.[7] Med-PaLM 2 was developed specifically for biomedical research, such as assessing gene-phenotype connections and generating fresh hypotheses, which can aid in genetic discoveries.[6]

Beyond general healthcare, specialized LLMs for infectious diseases and antibiotic stewardship are intriguing. Their potential application in clinical consulting has been emphasized,[8] raising some concerns about the direction cognitive specialties are taking. Nevertheless, current problems with LLMs make safe clinical deployment in specialist consultations impossible: a tendency to recapitulate biases, frequent confabulations, a lack of the contextual awareness essential for complex diagnostic and treatment plans, and opaque, unexplainable training data and methods.[8] Through an iterative process, such models can be improved over time so that their applicability and dependability in real medical situations can be assured. Ultimately, it is critical to remember that the user of an artificial intelligence system decides whether or not to adhere to a just and ethical standard.[4]

Data availability statement

No new data were generated for this correspondence.

Acknowledgements

None.

Author contribution

HD (50%): ideas, writing, analysis, approval.

VW (50%): ideas, supervision, approval.

Funding

This article received no funding, and the authors request a waiver of any journal charges.

Competing interests

Authors declare no competing interests.

Ethical standard

Not applicable.

Consent for publication

Both authors agree to publication.

References

1. Non LR. All aboard the ChatGPT steamroller: top 10 ways to make artificial intelligence work for healthcare professionals. Antimicrob Steward Healthc Epidemiol 2023;3:e243. https://doi.org/10.1017/ash.2023.512

2. Sallam M. ChatGPT utility in healthcare education, research, and practice: systematic review on the promising perspectives and valid concerns. Healthcare (Basel) 2023;11:887.

3. Bhargava DC, Jadav D, Meshram VP, Kanchan T. ChatGPT in medical research: challenging time ahead. Med Leg J 2023;91:223–225.

4. Kleebayoon A, Wiwanitkit V. ChatGPT, critical thing and ethical practice. Clin Chem Lab Med 2023;61:e221.

5. Singhal K, Azizi S, Tu T, et al. Large language models encode clinical knowledge. Nature 2023;620:172–180.

6. Ji Z, Wei Q, Xu H. BERT-based ranking for biomedical entity normalization. AMIA Jt Summits Transl Sci Proc 2020;2020:269–277.

7. Tu T, Fang Z, Cheng Z, et al. Genetic discovery enabled by a large language model. bioRxiv [Preprint] 2023:2023.11.09.566468.

8. Schwartz IS, Link KE, Daneshjou R, Cortés-Penfield N. Black box warning: large language models and the future of infectious diseases consultation. Clin Infect Dis 2023:ciad633.