Annals of Biomedical Engineering
https://doi.org/10.1007/s10439-023-03329-4
LETTER TOTHEEDITOR
ChatGPT andClinical Decision Support: Scope, Application,
andLimitations
JannatulFerdush1· MahbubaBegum2· SakibTanvirHossain3
Received: 15 July 2023 / Accepted: 18 July 2023
© The Author(s) under exclusive licence to Biomedical Engineering Society 2023
Abstract
This study examines ChatGPT's role in clinical decision support by analyzing its scope, applications, and limitations. By analyzing patient data and providing evidence-based recommendations, ChatGPT, an AI language model, can help healthcare professionals make well-informed decisions. The study surveys ChatGPT's use in clinical decision support, including diagnosis and treatment planning, while acknowledging limitations such as biases, a lack of contextual understanding, and the need for human oversight; it also proposes a framework for a future clinical decision support system. Understanding these factors will allow healthcare professionals to utilize ChatGPT effectively and make accurate clinical decisions. Further research is needed to understand the implications of using ChatGPT in healthcare settings and to develop safeguards for its responsible use.
Keywords ChatGPT· CDS· Biasness· Ethics
Introduction
In modern healthcare, clinical decision support (CDS) systems provide valuable information and recommendations to enhance clinical decision-making. These systems use various technologies and databases to analyze patient data, medical literature, and clinical guidelines, helping healthcare professionals diagnose diseases, determine treatment options, and improve patient outcomes [1–3]. In addition to promoting evidence-based practice, CDS systems aim to reduce errors. As healthcare evolves, artificial intelligence (AI) and natural language processing (NLP) will become increasingly important for clinical decision support.
ChatGPT [3], an AI language model built on deep learning, has gained considerable attention as a powerful tool for natural language processing. ChatGPT differs from traditional AI in several respects:

AI (Artificial Intelligence)
• AI covers a wide range of technologies and applications, such as machine learning, natural language processing, computer vision, and robotics.
• It aims to reproduce human-like intelligence and decision-making processes.

ChatGPT
• OpenAI's ChatGPT is a conversational AI model based on the GPT architecture.
• Its primary focus is generating human-like text responses in conversational settings.
• ChatGPT can generate coherent and contextually relevant responses based on extensive training data.
Associate Editor Stefan M. Duma oversaw the review of this article.

Jannatul Ferdush, Mahbuba Begum and Sakib Tanvir Hossain have contributed equally to this work.

* Jannatul Ferdush
jannatulferdush@just.edu.bd
Mahbuba Begum
mahbuba327@yahoo.com
Sakib Tanvir Hossain
sakib.tanvir.0905059@gmail.com

1 Department of Computer Science and Engineering, Jashore University of Science and Technology, Jashore 7408, Bangladesh
2 Department of Computer Science and Engineering, Mawlana Bhashani Science and Technology University, Tangail 1902, Bangladesh
3 Department of Mechanical Engineering, Khulna University of Engineering and Technology, Khulna 9203, Bangladesh
J.Ferdush et al.
1 3
• It is commonly used in chatbots, virtual assistants, and other conversational applications, supporting customers, answering queries, and engaging in dialogue.
Essentially, AI is a field that involves technologies and
applications that aim to simulate human intelligence. On
the other hand, ChatGPT, as a specific implementation of
AI, specializes in generating human-like text responses in
conversational settings, making it valuable for chatbots and
virtual assistants.
ChatGPT uses a transformer-based architecture to comprehend and generate human-like text. Its ability to interact with users and respond to their prompts makes it a strong candidate for clinical decision support. With its language generation capabilities, it may be able to analyze patient data, recommend treatments, and offer insights drawn from its extensive knowledge. By leveraging its vast knowledge base and language understanding capabilities, ChatGPT can augment clinical decision-making processes, potentially improving accuracy, efficiency, and patient outcomes. However, it is crucial to thoroughly investigate the scope, application, and limitations of ChatGPT in clinical decision support to ensure its responsible and effective integration into healthcare workflows. This study provides the following contributions to the field of clinical decision support using ChatGPT:
1. Exploration of ChatGPT's potential in clinical decision support, highlighting its ability to analyze patient data and generate evidence-based recommendations.
2. Identification of practical applications and use cases for ChatGPT in diagnosis, treatment planning, and patient management, with a discussion of its limitations and challenges, including biases in training data and the need for human oversight.
3. Implications for healthcare professionals, emphasizing the importance of critical evaluation, and future research opportunities to optimize ChatGPT's performance.

Overall, this research enhances our understanding of ChatGPT's role in clinical decision support, offering insights for improving patient care and outcomes.
Literature Review
We present the following literature review to provide an overview of existing research on ChatGPT in clinical decision support. It summarizes current knowledge and highlights key findings, methodologies, and gaps in the literature.
Artificial intelligence technologies are increasingly being developed for a range of clinical circumstances [4–7]. In a recent observational study, Dakuo Wang and their research group interviewed 22 clinicians from six clinics in a rural area of China [8]. The study revealed several challenges in the adoption of an AI-based clinical decision support system (AI-CDSS) in this context. One major obstacle was the misalignment between the design of the AI-CDSS and the local workflow and context. The authors also found that clinicians may resist AI-CDSS out of concern that it would replace them in their jobs. Another significant barrier was the lack of interpretability and transparency of AI: clinicians found it difficult to understand how the algorithm generated recommendations within the "black box" of the AI system.
Interestingly, since the release of ChatGPT in 2022 [9], researchers have started exploring its potential in medical science [10–14], as well as in clinical decision support systems. In 2023 [15], the authors sought to investigate whether ChatGPT could provide valuable suggestions to improve the logic of clinical decision support (CDS) systems and compared its performance to suggestions generated by humans. They found that ChatGPT holds promise for leveraging large language models and reinforcement learning from human feedback to enhance CDS alert logic, and potentially other medical areas involving complex clinical reasoning. However, it is important to note that the article did not discuss one of the key features of ChatGPT: its ability to remember previous conversations, which could be utilized in this research.
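The memory feature referred to above can be pictured with a short sketch in the style of a chat-completion message list, where earlier turns are simply resent with each new request. The helper functions and prompt text below are illustrative assumptions, not part of the cited study or any particular vendor API.

```python
# Sketch of how conversational context can be retained across turns by
# resending an accumulated message history (as in chat-style LLM APIs).
# Helper names and prompt text are illustrative assumptions.

def start_conversation(system_prompt):
    """Begin a conversation with a system-level instruction."""
    return [{"role": "system", "content": system_prompt}]

def add_turn(history, user_msg, assistant_msg):
    """Record one user/assistant exchange so later prompts can see it."""
    history.append({"role": "user", "content": user_msg})
    history.append({"role": "assistant", "content": assistant_msg})
    return history

history = start_conversation("You assist clinicians reviewing CDS alert logic.")
add_turn(history,
         "Alert A fires on creatinine > 1.5 mg/dL. Is that reasonable?",
         "Consider trending values rather than a single threshold.")
# The full 'history' list would accompany the next request, so the model
# can refer back to 'Alert A' without it being restated.
```

Because the whole history travels with every request, a CDS conversation can build on earlier alerts and clarifications rather than treating each query in isolation.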
In contrast to traditional AI systems, ChatGPT offers unique advantages that should be highlighted for future research. Its automated chat interface, combined with the ability to retain contextual information from prior interactions, presents exciting opportunities for various applications. By addressing this challenge, researchers can unlock the full potential of ChatGPT and explore its broader scope of applications in the field of healthcare. Ultimately, this could contribute to the development of advanced learning health systems, revolutionizing clinical decision-making processes.

In conclusion, the literature indicates that ChatGPT holds significant potential in clinical decision support, with promising applications in diagnosis, treatment planning, and decision-making. While acknowledging its limitations, researchers and healthcare professionals are actively working on addressing biases, improving transparency, and integrating human expertise to ensure the responsible and effective use of ChatGPT in healthcare decision-making processes.
ChatGPT andClinical Decision Support: Scope, Application, andLimitations
1 3
Scope ofChatGPT inClinical Decision
Support
ChatGPT holds significant potential in the realm of clini-
cal decision support. This section examines the scope
of ChatGPT’s application in this domain, highlighting
its ability to assist healthcare professionals in making
informed decisions.
Clinical decision support involves utilizing technology and data analysis to provide healthcare professionals with relevant information, recommendations, and alerts to aid clinical decision-making. It encompasses various tasks such as diagnosis, treatment planning, and patient management. The scope of clinical decision support is broad, spanning multiple medical specialties and healthcare settings.
Exploration of How ChatGPT Can Contribute to Clinical Decision-Making: ChatGPT's language generation capabilities make it a valuable tool for clinical decision-making. It can analyze patient data, medical literature, and clinical guidelines to generate evidence-based recommendations and insights. By engaging in interactive conversations, ChatGPT can assist healthcare professionals in exploring different treatment options, understanding complex medical information, and obtaining relevant insights to support decision-making.
Discussion of the Potential Benefits of Using ChatGPT in Healthcare Settings, with Examples: The use of ChatGPT in healthcare settings offers several benefits. First, it can provide rapid access to up-to-date medical literature and clinical guidelines, allowing healthcare professionals to stay abreast of the latest research and best practices. Second, ChatGPT's ability to analyze and interpret patient data can aid accurate diagnosis and personalized treatment planning: it can identify patterns, predict outcomes, and provide treatment recommendations based on existing medical knowledge.
For instance, by leveraging its vast knowledge base, ChatGPT can surface potential diagnoses and treatment options that may have been overlooked in rare or complex diseases. Additionally, it can help healthcare professionals estimate the likelihood of certain outcomes or complications based on patient characteristics and medical history.
By employing ChatGPT in clinical decision support,
healthcare professionals can benefit from its ability to
provide timely and contextually relevant information,
augmenting their decision-making process and potentially
improving patient outcomes.
Application ofChatGPT inClinical Decision
Support
ChatGPT provides healthcare professionals with advanced
tools for improving clinical decision-making based on evi-
dence and patient data, enabling them to make evidence-
based decisions. ChatGPT can be effectively employed in
several areas of clinical decision support, including:
Diagnosis Assistance: By analyzing patient symptoms, medical history, and diagnostic test results, ChatGPT can assist with diagnosis. It can generate potential diagnoses based on patterns and correlations found in medical literature and databases, helping healthcare professionals consider a wide range of possibilities and refine their diagnostic assessments. For example, when a patient presents with ambiguous symptoms, ChatGPT can provide insights and propose differential diagnoses, facilitating a more comprehensive diagnostic evaluation.
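One concrete way to picture this diagnosis-assistance step is as prompt construction: structured patient findings are assembled into a single request for a ranked differential. The field names, example values, and prompt wording below are illustrative assumptions, not a validated clinical interface, and any model output would still require clinician review.

```python
# Illustrative sketch only: assembling patient findings into a structured
# prompt requesting differential-diagnosis suggestions. Field names and
# prompt wording are assumptions, not a validated clinical interface.

def build_differential_prompt(symptoms, history, test_results):
    """Format patient data into one prompt asking for a ranked
    differential diagnosis with brief supporting reasoning."""
    lines = [
        "You are assisting a clinician. Suggest a ranked differential",
        "diagnosis with brief reasoning for each candidate.",
        "",
        "Symptoms: " + "; ".join(symptoms),
        "History: " + "; ".join(history),
        "Tests: " + "; ".join(f"{k}={v}" for k, v in test_results.items()),
    ]
    return "\n".join(lines)

prompt = build_differential_prompt(
    symptoms=["intermittent fever", "joint pain"],
    history=["recent travel", "no known allergies"],
    test_results={"CRP": "42 mg/L", "WBC": "11.2 x10^9/L"},
)
# 'prompt' would then be sent to the model; the response is a suggestion
# for the clinician, never a diagnosis in itself.
```

Keeping the formatting in one function makes the information sent to the model auditable, which matters in a clinical setting.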
Treatment Planning and Recommendations: ChatGPT can offer valuable support in developing personalized treatment plans by leveraging evidence-based guidelines, clinical expertise, and medical literature. It can provide healthcare professionals with treatment recommendations, considering factors such as patient demographics, medical history, and existing comorbidities. For instance, ChatGPT can help explore various therapeutic options for a specific condition, including medications, dosage regimens, and potential side effects.
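The role of patient factors in treatment planning can be sketched as a simple gate that screens model-suggested options against listed comorbidities before they reach the clinician. The drug names and contraindication pairs below are invented placeholders for illustration, not medical guidance.

```python
# Toy sketch: screening candidate treatments against a patient's
# comorbidities. The drug/condition pairs are invented placeholders,
# not medical guidance.

CONTRAINDICATIONS = {
    "drug_a": {"renal impairment"},
    "drug_b": {"pregnancy", "hepatic impairment"},
}

def filter_candidates(candidates, comorbidities):
    """Drop any candidate contraindicated by a listed comorbidity."""
    kept = []
    for drug in candidates:
        if CONTRAINDICATIONS.get(drug, set()) & set(comorbidities):
            continue  # contraindicated for this patient; exclude it
        kept.append(drug)
    return kept

safe = filter_candidates(["drug_a", "drug_b"], ["renal impairment"])
```

A rule-based screen of this kind is deterministic and inspectable, which is one way to constrain free-text model suggestions with hard patient-specific facts.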
Clinical Guidelines and Best Practices: Through ChatGPT, healthcare professionals can access clinical guidelines and best practices in real time. With the latest information on diagnostic criteria, treatment protocols, and follow-up strategies at hand, they can align their practice with current, evidence-based recommendations.
Rare or Complex Cases: ChatGPT may offer valuable insights and suggestions in rare or complex cases where local expertise is limited. By analyzing similar cases from the medical literature, it can propose potential diagnostic or treatment approaches that may otherwise have been overlooked.
Clinical Research and Data Analysis: Healthcare professionals can use ChatGPT to analyze clinical research data, extract relevant information, and identify key findings. It can aid data interpretation, providing insights into treatment outcomes, prognostic factors, and potential areas for further investigation. For example, ChatGPT can analyze clinical trial data and generate summaries of the findings, helping healthcare professionals make informed decisions about treatment efficacy and safety.
Patient Education and Communication: ChatGPT can support patient education and communication. Serving as a conversational agent, it can answer patients' questions and provide information about their condition, treatment options, and potential side effects, improving patient understanding, empowering patients to make informed decisions, and enhancing patient–provider communication. ChatGPT can, for instance, provide patients with customized educational materials based on their preferences and needs.
Remote Healthcare and Telemedicine: Healthcare professionals can use ChatGPT as a virtual assistant during remote consultations. It can triage patients' symptoms, provide initial recommendations, and make the remote healthcare experience run more smoothly. For example, ChatGPT can help determine the urgency of a patient's condition and offer initial guidance on next steps before a healthcare professional conducts a more thorough examination.
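The initial triage step described above can be sketched as a deliberately simple rule-based classifier. The symptom flags and thresholds here are invented for illustration and are not clinical criteria; in practice any such output would only pre-sort cases for a clinician.

```python
# Deliberately simple rule-based sketch of the initial triage step.
# Flags and thresholds are invented for illustration, not clinical criteria.

RED_FLAGS = {"chest pain", "shortness of breath", "confusion"}

def triage_urgency(symptoms, duration_days):
    """Return a coarse urgency label from reported symptoms."""
    if RED_FLAGS & set(symptoms):
        return "urgent: escalate to a clinician now"
    if duration_days > 14:
        return "routine: schedule a consultation"
    return "self-care advice with follow-up guidance"

level = triage_urgency(["cough", "chest pain"], duration_days=2)
```

Even when a language model performs the conversation, routing its conclusion through explicit rules like these keeps the escalation behavior predictable.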
Healthcare Resource Allocation: ChatGPT can inform healthcare resource allocation decisions during periods of resource constraint, such as pandemics and natural disasters. By analyzing patient data and available resources, it can provide insight into optimal resource utilization, care prioritization, and staffing allocation. For instance, it could be used to explore how best to distribute vaccine doses to maximize population benefit.
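As one crude, transparent example of the allocation reasoning mentioned above, doses can be split across sites in proportion to eligible population. The site names and figures below are invented; real allocation policies weigh many more factors (risk groups, cold-chain capacity, equity constraints).

```python
# Sketch: proportional dose allocation across sites. Site names and
# numbers are invented; real policies weigh many more factors.

def allocate_doses(total_doses, demand):
    """Split doses across sites in proportion to eligible population,
    rounding down and giving leftovers to the highest-demand sites."""
    pop = sum(demand.values())
    alloc = {site: total_doses * n // pop for site, n in demand.items()}
    leftover = total_doses - sum(alloc.values())
    for site in sorted(demand, key=demand.get, reverse=True):
        if leftover == 0:
            break
        alloc[site] += 1
        leftover -= 1
    return alloc

plan = allocate_doses(100, {"clinic_a": 60, "clinic_b": 25, "clinic_c": 14})
```

A model could propose or explain such a policy in natural language, but the arithmetic itself should live in auditable code like this rather than in free-form text.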
These applications illustrate the versatility of ChatGPT in clinical decision support, extending its potential to various aspects of healthcare delivery and decision-making processes.
Limitations ofChatGPT inClinical Decision
Support
While ChatGPT presents significant potential, it is essential
to consider its limitations and challenges when used in clini-
cal decision support [16].
Discussion of Potential Biases in Training Data and Model Outputs: ChatGPT's responses are shaped by the data it was trained on, which can introduce biases. Biases in training data, such as the underrepresentation of certain demographics or medical conditions, may affect the accuracy and reliability of ChatGPT's recommendations. Careful attention should be given to addressing these biases to ensure equitable and unbiased decision support.
Consideration of the Lack of Contextual Understanding in ChatGPT's Responses: ChatGPT does not truly understand the context and nuances of medical situations, and it may generate responses that seem plausible but lack the depth or specificity necessary for accurate decisions. Human oversight and critical evaluation are therefore imperative to ensure that ChatGPT's responses are appropriate and relevant within the clinical context.
Importance of Human Oversight and Critical Evaluation of ChatGPT's Recommendations: Although ChatGPT can provide valuable insights for clinical decision support, healthcare professionals should exercise caution and judgment when interpreting and applying its recommendations. Human oversight is essential to verify the accuracy, relevance, and appropriateness of ChatGPT's suggestions before clinical decisions are made.
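The oversight requirement can be made concrete as a sign-off gate: no model suggestion is actionable until a clinician reviews it. The workflow, states, and field names below are assumptions sketched for illustration, not an established system design.

```python
# Sketch of a human-oversight gate: AI suggestions start in a pending,
# non-actionable state until a clinician signs off. Workflow states and
# field names are illustrative assumptions.

def propose(suggestion):
    """Record an AI suggestion in a pending, non-actionable state."""
    return {"text": suggestion, "status": "pending_review", "reviewer": None}

def review(record, reviewer, approve):
    """A clinician either approves or rejects the suggestion."""
    record["reviewer"] = reviewer
    record["status"] = "approved" if approve else "rejected"
    return record

rec = propose("Consider dose adjustment for renal function.")
rec = review(rec, reviewer="Dr. Rahman", approve=False)
# Only records with status == "approved" would ever feed downstream
# order entry; rejected and pending suggestions stay advisory.
```

Recording the reviewer alongside the decision also creates the audit trail that responsible-use guidelines typically call for.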
Mitigating Limitations and Responsible Use of ChatGPT

To ensure ChatGPT's responsible and effective use, it is essential to address its limitations and develop strategies for its deployment.
Strategies for Addressing Biases and Improving Transparency in ChatGPT's Training: Identifying and addressing biases in ChatGPT's training data involves ensuring diverse and representative datasets that cover a wide range of demographics and medical conditions. Furthermore, transparency in the training process, including documentation of data sources and model training methodologies, can improve the understanding and scrutiny of ChatGPT.
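A minimal, concrete step toward the dataset checks described above is a representation audit over a labeled corpus. The group labels, sample data, and 10% threshold below are illustrative assumptions; real audits would use clinically meaningful strata and justified thresholds.

```python
# Minimal sketch of auditing a dataset for demographic representation.
# Group labels, sample data, and the 10% threshold are illustrative.

from collections import Counter

def representation_report(records, key, min_share=0.10):
    """Return each group's share of the data and flag groups whose
    share falls below min_share."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    shares = {group: n / total for group, n in counts.items()}
    flagged = sorted(g for g, s in shares.items() if s < min_share)
    return shares, flagged

data = [{"sex": "F"}] * 45 + [{"sex": "M"}] * 50 + [{"sex": "other"}] * 5
shares, flagged = representation_report(data, key="sex")
# 'flagged' lists groups that may be underrepresented in training data.
```

Reports like this, published alongside data-source documentation, are one practical form the transparency discussed above could take.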
Importance of Integrating Human Expertise and Oversight in Decision-Making: To use ChatGPT effectively, healthcare professionals should actively participate in the decision-making process and critically evaluate its recommendations. Combining their knowledge and experience with the insights ChatGPT provides allows them to make more informed and accurate decisions.
Ethical Considerations and Responsible Implementation of ChatGPT in Healthcare Settings: Ethical considerations play a significant role in the implementation of ChatGPT in healthcare settings. When utilizing ChatGPT for clinical decision support, it is essential to maintain patient privacy, confidentiality, and informed consent. Additionally, healthcare organizations should formulate clear guidelines and protocols for the use of artificial intelligence technology to ensure that ChatGPT is implemented responsibly and aligns with ethical standards.
Proposed Framework

The provided figure (Fig. 1) illustrates a general framework for a clinical decision support system (CDSS) based on ChatGPT. The framework demonstrates various potential applications of ChatGPT within a CDSS context.
In this framework, the CDSS utilizes a primary scrutinizing system to perform the initial assessment of tests and provide recommendations from doctors. Additionally, the CDSS includes a treatment and recommendation block, which automatically designs treatment plans based on doctors' suggestions. To address privacy and ethical concerns when accessing patient data, all data are encrypted before being shared with external researchers; the CDSS research and data collection component handles this encryption process. Patients and doctors can request access to their data at any time, and an automatic report is generated for them.
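The privacy step in this framework can be illustrated with a simplified sketch in which direct identifiers are replaced by keyed digests before records leave the CDSS. This keyed-hash pseudonymization is a stand-in for the encryption the framework calls for, not a complete privacy solution; the key, field names, and record are invented for the example.

```python
# Sketch of the privacy step: replace direct identifiers with keyed
# digests before sharing records externally. Keyed-hash pseudonymization
# is a simplified stand-in for the framework's encryption step, not a
# complete privacy solution. Key and data are invented.

import hashlib
import hmac

SECRET_KEY = b"demo-key-held-by-the-cdss"  # illustrative only

def pseudonymize(record, identifier_fields):
    """Return a copy with identifier fields replaced by keyed digests,
    so the same patient maps to the same token across exports."""
    out = dict(record)
    for field in identifier_fields:
        token = hmac.new(SECRET_KEY, str(record[field]).encode(),
                         hashlib.sha256).hexdigest()[:16]
        out[field] = token
    return out

shared = pseudonymize({"patient_id": "P-1093", "hb_a1c": 7.2},
                      identifier_fields=["patient_id"])
```

Because the mapping is deterministic under the key, researchers can link records of the same patient without ever seeing the real identifier, while the CDSS retains the key needed to honor patient and doctor access requests.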
One of the key features of this system is its explainability. Patients can always inquire about their data, test results, and the reasons behind certain outcomes. The CDSS, being a chat-based agent powered by ChatGPT, aims to keep interactions engaging and informative. Moreover, doctors can use ChatGPT's natural language processing capabilities to communicate in their own language without needing knowledge of computer programming languages. This makes the system an excellent educational tool for both doctors and patients, fostering learning and knowledge exchange.

Overall, the framework presents a comprehensive CDSS powered by ChatGPT, combining automated decision-making, personalized data access, and educational opportunities for healthcare professionals and patients alike.
Conclusion

In conclusion, ChatGPT has the potential to significantly impact clinical decision support in healthcare settings. This article has explored the scope of ChatGPT's application, including its potential benefits in diagnosis and treatment planning. A number of limitations have also been identified, such as biases and a lack of contextual understanding, that must be addressed to ensure responsible implementation. By mitigating these limitations and integrating human expertise and oversight, ChatGPT can be harnessed as a valuable tool in healthcare decision-making processes. It is crucial to continue research, collaboration, and the responsible adoption of ChatGPT in healthcare in order to refine its capabilities, address its limitations, and identify best practices for integration. To maximize ChatGPT's potential and ensure its responsible and ethical use in healthcare, researchers, healthcare professionals, and AI developers need to collaborate.

Fig. 1 An example of a general framework for a ChatGPT-based clinical decision support system (CDSS)
Acknowledgements The authors acknowledge that this article was partially generated by ChatGPT (powered by OpenAI's language model, GPT; http://openai.com). The editing was performed by the authors.
Declarations

Conflict of interest The authors declare no conflict of interest.
References
1. Ingraham, N. E., E. K. Jones, S. King, J. Dries, M. Phillips, T. Loftus, H. L. Evans, G. B. Melton, and C. J. Tignanelli. Re-aiming equity evaluation in clinical decision support: a scoping review of equity assessments in surgical decision support systems. Ann. Surg. 277(3):359–364, 2023.
2. Meunier, P.-Y., C. Raynaud, E. Guimaraes, F. Gueyffier, and L. Letrilliart. Barriers and facilitators to the use of clinical decision support systems in primary care: a mixed-methods systematic review. Ann. Fam. Med. 21(1):57–69, 2023.
3. Xu, Q., W. Xie, B. Liao, C. Hu, L. Qin, Z. Yang, H. Xiong, Y. Lyu, Y. Zhou, A. Luo, et al. Interpretability of clinical decision support systems based on artificial intelligence from technological and medical perspective: a systematic review. J. Healthcare Eng. 2023. https://doi.org/10.1155/2023/9919269.
4. Pierce, R. L., W. Van Biesen, D. Van Cauwenberge, J. Decruyenaere, and S. Sterckx. Explainability in medicine in an era of AI-based clinical decision support systems. Front. Genet. 13:903600, 2022.
5. Panigutti, C., A. Beretta, F. Giannotti, and D. Pedreschi. Understanding the impact of explanations on advice-taking: a user study for AI-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9, 2022.
6. Sloane, E. B., and R. J. Silva. Artificial intelligence in medical devices and clinical decision support systems. In: Clinical Engineering Handbook, pp. 556–568, 2020.
7. Wang, L., X. Chen, L. Zhang, L. Li, Y. Huang, Y. Sun, and X. Yuan. Artificial intelligence in clinical decision support systems for oncology. Int. J. Med. Sci. 20(1):79, 2023.
8. Wang, D., L. Wang, Z. Zhang, D. Wang, H. Zhu, Y. Gao, X. Fan, and F. Tian. "Brilliant AI doctor" in rural clinics: challenges in AI-powered clinical decision support system deployment. In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21). Association for Computing Machinery, New York, NY, USA, 2021. https://doi.org/10.1145/3411764.3445432.
9. Introducing ChatGPT. https://openai.com/blog/chatgpt. Accessed 08 Jul 2023.
10. Biswas, S. S. Role of Chat GPT in public health. Ann. Biomed. Eng. 51(5):868–869, 2023.
11. Sallam, M. The utility of ChatGPT as an example of large language models in healthcare education, research and practice: systematic review on the future perspectives and potential limitations. medRxiv, 2023.
12. Garg, R. K., V. L. Urs, A. A. Agrawal, S. K. Chaudhary, V. Paliwal, and S. K. Kar. Exploring the role of Chat GPT in patient care (diagnosis and treatment) and medical research: a systematic review. medRxiv, 2023.
13. Liu, J., C. Wang, and S. Liu. Utility of ChatGPT in clinical practice. J. Med. Internet Res. 25:e48568, 2023.
14. Temsah, M.-H., F. Aljamaan, K. H. Malki, K. Alhasan, I. Altamimi, R. Aljarbou, F. Bazuhair, A. Alsubaihin, N. Abdulmajeed, F. S. Alshahrani, et al. ChatGPT and the future of digital health: a study on healthcare workers' perceptions and expectations. Healthcare 11:1812, 2023.
15. Liu, S., A. P. Wright, B. L. Patterson, J. P. Wanderer, R. W. Turer, S. D. Nelson, A. B. McCoy, D. F. Sittig, and A. Wright. Using AI-generated suggestions from ChatGPT to optimize clinical decision support. J. Am. Med. Inform. Assoc. 30(7):1237–1245, 2023. https://doi.org/10.1093/jamia/ocad072.
16. Dave, T., S. A. Athaluri, and S. Singh. ChatGPT in medicine: an overview of its applications, advantages, limitations, future prospects, and ethical considerations. Front. Artif. Intell. 6:1169595, 2023.
Publisher's Note Springer Nature remains neutral with regard to
jurisdictional claims in published maps and institutional affiliations.
... This milestone not only marks a significant achievement in artificial intelligence (AI) but also has sparked widespread global interest in the potential applications of AI, particularly in the medical domain (Biswas, 2023;Wójcik et al., 2023). Especially in areas such as diagnostic assistance, treatment planning and recommendations, clinical practice and guidelines, clinical research and data analysis, telemedicine, and medical resource allocation (Ferdush et al., 2023), ChatGPT has shown immense potential. ...
... Furthermore, through interaction with patients, ChatGPT can monitor changes in cognitive status and provide personalized cognitive training and psychological support, offering substantial support for treating cognitive disorders such as dementia (Zheng et al., 2024). Notably, ChatGPT, with its efficient capabilities in parsing and generating human language, has shown proficiency in accurately interpreting medical literature and engaging in empathetic patient communication (Dave et al., 2023;Ferdush et al., 2023) significantly boosting the auxiliary diagnostic and therapeutic potential for MCI. ...
... Such technological integration not only innovates in treatment methodologies but also represents a qualitative leap at the technological level, bringing cognitive training and rehabilitation activities closer to complex real-world scenarios. Moreover, the application potential of this system spans patient diagnostic assistance, personalized treatment plan formulation, and data analysis for doctors (Ferdush et al., 2023), providing comprehensive support and fresh perspectives for the diagnosis and treatment of cognitive disorders. ...
... One concern regarding ChatGPT responses is the method used to find the answer; presumably, the program searches texts on the internet and calculates the probability of the best response. 2 We acknowledge that our study did not include expert opinions or a thorough analysis of the AI-generated responses. We believe that expert opinions, supported by the best available evidence, are adequately represented in existing guidelines. ...
... The integration of AI into clinical contexts and decisions is already a reality in modern medicine. 2 Researchers and clinicians should be discerning regarding the use of these technologies in their practice routines. It is time to discuss and propose a framework for the future clinical decision support system, including the utilization of AI. ...
... Also, LLMs can be customized for operating in specific scenarios, and their ability of context interpretation represents a notable opportunity for enhancing Decision Support for users. In clinical domain, for example, the use of ChatGPT as QA tool has proven valuable for assisting the professional in diagnosis and treatment planning [67,149]. Besides medicine, there is a wide range of applications involving large language models that should be leveraged by future studies for building QA interfaces with situational capabilities [95]. ...
Thesis
Full-text available
Enhancing user support in data integration demands addressing real time information, a challenging process for conventional database systems. Many data integration variants integrate data “on-the-fly” for managing situational queries, i.e., queries that cover dynamic requirements. The methods range, for example, from service mashups to traversal-based approaches; however, the uncertainty in automatic data discovery and ensuring the relevance and completeness of information remain as key challenges. One way to minimize these challenges is by capturing user feedback, since it can help solving ambiguities and improving matching tasks. This thesis introduces a conversational Case-Based Reasoning (CBR) architecture, aimed at improving situational data management by incorporating user feedback into the process. The core of the architecture is a “human-in-the-loop” approach implemented through a conversational agent, which facilitates interaction between the user and the system. For including a learning mechanism adaptable to both positive and negative feedback, the Case-Based Reasoning methodology was used, which solves problems by using or adapting solutions from previous cases. The CBR-based approach leverages a historical knowledge base that is dynamically updated based on user feedback, allowing for a more responsive and adaptive system. This feedback plays a crucial role for case retrieval, review, and retention within the CBR cycle, enabling the system to evolve based on user interactions. Each CBR phase is depicted in the present thesis and evaluated in three different experiments, focused on source retrieval, reuse of multidimensional solutions within a chatbot application, and incremental learning based on historical knowledge, respectively. 
In the latter, an empirical user study was conducted to assess the impact of user feedback on system recommendations, including both static and dynamic test scenarios, and focusing on aspects such as visibility, support, and usefulness. The results highlighted a general preference for recommendations that were influenced by user input, indicating the effectiveness of incorporating human feedback in the decision-making process. As an additional contribution, this thesis also demonstrate a terminology mismatch involving on-the-fly data integration variants, proposing a new terminology and taxonomy for managing situational data. On the light of the proposed taxonomy, entitled Built-up Integration, the CBR-based architecture is also evaluated regarding Data Retrieval, On-the-fly Integration, and Data Delivery features. Overall, the research conducted contributes to situational data management by illustrating how a conversational CBR framework can improve processes such as data integration and data discovery, and evidencing the potential for further development in this area.
Article
Full-text available
Question Answering (QA) systems provide accurate answers to questions; however, they lack the ability to consolidate data from multiple sources, making it difficult to manage complex questions that could be answered with additional data retrieved and integrated on the fly. This integration is inherent to Situational Data Integration (SDI) approaches, which deal with the dynamic requirements of ad hoc queries that neither traditional database management systems nor search engines can answer effectively. Thus, if QA systems included SDI characteristics, they could return validated and immediate information to support users' decisions. For this reason, we surveyed QA-based systems, assessing their capabilities to support SDI features, i.e., Ad hoc Data Retrieval, Data Management, and Timely Decision Support. We also identified patterns concerning these features in the surveyed studies, highlighting them in a timeline that shows the evolution of SDI in the QA domain. To the best of our knowledge, this study is a precursor in the joint analysis of SDI and QA, showing a combination that can improve the way systems support users. Our analyses show that most SDI features are rarely addressed in QA systems, and on that basis we discuss directions for further research.
Book
Full-text available
The book presents the current state of knowledge on the use of chatbots in psychiatric practice, along with practical advice on how to apply them in everyday work.
Chapter
Full-text available
Knowing how ChatGPT works, we can understand how it generates answers and why it sometimes gives incorrect, imprecise, outdated, or unclear responses. We can also understand why it sometimes produces entirely fabricated, nonexistent data, and we will know how to recognize when this happens. Ultimately, the responsibility for applying any tool, including chatbots, lies with the users, not with the tool itself. Every answer should be examined carefully to judge how far it can be trusted and whether it should be used. If an error results from using the data, the responsibility lies with the user of the data, not with ChatGPT. ChatGPT is a virtual answer provider; it is artificial, imitative (not genuine) intelligence. It is a machine that tries to imitate human responses. It has no common sense, no consciousness, and no understanding of what it offers. It is a highly sophisticated search engine, and nothing more.
Article
Full-text available
ChatGPT, an AI-driven conversational large language model (LLM), has garnered significant scholarly attention since its inception, owing to its manifold applications in the realm of medical science. This study primarily examines the merits, limitations, anticipated developments, and practical applications of ChatGPT in clinical practice, healthcare, medical education, and medical research. It underscores the necessity for further research and development to enhance its performance and deployment. Moreover, future research avenues encompass ongoing enhancements and standardization of ChatGPT, mitigating its limitations, and exploring its integration and applicability in translational and personalized medicine. Reflecting the narrative nature of this review, a focused literature search was performed to identify relevant publications on ChatGPT’s use in medicine. This process was aimed at gathering a broad spectrum of insights to provide a comprehensive overview of the current state and future prospects of ChatGPT in the medical domain. The objective is to aid healthcare professionals in understanding the groundbreaking advancements associated with the latest artificial intelligence tools, while also acknowledging the opportunities and challenges presented by ChatGPT.
Article
This study assessed the performance of LLM-linked chatbots in providing accurate advice for colorectal cancer screening to both clinicians and patients. We created standardized prompts for nine patient cases varying by age and family history to query ChatGPT, Bing Chat, Google Bard, and Claude 2 for screening recommendations to clinicians. Chatbots were asked to specify which screening test was indicated and the frequency of interval screening. Separately, the chatbots were queried with lay terminology for screening advice to patients. Clinician and patient advice was compared to guidelines from the United States Preventive Services Task Force (USPSTF), Canadian Cancer Society (CCS), and the U.S. Multi-Society Task Force (USMSTF) on Colorectal Cancer. Based on USPSTF criteria, clinician advice aligned with 3/4 (75.0%), 2/4 (50.0%), 3/4 (75.0%), and 1/4 (25.0%) cases for ChatGPT, Bing Chat, Google Bard, and Claude 2, respectively. With CCS criteria, clinician advice corresponded to 2/4 (50.0%), 2/4 (50.0%), 2/4 (50.0%), and 1/4 (25.0%) cases for ChatGPT, Bing Chat, Google Bard, and Claude 2, respectively. For USMSTF guidelines, clinician advice aligned with 7/9 (77.8%), 5/9 (55.6%), 6/9 (66.7%), and 3/9 (33.3%) cases for ChatGPT, Bing Chat, Google Bard, and Claude 2, respectively. Discordant advice was given to clinicians and patients for 2/9 (22.2%), 3/9 (33.3%), 2/9 (22.2%), and 3/9 (33.3%) cases for ChatGPT, Bing Chat, Google Bard, and Claude 2, respectively. Clinical advice provided by the chatbots stemmed from a range of sources including the American Cancer Society (ACS), USPSTF, USMSTF, and the CCS. LLM-linked chatbots provide colorectal cancer screening recommendations with inconsistent accuracy for both patients and clinicians. Clinicians must educate patients on the pitfalls of using these platforms for health advice.
Article
Full-text available
This study aimed to assess the knowledge, attitudes, and intended practices of healthcare workers (HCWs) in Saudi Arabia towards ChatGPT, an artificial intelligence (AI) Chatbot, within the first three months after its launch. We also aimed to identify potential barriers to AI Chatbot adoption among healthcare professionals. A cross-sectional survey was conducted among 1057 HCWs in Saudi Arabia, distributed electronically via social media channels from 21 February to 6 March 2023. The survey evaluated HCWs’ familiarity with ChatGPT-3.5, their satisfaction, intended future use, and perceived usefulness in healthcare practice. Of the respondents, 18.4% had used ChatGPT for healthcare purposes, while 84.1% of non-users expressed interest in utilizing AI Chatbots in the future. Most participants (75.1%) were comfortable with incorporating ChatGPT into their healthcare practice. HCWs perceived the Chatbot to be useful in various aspects of healthcare, such as medical decision-making (39.5%), patient and family support (44.7%), medical literature appraisal (48.5%), and medical research assistance (65.9%). A majority (76.7%) believed ChatGPT could positively impact the future of healthcare systems. Nevertheless, concerns about credibility and the source of information provided by AI Chatbots (46.9%) were identified as the main barriers. Although HCWs recognize ChatGPT as a valuable addition to digital health in the early stages of adoption, addressing concerns regarding accuracy, reliability, and medicolegal implications is crucial. Therefore, due to their unreliability, the current forms of ChatGPT and other Chatbots should not be used for diagnostic or treatment purposes without human expert oversight. Ensuring the trustworthiness and dependability of AI Chatbots is essential for successful implementation in healthcare settings. Future research should focus on evaluating the clinical outcomes of ChatGPT and benchmarking its performance against other AI Chatbots.
Article
Full-text available
ChatGPT is receiving increasing attention and has a variety of application scenarios in clinical practice. In clinical decision support, ChatGPT has been used to generate accurate differential diagnosis lists, support clinical decision-making, optimize clinical decision support, and provide insights for cancer screening decisions. In addition, ChatGPT has been used for intelligent question-answering to provide reliable information about diseases and medical queries. In terms of medical documentation, ChatGPT has proven effective in generating patient clinical letters, radiology reports, medical notes, and discharge summaries, improving efficiency and accuracy for health care providers. Future research directions include real-time monitoring and predictive analytics, precision medicine and personalized treatment, the role of ChatGPT in telemedicine and remote health care, and integration with existing health care systems. Overall, ChatGPT is a valuable tool that complements the expertise of health care providers and improves clinical decision-making and patient care. However, ChatGPT is a double-edged sword. We need to carefully consider and study the benefits and potential dangers of ChatGPT. In this viewpoint, we discuss recent advances in ChatGPT research in clinical practice and suggest possible risks and challenges of using ChatGPT in clinical practice. It will help guide and support future artificial intelligence research similar to ChatGPT in health.
Preprint
Full-text available
Background ChatGPT (Chat Generative Pre-trained Transformer) is an artificial intelligence (AI)-based natural language processing tool developed by OpenAI (California, USA). This systematic review examines the potential of ChatGPT in diagnosing and treating patients and its contributions to medical research. Methods In order to locate articles on ChatGPT's use in clinical practice and medical research, this systematic review used PRISMA standards and conducted database searches across several sources. Selected records were analysed using ChatGPT, which also produced a summary for each article. The resulting Word document was converted to a PDF and handled using ChatPDF. The review examined topics pertaining to scholarly publishing, clinical practice, and medical research. Results We reviewed 118 publications. There are difficulties and moral conundrums associated with using ChatGPT in therapeutic settings and medical research. Patient inquiries, note writing, decision-making, trial enrolment, data management, decision support, research support, and patient education are all things that ChatGPT can help with. However, the solutions it provides are frequently inadequate and inconsistent, presenting issues with its originality, privacy, accuracy, bias, and legality. When utilising ChatGPT for academic writing, there are issues with prejudice and plagiarism, and because it lacks human-like characteristics, its authority as an author is called into question. Conclusions ChatGPT has limitations when used in research and healthcare. Even while it aids in patient treatment, concerns regarding accuracy, authorship, and bias arise. Currently, ChatGPT can serve as a "clinical assistant" and be of considerable assistance with research and scholarly writing.
Article
Full-text available
This paper presents an analysis of the advantages, limitations, ethical considerations, future prospects, and practical applications of ChatGPT and artificial intelligence (AI) in the healthcare and medical domains. ChatGPT is an advanced language model that uses deep learning techniques to produce human-like responses to natural language inputs. It is part of the family of generative pre-training transformer (GPT) models developed by OpenAI and is currently one of the largest publicly available language models. ChatGPT is capable of capturing the nuances and intricacies of human language, allowing it to generate appropriate and contextually relevant responses across a broad spectrum of prompts. The potential applications of ChatGPT in the medical field range from identifying potential research topics to assisting professionals in clinical and laboratory diagnosis. Additionally, it can be used to help medical students, doctors, nurses, and all members of the healthcare fraternity to know about updates and new developments in their respective fields. The development of virtual assistants to aid patients in managing their health is another important application of ChatGPT in medicine. Despite its potential applications, the use of ChatGPT and other AI tools in medical writing also poses ethical and legal concerns. These include possible infringement of copyright laws, medico-legal complications, and the need for transparency in AI-generated content. In conclusion, ChatGPT has several potential applications in the medical and healthcare fields. However, these applications come with several limitations and ethical considerations which are presented in detail along with future prospects in medicine and healthcare.
Article
Full-text available
Objective To determine if ChatGPT can generate useful suggestions for improving clinical decision support (CDS) logic and to assess noninferiority compared to human-generated suggestions. Methods We supplied summaries of CDS logic to ChatGPT, an artificial intelligence (AI) tool for question answering that uses a large language model, and asked it to generate suggestions. We asked human clinician reviewers to review the AI-generated suggestions as well as human-generated suggestions for improving the same CDS alerts, and rate the suggestions for their usefulness, acceptance, relevance, understanding, workflow, bias, inversion, and redundancy. Results Five clinicians analyzed 36 AI-generated suggestions and 29 human-generated suggestions for 7 alerts. Of the 20 suggestions that scored highest in the survey, 9 were generated by ChatGPT. The suggestions generated by AI were found to offer unique perspectives and were evaluated as highly understandable and relevant, with moderate usefulness but low acceptance, bias, inversion, and redundancy. Conclusion AI-generated suggestions could be an important complementary part of optimizing CDS alerts, can identify potential improvements to alert logic and support their implementation, and may even be able to assist experts in formulating their own suggestions for CDS improvement. ChatGPT shows great potential for using large language models and reinforcement learning from human feedback to improve CDS alert logic and potentially other medical areas involving complex clinical logic, a key step in the development of an advanced learning health system.
Article
Full-text available
ChatGPT, a language model developed by OpenAI, has the potential to play a role in public health. With its ability to generate human-like text based on large amounts of data, ChatGPT has the potential to support individuals and communities in making informed decisions about their health (Panch et al. Lancet Digit Health 1:e13-e14, 2019; Baclic et al. Canada Commun Dis Rep 46.6:161, 2020). However, as with any technology, there are limitations and challenges to consider when using ChatGPT in public health. In this overview, we will examine the potential uses of ChatGPT in public health, as well as the advantages and disadvantages of its use.
Preprint
Full-text available
An artificial intelligence (AI)-based conversational large language model (LLM) named "ChatGPT" was launched in November 2022. Despite the wide array of potential applications of LLMs in healthcare education, research and practice, several valid concerns were raised. The current systematic review aimed to investigate the possible utility of ChatGPT and to highlight its limitations in healthcare education, research and practice. Following the PRISMA guidelines, a systematic search was conducted to retrieve English records in PubMed/MEDLINE and Google Scholar under the term "ChatGPT". Eligibility criteria included published research or preprints of any type that discussed ChatGPT in the context of healthcare education, research and practice. A total of 280 records were identified, and following full screening, a total of 60 records were eligible for inclusion. Benefits/applications of ChatGPT were cited in 51/60 (85.0%) records, with the most common being its utility in scientific writing, followed by benefits in healthcare research (efficient analysis of massive datasets, code generation, and rapid, concise literature reviews, besides utility in drug discovery and development). Benefits in healthcare practice included cost saving, documentation, personalized medicine and improved health literacy. Concerns/possible risks of ChatGPT use were expressed in 58/60 (96.7%) records, with the most common being ethical issues, including the risk of bias, plagiarism, copyright issues, transparency issues, legal issues, lack of originality, incorrect responses, limited knowledge, and inaccurate citations. Despite the promising applications of ChatGPT, which can result in paradigm shifts in healthcare education, research and practice, this application should be embraced with extreme caution.
Specific applications of ChatGPT in health education include its promising utility in personalized learning tools and a shift towards a greater focus on critical thinking and problem-based learning. In healthcare practice, ChatGPT can be valuable for streamlining workflows and refining personalized medicine. In scientific research, the benefits include saving time to focus on experimental design and enhancing research equity and versatility.
Article
Full-text available
Background: Artificial intelligence (AI) has developed rapidly, and its applications extend to clinical decision support systems (CDSSs) for improving healthcare quality. However, the interpretability of AI-driven CDSSs poses significant challenges to widespread application. Objective: This study is a review of the knowledge-based and data-based CDSS literature regarding interpretability in health care. It highlights the relevance of interpretability for CDSSs and the areas for improvement from technological and medical perspectives. Methods: A systematic search was conducted on the interpretability-related literature published from 2011 to 2020 and indexed in five databases: Web of Science, PubMed, ScienceDirect, Cochrane, and Scopus. Journal articles that focus on the interpretability of CDSSs were included for analysis. Experienced researchers also participated in manually reviewing the selected articles for inclusion/exclusion and categorization. Results: Based on the inclusion and exclusion criteria, 20 articles from 16 journals were finally selected for this review. Interpretability, which means a transparent model structure, a clear relationship between input and output, and explainability of artificial intelligence algorithms, is essential for CDSS application in the healthcare setting. Methods for improving the interpretability of CDSSs include ante-hoc methods, such as fuzzy logic, decision rules, logistic regression, decision trees, and other white-box models for knowledge-based AI, and post-hoc methods, such as feature importance, sensitivity analysis, visualization, and activation maximization for black-box models. A number of factors, such as data type, biomarkers, human-AI interaction, and the needs of clinicians and patients, can affect the interpretability of CDSSs. Conclusions: The review explores the meaning of the interpretability of CDSSs and summarizes the current methods for improving interpretability from technological and medical perspectives.
The results contribute to the understanding of the interpretability of CDSS based on AI in health care. Future studies should focus on establishing formalism for defining interpretability, identifying the properties of interpretability, and developing an appropriate and objective metric for interpretability; in addition, the user's demand for interpretability and how to express and provide explanations are also the directions for future research.
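Among the post-hoc methods this review names, feature importance is perhaps the simplest to illustrate: permutation importance measures how much a black-box model's accuracy drops when one feature's column is shuffled, severing its link to the label. The sketch below is illustrative only; the toy model, data, and function names are assumptions, not taken from the reviewed literature.

```python
import random

# Permutation feature importance for a black-box model (minimal sketch):
# importance of a feature = mean accuracy drop when that feature's
# column is randomly shuffled across the dataset.

def accuracy(model, X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, trials=10, seed=0):
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    drops = []
    for _ in range(trials):
        col = [row[feature] for row in X]
        rng.shuffle(col)  # break the feature-label association
        X_perm = [row[:feature] + [v] + row[feature + 1:]
                  for row, v in zip(X, col)]
        drops.append(base - accuracy(model, X_perm, y))
    return sum(drops) / trials

# Toy "black box": predicts 1 when feature 0 exceeds a threshold;
# feature 1 is ignored, so its importance should be exactly 0.
model = lambda x: int(x[0] > 0.5)
X = [[0.1, 0.9], [0.9, 0.1], [0.2, 0.8], [0.8, 0.2]]
y = [0, 1, 0, 1]
print(permutation_importance(model, X, y, feature=0))  # positive: shuffling hurts
print(permutation_importance(model, X, y, feature=1))  # 0.0: feature is ignored
```

Because the method only needs predictions, not model internals, it applies equally to the opaque models the review classifies as black boxes; library implementations (e.g., in scikit-learn) follow the same idea with more statistical care.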
Article
Full-text available
Artificial intelligence (AI) has been widely used in various medical fields, such as image diagnosis, pathological classification, selection of treatment schemes, and prognosis analysis. Especially in the image-aided diagnosis of tumors, human-computer interaction has matured. However, the ethical groundwork for applying AI, as an emerging technology, to clinical decision-making is not yet fully in place, so clinical decision support systems (CDSSs) based on AI technology have not achieved the same degree of human-computer interaction in clinical practice as image-aided diagnosis systems. CDSSs currently used and promoted worldwide include Watson for Oncology and the Chinese Society of Clinical Oncology artificial intelligence (CSCO AI) system, among others. This paper summarizes the applications and clarifies the principles of AI in CDSSs, analyzes the difficulties of AI in oncology decisions, and provides a reference scheme for the application of AI in oncology decisions in the future.
Article
Purpose: To identify and quantify the barriers and facilitators to the use of clinical decision support systems (CDSSs) by primary care professionals (PCPs). Methods: A mixed-methods systematic review was conducted using a sequential synthesis design. PubMed/MEDLINE, PsycInfo, Embase, CINAHL, and the Cochrane library were searched in July 2021. Studies that evaluated CDSSs providing recommendations to PCPs and intended for use during a consultation were included. We excluded CDSSs used only by patients, described as concepts or prototypes, used with simulated cases, and decision supports not considered as CDSSs. A framework synthesis was performed according to the HOT-fit framework (Human, Organizational, Technology, Net Benefits), then a quantitative synthesis evaluated the impact of the HOT-fit categories on CDSS use. Results: A total of 48 studies evaluating 45 CDSSs were included, and 186 main barriers or facilitators were identified. Qualitatively, barriers and facilitators were classified as human (eg, perceived usefulness), organizational (eg, disruption of usual workflow), and technological (eg, CDSS user-friendliness), with explanatory elements. The greatest barrier to using CDSSs was an increased workload. Quantitatively, the human and organizational factors had negative impacts on CDSS use, whereas the technological factor had a neutral impact and the net benefits dimension a positive impact. Conclusions: Our findings emphasize the need for CDSS developers to better address human and organizational issues, in addition to technological challenges. We inferred core CDSS features covering these 3 factors, expected to improve their usability in primary care.