Review
Perceptions and Needs of Artificial Intelligence in Health Care to
Increase Adoption: Scoping Review
Han Shi Jocelyn Chew1, DPhil; Palakorn Achananuparp2, DPhil
1Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
2Living Analytics Research Centre, Singapore Management University, Singapore, Singapore
Corresponding Author:
Han Shi Jocelyn Chew, DPhil
Alice Lee Centre for Nursing Studies
Yong Loo Lin School of Medicine
National University of Singapore
Level 3, Clinical Research Centre
Block MD11, 10 Medical Drive
Singapore, 117597
Singapore
Phone: 65 65168687
Email: jocelyn.chew.hs@nus.edu.sg
Abstract
Background: Artificial intelligence (AI) has the potential to improve the efficiency and effectiveness of health care service
delivery. However, the perceptions and needs of such systems remain elusive, hindering efforts to promote AI adoption in health
care.
Objective: This study aims to provide an overview of the perceptions and needs of AI to increase its adoption in health care.
Methods: A systematic scoping review was conducted according to the 5-stage framework by Arksey and O’Malley. Articles
that described the perceptions and needs of AI in health care were searched across nine databases: ACM Library, CINAHL,
Cochrane Central, Embase, IEEE Xplore, PsycINFO, PubMed, Scopus, and Web of Science for studies that were published from
inception until June 21, 2021. Articles that were not specific to AI, not research studies, and not written in English were omitted.
Results: Of the 3666 articles retrieved, 26 (0.71%) were eligible and included in this review. The mean age of the participants
ranged from 30 to 72.6 years, the proportion of men ranged from 0% to 73.4%, and the sample sizes for primary studies ranged
from 11 to 2780. The perceptions and needs of various populations in the use of AI were identified for general, primary, and
community health care; chronic diseases self-management and self-diagnosis; mental health; and diagnostic procedures. The use
of AI was perceived to be positive because of its availability, ease of use, and potential to improve efficiency and reduce the cost
of health care service delivery. However, concerns were raised regarding the lack of trust in data privacy, patient safety,
technological maturity, and the possibility of full automation. Suggestions for improving the adoption of AI in health care were
highlighted: enhancing personalization and customizability; enhancing empathy and personification of AI-enabled chatbots and
avatars; enhancing user experience, design, and interconnectedness with other devices; and educating the public on AI capabilities.
Several corresponding mitigation strategies were also identified in this study.
Conclusions: The perceptions and needs of AI in its use in health care are crucial in improving its adoption by various stakeholders.
Future studies and implementations should consider the points highlighted in this study to enhance the acceptability and adoption
of AI in health care. This would facilitate an increase in the effectiveness and efficiency of health care service delivery to improve
patient outcomes and satisfaction.
(J Med Internet Res 2022;24(1):e32939) doi: 10.2196/32939
KEYWORDS
artificial intelligence; health care; service delivery; perceptions; needs; scoping; review
Introduction
Background
Rapid advances in artificial intelligence (AI)—software systems
designed to mimic human intelligence or cognitive
functions—have sparked confidence in its potential to enhance
the efficiency of health care service delivery and patient
outcomes [1-3]. However, although AI has been rapidly adopted
in many industries, such as finance and information technology
(IT), its adoption in health care has lagged because ethical and
safety considerations are more pronounced when human lives are
at stake [4].
AI-powered systems in health care can autonomously or
semiautonomously perform a wide variety of tasks, such as
medical diagnosis [5], treatment [6], and self-monitoring and
coaching [7,8]. In some studies, AI has been shown to
outperform human capabilities, such as analyses of chest x-ray
images by radiologists [9]. Not only is AI expected to improve
the quality of care and health outcomes for patients by
decreasing human errors, but it is also likely to free up time for
clinicians and health care workers from routine and repetitive
tasks, enabling them to focus on more complex tasks [9,10].
For instance, in many areas of medical imaging, fast and
accurate AI-assisted diagnosis could significantly increase
workflow efficiency, with more than 250 million images processed
per day [11]. Various AI chatbots have also been
developed to provide mental health counseling and assist
overburdened clinicians [9]. Through AI-enabled apps and
wearable devices, patients and the public could self-monitor
and self-diagnose symptoms, such as atrial fibrillation, skin
lesions, and retinal diseases [9].
Owing to the emerging nature of modern AI systems, the
perceptions and needs of affected stakeholders (eg, health care
providers, patients, caregivers, policy makers, and IT
technicians) on the use of AI in health care are not yet fully
understood. A large body of literature suggests that human
factors, such as trust, perceived usefulness, and privacy, play
an important role in the acceptance and adoption of past
technologies in health care, including handheld devices [12],
IT [13], and assistive technologies [14]. However, current
evidence remains broad and general, and little is known about
the perceptions and needs of AI in community health care. As
the world makes a paradigm shift from curative to preventive
medicine, AI holds a strong transformative potential to enhance
sustainable health care by empowering self-care, such as
self-monitoring and self-diagnosis. However, it is important to
first understand the perspectives of all direct users of AI-driven
systems (eg, patients and frontline health workers) and their
perceived needs to ensure its successful adoption across different
parts of the health care sector, especially community health
care. Thus, this study aims to present an overview of the
perceptions and needs of AI in community health care. The
implications of this study will help inform the design of future
health care–related AI technology to better fit the needs of users
and enhance the adoption and acceptability of the technology.
Definition of AI
First, as the term AI is broadly used in many disciplines to
represent various forms of intelligent systems and algorithms,
it is important to establish a concrete and unified definition of
AI for this study. Specifically, we adopted the definition of AI
proposed by the High-Level Expert Group on Artificial
Intelligence [15], which describes AI as both a technology and
a field of study:
Artificial intelligence (AI) systems are software (and
possibly also hardware) systems designed by humans
that, given a complex goal, act in the physical or
digital dimension by perceiving their environment
through data acquisition, interpreting the collected
structured or unstructured data, reasoning on the
knowledge, or processing the information, derived
from this data and deciding the best action(s) to take
to achieve the given goal. AI systems can either use
symbolic rules or learn a numeric model, and they
can also adapt their behaviour by analysing how the
environment is affected by their previous actions.
As a scientific discipline, AI includes several
approaches and techniques, such as machine learning
(of which deep learning and reinforcement learning
are specific examples), machine reasoning (which
includes planning, scheduling, knowledge
representation and reasoning, search, and
optimization), and robotics (which includes control,
perception, sensors, and actuators, as well as the
integration of all other techniques into cyber-physical
systems).
Furthermore, most, if not all, modern AI systems are considered
artificial narrow intelligence (ANI) or Weak AI [15], designed
to perform one or more specific tasks. In health care,
domain-specific tasks for ANI may vary from human perception
tasks, such as image recognition [16] and natural language
processing [17], to making complex clinical decisions, such as
medical diagnostics [18]. Many recent advances and
breakthroughs in ANI use learning-based approaches, namely,
deep learning, in which computational models consisting of
several layers of artificial neural networks (hence the titular
deep) are trained by learning from a massive amount of sample
data to perform specific tasks. Although recent ANI performance
appears very promising, ANI models are limited in their
generalizability, that is, models trained to perform tasks in one
domain cannot be generalized to other domains. For example,
ANI trained to diagnose diabetic retinopathy from fundus images
cannot be directly used to detect pneumonia from chest x-ray
images. In contrast to ANI, artificial general intelligence (AGI)
or Strong AI [15] belongs to a class of AI that displays true
human intelligence, capable of continuously learning and
performing any task like a real human. AGI is most likely what
the public has in mind when talking about AI, as it is frequently
portrayed in popular culture by sentient robots and self-aware
systems. At present, no AI system has come close to exhibiting
AGI capability. For a useful and concise
summary regarding the definitions, terminologies, and history
of AI, see the following technical reports: Ethics Guidelines for
Trustworthy AI [15] and Historical Evolution of Artificial
Intelligence [19].
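To make the distinction above concrete, the following minimal sketch (illustrative only; the image sizes, labels, and training data are hypothetical placeholders and are not drawn from any study in this review) shows the typical shape of a deep learning-based ANI system: a small stack of neural network layers trained on labeled examples for a single narrow task.

    # A minimal sketch of a deep learning-based ANI model in Python (PyTorch):
    # a small convolutional network trained for one narrow task, eg, labeling
    # chest x-ray images as "pneumonia" vs "normal". Illustrative only.
    import torch
    import torch.nn as nn

    class TinyImageClassifier(nn.Module):
        def __init__(self, num_classes: int = 2):
            super().__init__()
            # Stacked ("deep") layers: each convolution learns increasingly
            # abstract features from the raw pixels of the training data.
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 56 * 56, num_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classifier(self.features(x).flatten(1))

    model = TinyImageClassifier()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    # One gradient step on a dummy batch; real systems learn from massive
    # amounts of labeled sample data, as described above.
    images = torch.randn(4, 1, 224, 224)  # 4 grayscale 224x224 images
    labels = torch.tensor([0, 1, 0, 1])   # dummy task-specific labels
    loss = nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    optimizer.step()

The key property is task specificity: nothing in the trained weights generalizes beyond the single labeled task, which is why an ANI model trained for diabetic retinopathy cannot be reused for pneumonia detection.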
Methods
A systematic scoping review was conducted according to the
5-stage framework by Arksey and O’Malley [20]. Results were
reported according to the PRISMA (Preferred Reporting Items
for Systematic Reviews and Meta-Analyses) checklist
(Multimedia Appendix 1) [21].
Stage 1: Identifying the Research Question
Our research question was as follows: What is known about the
perceptions and needs of AI in health care?
Stage 2: Identifying Relevant Studies
Studies were searched from inception until June 21, 2021, using
a 3-step search strategy. First, potential keywords and Medical
Subject Headings terms were generated through iterative
searches on PubMed and Embase. Keywords such as machine
learning did not result in better search outcomes (ie, many
irrelevant results were retrieved, such as the use of machine
learning to explore perceptions of other topics); hence, they
were omitted. Next, keywords including artificial intelligence,
AI; public; consumer; community; perception*; preference*;
needs*; opinions*; and acceptability were searched through
nine databases: ACM Library, CINAHL, Cochrane Central,
Embase, IEEE Xplore, PsycINFO, PubMed, Scopus, and Web
of Science. Additional articles were also retrieved from the first
10 pages of the Google Scholar search results and the reference
lists of the included full-text articles. The specific database
searches combined with Boolean operators are detailed in
Multimedia Appendix 2.
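For illustration, keywords of this kind are typically combined with Boolean operators into a query such as the following (an illustrative sketch only; the exact strategy registered for each database is given in Multimedia Appendix 2):

    ("artificial intelligence" OR "AI") AND (public OR consumer OR community)
    AND (perception* OR preference* OR needs* OR opinions* OR acceptability)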
Stage 3: Study Selection
After removing duplicate articles, titles and abstracts were first
screened by HSJC for inclusion eligibility. Articles were
included if they were (1) focused on the use of AI in health care,
except those focused on using AI to improve surgical techniques;
(2) focused on perceptions, needs, and acceptability of AI in
health care; (3) empirical studies or systematic reviews; (4)
conducted with adults aged 18 years or older; and (5) situated in
a community setting.
Articles were excluded if they were (1) not specific to AI (eg,
general eHealth or mobile health); (2) pilot studies,
commentaries, perspectives, or opinion papers; and (3) not
presented in the English language. In total, 43 full-text articles
were screened independently by both coauthors, and
discrepancies were resolved through discussions and consensus.
Stage 4: Charting the Data
Data were extracted by HSJC using Microsoft Excel according
to the following headings: author, year, title, aim, type of
publication, study design, country, AI applications in health
care, data collection method, population characteristics, sample
size, age (mean or range), proportion of men, acceptability,
perceptions, needs and preferences, and limitations.
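For illustration, these headings correspond to a one-row-per-study extraction sheet whose header row could be rendered in machine-readable form as follows (a sketch of the structure only; the column names are the headings above, and the authors' exact spreadsheet layout is not reproduced here):

    author,year,title,aim,type_of_publication,study_design,country,
    ai_applications_in_health_care,data_collection_method,
    population_characteristics,sample_size,age_mean_or_range,
    proportion_of_men,acceptability,perceptions,needs_and_preferences,
    limitations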
Results
Stage 5: Collating, Summarizing, and Reporting
Results
A total of 3666 articles were retrieved from the initial search.
After removing duplicate articles, 50.74% (1860/3666) of titles
and abstracts were screened, and 0.91% (17/1860) of full-text
articles were excluded for reasons shown in Figure 1. A total
of 1.4% (26/1860) of articles were included in this study, with
the study characteristics summarized in Table 1 and detailed in
Multimedia Appendix 3 [22-47]. The mean age of participants
ranged from 30 to 72.6 years, and the proportion of men ranged
from 0% to 73.4%. Sample sizes for studies with human subject
responses ranged from 11 to 2780, and secondary data (ie,
journal articles and app reviews) ranged from 31 to 1826
[22-24]. Interestingly, 19% (5/26) of studies focused on the use
of chatbots in health care [23-27] and 31% (8/26) of studies
measured acceptability using questionnaires, surveys, interviews
[25,26,28-33], and a discrete choice experiment (Multimedia
Appendix 4 [22-32,34,36,37,39,41-44,47]) [34]. All the studies
showed at least moderate acceptability (ie, >50% of the
participants accepted the use of AI in health care), albeit in one
study only for minor conditions [26]. Age, IT skills,
preference for talking to computers, perceived utility, positive
attitude, and perceived trustworthiness were found to be
associated with AI acceptability [25,26].
Figure 1. PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow diagram of search strategy. AI: artificial intelligence.
Table 1. Summary of study characteristics (N=26). Values are n (%).

Country
- Australia and New Zealand [35]: 1 (4)
- Canada [27,36-38]: 4 (15)
- China [22,32,33,39,40]: 6 (23)
- France [41]: 1 (4)
- India [24,42]: 2 (8)
- Korea [47]: 1 (4)
- Saudi Arabia [29]: 1 (4)
- Switzerland [30]: 1 (4)
- United Kingdom [23,26,31,43,44]: 5 (19)
- United Kingdom, Cyprus, Australia, the Netherlands, Sweden, Spain, United States, and Canada [28]: 1 (4)
- United States [25,45,46]: 3 (12)

Type of publication
- Journal papers [22-29,31-41,43-47]: 24 (92)
- Conference papers [30,42]: 2 (8)

Study design
- Observational [22,24,27-30,33-35,39,43-47]: 15 (58)
- Qualitative [36-38,41,42]: 5 (19)
- Mixed methods [25,26,31,32,40]: 5 (19)
- Systematic review [23]: 1 (4)

Population characteristics
- General public [22,24,26,30,32-34,37,45]: 9 (35)
- Health care, government, technology, and industrial staff [27-29,35,36,40-44]: 10 (39)
- Patients and caregivers with specific diseases [25,31,34,36,38,39,47]: 7 (27)
- Mixture (systematic review) [23]: 1 (4)

Artificial intelligence applications in health care
- General health care [22,23,26,27,29,33,36,37,40,41,43]: 11 (42)
- Primary [44] and community health care [28,42]: 3 (12)
- Chronic disease self-management [25,31,47]: 3 (12)
- Self-diagnosis [30,32,34,39]: 4 (15)
- Mental health [24,38]: 2 (8)
- Diagnostics [35,45,46]: 3 (12)
Positive Perceptions
Overview
Several positive perceptions on the use of AI in health care were
highlighted in our findings (Table 2).
Table 2. Perceptions on the use of artificial intelligence (AI) in health care. Perceptions for each study are grouped under the table's seven columns: available on demand and user-friendly; efficiency; price; lack of trust in data privacy; lack of trust in patient safety; lack of trust in technology; and concerns over full automation. Columns not listed under a study were not specified (NS). CHW: community health care worker.

Abdi et al [28]
- Available on demand and user-friendly: Able to collect data nonintrusively
- Efficiency: Could support the self-care needs of older people (mobility, self-care and domestic life, social life and relationships, psychological support, and access to health care); potential uses for remote monitoring and prompting daily reminders (eg, medications)
- Price: Cost was seen as both a facilitator of and a barrier to older people's adoption of AI
- Lack of trust in data privacy: Especially in voice-activated devices
- Lack of trust in patient safety: Deemed technically and commercially ready to support the care needs of older people

Abdullah and Fakieh [29]
- Efficiency: Speeds up health care processes
- Lack of trust in patient safety: AI was unable to provide opinions in unexpected situations
- Concerns over full automation: Most health care employees feared that AI would replace their jobs (mean score 3.11 of 4)

Baldauf et al [30]
- Available on demand and user-friendly: Constant availability, not restricted by physical location
- Efficiency: Quicker diagnosis and no waiting time
- Price: AI could be a cost-saving alternative
- Lack of trust in data privacy: There were concerns over data privacy
- Lack of trust in patient safety: Users were unsure about the legality of official medical certification and app trustworthiness
- Concerns over full automation: Only a minority would rely solely on an AI-driven app for assessing health

Castagno and Khalifa [43]
- Efficiency: In all, 79% of health care staff believed AI could be useful or extremely useful in their field of work
- Lack of trust in data privacy: In all, 80% of health care staff believed there may be serious privacy issues
- Concerns over full automation: Overall, 10% of health care staff worried that AI would replace their jobs

Easton et al [31]
- Lack of trust in data privacy: Patients were not concerned over data sharing
- Lack of trust in patient safety: Patients were unsure whether to treat a chatbot as a real physician or an adviser

Gao et al [22]
- Lack of trust in data privacy: Distrust of AI companies accounted for a quarter of all negative opinions among social media users
- Lack of trust in patient safety: Social media users were pessimistic about the immaturity of AI technology
- Concerns over full automation: Less than half of the social media posts expressed that AI would completely or partially replace human doctors

Griffin et al [25]
- Efficiency: The majority were interested in using a chatbot to help manage medications and refills, communicate with care teams, and provide accountability toward self-care tasks
- Lack of trust in data privacy: There were concerns with chatbots providing too much information and invading privacy
- Lack of trust in patient safety: There were concerns with chatbots making overwhelming demands for lifestyle changes

Kim [47]: NS for all columns

Laï et al [41]
- Lack of trust in data privacy: There were legal difficulties in accessing individual health data, regulating use, striking a balance between health, social justice, and freedom, and achieving confidentiality and respect for privacy

Li et al [32]
- Lack of trust in patient safety: AI may not understand complex emotional problems and may give incurable diagnoses; unsure whether doctors would accept the information provided by the AI

Liu et al [34]
- Lack of trust in patient safety: The majority were confident that AI diagnosis methods would outperform human clinician diagnosis methods because of higher accuracy
- Concerns over full automation: The majority preferred to receive combined diagnoses from both AI and human clinicians

Liu et al [39]
- Price: Acceptability depends on the expense of AI diagnosis compared with that of physicians
- Lack of trust in patient safety: Accuracy was deemed the most important attribute for AI uptake

Liyanage et al [44]
- Efficiency: Improves efficiency through decision support for primary health care processes and pattern recognition in imaging
- Lack of trust in patient safety: There were concerns over the risk of medical errors, bias, and secondary effects of using AI (eg, insurance)
- Concerns over full automation: AI technology is still not competent to replace human decision-making in clinical scenarios

McCradden et al [36]
- Efficiency: Potential for faster and more accurate analyses; ability to use more data
- Lack of trust in data privacy: There were concerns about privacy, commercial motives, and other risks, and mixed views about explicit consent for research; transparency is needed
- Lack of trust in patient safety: Computer-aided decisions still require human verification
- Concerns over full automation: Fear of losing human touch and skills from overreliance on machines

McCradden et al [37]
- Efficiency: Predictive modeling performed on primary care health data and business analytics for primary care providers; AI has the potential to improve managerial and clinical decisions and processes, which would be facilitated by common data standards
- Lack of trust in data privacy: Nonconsented use of health data is acceptable with disclosure and transparency; selling health data should be prohibited; some privacy-health outcomes trade-off is acceptable
- Lack of trust in patient safety: A few patients and caregivers felt that allocation of health resources should be done via computerized output, and a majority stated that it was inappropriate to delegate such decisions to a computer

Milne-Ives et al [23]
- Available on demand and user-friendly: Easy to learn and use
- Efficiency: Speeds up the process of service delivery and performance; respondents appreciated reminders and assistance in forming routines, chatbot agents in facilitating learning, and agents in providing accountability (eg, regular check-ins and follow-ups); multimodal interactions (eg, voice and touch) were viewed positively
- Lack of trust in patient safety: Unable to sufficiently encompass the real situational complexity; the electronic physician did not have the ability to go deep enough, provide access to other materials, or provide enough information

Nadarzynski et al [26]
- Available on demand and user-friendly: Chatbots were perceived as a convenient tool that could facilitate the seeking of health information on the web
- Efficiency: If free at the point of access, chatbots were seen as time-saving and useful platforms for triaging users to appropriate health care services
- Lack of trust in data privacy: Some participants were concerned about the ability of chatbots to keep sensitive data secure and confidential; the level of anonymity offered by chatbots was viewed positively by several participants
- Lack of trust in patient safety: Risk of harm from inaccurate or inadequate advice; immature in performing a diagnosis, although providing general health advice is acceptable
- Lack of trust in technology: Uncertain about the quality, trustworthiness, and accuracy of the health information provided by chatbots

Okolo et al [42]
- Efficiency: An AI app would be able to perform some of the manual tasks, make the work of CHWs more efficient, and help CHWs and patients in decision-making processes
- Lack of trust in patient safety: Concerned over AI failures or misdiagnoses; the AI app might serve to reinforce the expertise of CHWs and improve patients' understanding of the diagnosis
- Concerns over full automation: AI would never completely replace health care workers because of the need for human interaction

Palanica et al [27]
- Efficiency: Many physicians believed that chatbots would be most beneficial for administrative tasks such as scheduling physician appointments, locating health clinics, or providing medication information
- Lack of trust in patient safety: Chatbots could be a risk to patients if they self-diagnose too often and do not accurately understand the diagnoses
- Concerns over full automation: Chatbots alone are not able to provide effective care for all patients because of limited knowledge of personal factors

Prakash and Das [24]
- Available on demand and user-friendly: Always available at the touch of a button and user-friendly
- Price: The price of mental health chatbots could be a decisive factor in places with a poor health insurance system
- Lack of trust in data privacy: Data privacy is a major barrier that prevents the adoption of mental health chatbots
- Lack of trust in patient safety: Chatbots may be useful in managing mental health conditions but are not good enough for complex problems; they may even be more harmful to vulnerable patients through poor advice
- Lack of trust in technology: Doubtful about reliability and functionality

Scheetz et al [35]
- Efficiency: The top three potential advantages were improved patient access to disease screening, improved diagnostic confidence, and enhanced efficiency (ie, reduced time spent by specialists on monotonous tasks)
- Lack of trust in data privacy: There were concerns over the divestment of health care to large technology and data companies
- Lack of trust in patient safety: There were concerns over medical liability because of machine errors
- Lack of trust in technology: AI would need to perform substantially better than the average specialist in screening and diagnosis
- Concerns over full automation: There is decreasing reliance on medical specialists for diagnosis and treatment advice

Stai et al [45]
- Price: Almost all (94%) participants were willing to pay for a review of medical imaging by an AI
- Lack of trust in technology: Nearly equal trust in AI versus physician diagnoses; significantly more likely to trust an AI diagnosis of cancer over a physician's diagnosis

Sun and Medaglia [40]
- Price: High treatment costs for patients that do not generate profits for hospitals
- Lack of trust in data privacy: Lack of trust toward AI-based decisions; unethical use of shared data
- Lack of trust in patient safety: Doubts about the ability of AI to identify country-specific patient disease profiles
- Lack of trust in technology: There were concerns over the lack of data integration; standards of data collection, format, and quality; algorithm opacity; and the ability to read unstructured data

Tam-Seto et al [38]
- Available on demand and user-friendly: It could support those not currently accessing mental health services
- Efficiency: It would address the perceived mental health service gap
- Lack of trust in data privacy: No assurance of users' privacy
- Lack of trust in patient safety: Trust in the app, as it discloses that it was informed by the Canadian military experience (credibility)
- Lack of trust in technology: There were doubts over overall sustainability

Xiang et al [33]
- Efficiency: Health care workers prefer AI to alleviate daily repetitive work and improve outpatient guidance and consultation; the current auxiliary and partial substitution effects of AI are recognized by >90% of the public, and both groups have positive attitudes regarding AI development
- Lack of trust in patient safety: Both health care and non-health care workers express more trust in real doctors than in AI
- Concerns over full automation: A very small minority of health care and non-health care workers expect that full automation is likely to happen

Zhang et al [46]
- Lack of trust in data privacy: There were concerns about cybersecurity
- Lack of trust in technology: There were concerns about the accuracy, reliability, quality, and trustworthiness of AI outputs, such as the predictions and recommended medical information
- Concerns over full automation: A supplementary service, rather than a replacement of the professional health force, is required for AI to be particularly useful in helping patients comprehend their physician's diagnosis
Availability and Ease of Use
Of the 26 studies, 3 (12%) highlighted the advantage of
AI being constantly available without restrictions such as
physical location, time, and access to a structured treatment
[24,30,38]; 3 (12%) other studies also mentioned respondents'
appreciation of how an AI system could collect
data remotely in a nonintrusive and user-friendly manner
[23,24,28]. These studies mostly represented the perceptions
of consumers and health care providers [24,30,38] (Multimedia
Appendix 3). Only 4% (1/26) of studies did not mention the
population characteristics [24].
Improves Efficiency and Reduces the Cost of Health
Care Service Delivery
In all, 58% (15/26) of studies highlighted the potential of AI to
improve the efficiency of health care service delivery in terms
of remote monitoring [28], providing health-related reminders
[23,28], increasing the speed and accuracy of health care
processes (eg, consultation wait time, triaging, diagnosis, and
managing medication refills) [26,29,30,35-37,44], facilitating
care team communications, improving care accountability (eg,
regular check-ins and follow-ups for information gathering)
[23], and taking over repetitive manual tasks (eg, scheduling,
patient education, and vital signs monitoring) [27]. Some
respondents also appreciated the use of AI to provide a second
opinion on physicians' diagnoses or evaluations [42,46]. Overall,
12% (3/26) of studies [24,34,45] discussed the potential
cost-saving capacity of AI that influences AI acceptability,
whereas 4% (1/26) mentioned that the provision of an AI service
using IBM Watson caused patients to incur higher treatment
costs that did not translate to profits for the hospital after
factoring onboarding of the technology [40]. There was a good
proportion of representation from the health care and IT staff
(53.3%) [27-29,36,37,40,42,44] and those from the public,
including patients (Multimedia Appendix 3). Only 4% (1/26)
of the studies did not mention the population characteristics
[24].
Concerns and Mitigation Strategies
Overview
Our findings highlight several concerns (Table 2) and mitigation
strategies (Table 3).
Table 3. Needs and mitigation strategies of artificial intelligence (AI) in health care. Needs for each study are grouped under the table's five columns: need for transparency, credibility, and regulation; lack of personalization and customizability; perceived empathy and personification; design, user experience, and interconnectedness with other devices; and educating the public on AI capabilities. Columns not listed under a study were not specified (NS).

Abdi et al [28]
- Design, user experience, and interconnectedness with other devices: Implementing user-led design principles could facilitate the acceptability and uptake of these technologies

Abdullah and Fakieh [29]
- Educating the public on AI capabilities: Most respondents had a general lack of AI knowledge (mean score 2.95 of 4) and were unaware of the advantages and challenges of AI applications in health care

Baldauf et al [30]
- Need for transparency, credibility, and regulation: Need a guarantee of anonymized transmission and analysis of users' personal health data
- Lack of personalization and customizability: Personalized explanation of analyses; disease information; treatment cost; recommending a physician's visit; alternative therapies; prevention information; treatment companion; mental support; objectivity and independence
- Perceived empathy and personification: Lack of personal face-to-face contact with a human expert

Castagno and Khalifa [43]: NS for all columns

Easton et al [31]
- Need for transparency, credibility, and regulation: Needed clarity on whether the chatbot was a physician or an adviser
- Lack of personalization and customizability: The system should allow personalization
- Perceived empathy and personification: The chatbot should be enriched by the ability to detect emotion (distress, fatigue, and irritation) in speech and nonverbal cues to build a therapeutic relationship between the agent and the patient
- Design, user experience, and interconnectedness with other devices: Personification of the chatbot should be emotionally expressive; multimodal interactions and interconnectedness with other consumer devices were suggested

Gao et al [22]: NS for all columns

Griffin et al [25]
- Design, user experience, and interconnectedness with other devices: Some older adults described limited use of smartphones, given the small screen or inability to keep track of the device

Kim [47]: NS for all columns

Laï et al [41]
- Need for transparency, credibility, and regulation: Need for app regulation to create a more permissive regulatory framework and to achieve confidentiality and respect for privacy

Li et al [32]
- Need for transparency, credibility, and regulation: Credibility of the intelligent self-diagnosis system can be improved through transparency (eg, showing accuracy scores) and by stating whether doctors would accept the information provided by the AI
- Lack of personalization and customizability: AI systems may provide more specific, personalized information and advice

Liu et al [34]: NS for all columns

Liu et al [39]: NS for all columns

Liyanage et al [44]: NS for all columns

McCradden et al [36]
- Need for transparency, credibility, and regulation: Need for transparency on how and by whom their data were used

McCradden et al [37]
- Need for transparency, credibility, and regulation: Need for transparency, disclosure, reparations, deidentification of data, and use within trusted institutions

Milne-Ives et al [23]
- Lack of personalization and customizability: Need more customization or availability of feature options (eg, preformatted or free-text options)
- Perceived empathy and personification: Need for greater interactivity or relational skills in conversational agents; respondents liked that the agent had a personality and showed empathy, which improves personal connection; others had difficulty empathizing with the agent or reported disliking its limited conversation and responses
- Design, user experience, and interconnectedness with other devices: Interaction was too long, the use of nonverbal expressions by the avatar was not appealing, and there was a lack of clarity regarding the aim of the chatbot; better integration of the agent with electronic health record systems (for a virtual physician) or health care providers (for an asthma self-management chatbot) would be useful

Nadarzynski et al [26]
- Need for transparency, credibility, and regulation: Need to increase transparency of the information source
- Perceived empathy and personification: Lack of empathy and inability of chatbots to understand more emotional issues, especially in mental health; the responses given by chatbots were seen as depersonalized, cold, and inhuman; they were perceived as inferior to physician consultation, although anonymity could facilitate the disclosure of more intimate or uncomfortable aspects of health
- Educating the public on AI capabilities: There was a general lack of familiarity and understanding of health chatbots among participants

Okolo et al [42]: NS for all columns

Palanica et al [27]
- Perceived empathy and personification: Many physicians believed that chatbots cannot display human emotion

Prakash and Das [24]
- Lack of personalization and customizability: There were user input restrictions during chatbot conversations, where the chatbot forced users to respond to a list of choices
- Perceived empathy and personification: Mixed findings on perceived empathy (some users perceived the chatbot to be warm and friendly, whereas others found it to be unsympathetic and rude); mixed findings on preference for a lifelike chatbot (some felt it was a little creepy and weird); the nonjudgmental nature of chatbots is a strong motivator of adoption, and the chatbot should respond spontaneously in a contingent, humanlike manner

Scheetz et al [35]
- Educating the public on AI capabilities: A minority (13.8%) of the participants felt that the specialist training colleges were adequately prepared for the introduction of AI into clinical practice; education was identified as a priority to prepare clinicians for the implementation of AI in health care

Stai et al [45]: NS for all columns

Sun and Medaglia [40]
- Educating the public on AI capabilities: Insufficient knowledge of the values and advantages of AI technology; unrealistic expectations toward AI technology

Tam-Seto et al [38]
- Educating the public on AI capabilities: Managing the public's expectations of the capabilities of such an app

Xiang et al [33]
- Educating the public on AI capabilities: More than 90% of health care workers expressed a willingness to devote time to learning about AI and participating in AI research

Zhang et al [46]
- Need for transparency, credibility, and regulation: The majority of participants expressed the need to increase system transparency by explaining how the AI arrived at its conclusion
- Lack of personalization and customizability: Need more personalized and actionable information; AI should be enhanced with features that can help recommend personalized questions to ask physicians
- Perceived empathy and personification: Concerns over lack of empathy
Lack of Trust
Data Privacy
In all, 58% (15/26) of studies described the respondents’ lack
of trust regarding how their personal data will be collected (eg,
unknowingly through voice-activated devices) and handled (eg,
by whom and how) [22,24-26,28,30,31,35,36,38,40,41,43,46].
However, 4% (1/26) of the studies reported no concerns
regarding data sharing. This could be because the respondents
were patients with chronic obstructive pulmonary disease who
may have been accustomed to their data being shared for clinical
decision-making purposes [31]. Potential mitigation strategies
suggested were to guarantee anonymity [26] and increase
transparency in how the collected data will be used (eg, by
which third party and how) [24,37]. There was a good proportion
of representation from the general public, including patients
(53.3%) [22,24-26,30,31,37,38,46] and health care providers
and IT staff (Multimedia Appendix 3).
Patient Safety
Of the 26 studies, 21 (81%) discussed the respondents’ lack of
trust in an AI to ensure patient safety while performing its tasks,
especially regarding providing accurate information on rare
conditions or unexpected situations [22-27,29-42,44]. Other
concerns were regarding the credibility of AI-based
recommendations (eg, whether it was validated by medical
professionals) [30,32], maturity in the technology to provide
safe and realistic recommendations [22,25], medical liability
from the risk of medical errors and bias [26,35,36,44], secondary
effects of AI-based diagnoses such as insurance claims [44],
and miscommunications [26]. The potential mitigation strategies
suggested were the provision of AI-specific regulations
[30,31,41] and transparency about the system's credibility,
accuracy, and how a recommendation is derived (eg, showing who
developed the system and the reasoning and reliability behind
it, based on the information source and personal information)
[32,38]. In contrast, 4% (1/26) of studies reported that the
respondents were confident that the AI would outperform human
clinical diagnoses because of higher accuracy and lower human
errors [39]. Most respondents accepted AI providing general
health advice for minor ailments. Most of the responses
represented the voices of the public, including patients (66.6%)
[22-26,30-32,34,35,37-40] (Multimedia Appendix 3).
Technology
Of the 26 studies, 6 (23%) discussed the participants'
lack of trust in the maturity of AI technology in providing
reliable and accurate information to support health-related
predictions and recommendations [24,26,35,38,40,46]. This
could be related to concerns over the lack of integration and
synthesis of information from various sources, standardization
of data collection, and the overall sustainability of AI-assisted
health care service delivery [40,45]. However, 8% (2/26) of
studies reported that respondents trusted AI diagnoses as much
as a human physician's diagnoses [28,45]. Possible
mitigation strategies include increasing system transparency
and reporting system accuracies [26,46]. Only 8% (2/26) of
studies represented the voices of health care and IT staff
[35,40] (Multimedia Appendix 3).
Potential Impacts of Full Automation
In all, 46% (12/26) of studies discussed the perceptions of
respondents on the possibility and impacts of full automation
on the health care industry, especially in terms of diagnoses, all
of which reported that it is unlikely that AI will completely
replace health care professionals [22,27,29,30,33,35,36,39,42-44,46].
This could largely be because of the immaturity of
AI technology and its limitations in providing human-like
interactions (which build trust) [27]. Instead, many patients
preferred a combination of both AI and human physicians in
diagnoses to achieve a more accurate and comprehensive
evaluation [30,39]. Most of the responses represented the voices
of health care and IT staff (58.3%) [27,29,35,36,42-44]
(Multimedia Appendix 3).
Needs to Improve Adoption of AI in Health Care
Besides the needs highlighted to mitigate the concerns, several
additional features were found to potentially improve the
adoption of AI in health care (Table 3).
Enhance Personalization and Customizability
Of the 26 studies, 6 (23%) discussed the need for AI to
personalize information such as the explanation of diagnoses,
recommendations, patient education, and even pertinent
questions or issues to raise to their physicians [23,24,30-32,46].
Some studies also mentioned the need to customize chatbot
features according to user preferences (for fixed options or
free-texts) [23,24].
Enhance Empathy and Personification of AI-Enabled
Chatbots and Avatars
In all, 27% (7/26) of studies highlighted the respondents’
concern over the lack of empathy, which is a crucial element
of human interaction to build trust between service providers
and consumers. However, empathy must be displayed tactfully
in verbal and nonverbal expressions such that it does not appear
to be “creepy and weird,” especially in populations with mental
health issues [24]. Personification was also emphasized to
increase the relatability, connection, and appeal to interact with
the chatbot or avatar [23]. Perceived anonymity in interacting
with the chatbot was also highlighted to assist in communication
regarding sensitive topics [26].
Enhance User Experience, Design, and
Interconnectedness With Other Devices
Overall, 15% (4/26) of studies described the need to improve
user experience to increase user engagement with AI
[23,25,28,31]. Strategies include needs-based interaction timing,
the use of suitable verbal and nonverbal expressions,
interconnectedness with other information sources (eg, electronic
health record), apps (eg, calendar), and devices (eg, smart home
technology–enabled devices).
Educate the Public on AI Capabilities
Of the 26 studies, 6 (23%) highlighted the lack of public
and clinical awareness of the capabilities of AI in health care,
and in most of these studies, respondents expressed their
willingness to learn [26,29,33,35,38,40]. A better understanding
of the advantages and disadvantages of AI in health care could
enhance health care service delivery efficiency while keeping
expectations of AI realistic.
Discussion
Principal Findings
On the basis of the 26 articles included in this scoping review,
we identified the perceptions and needs of various populations
in the use of AI for general, primary, and community health
care; chronic diseases self-management; self-diagnosis; mental
health; and diagnostic procedures. However, the use of AI in
health care remains challenged by the common perceptions,
concerns, and unmet needs of various stakeholders such as
patients, health care professionals, governmental or legal
regulatory bodies, software developers, and industrial providers.
Simply introducing AI into health care systems without
understanding the needs of stakeholders will not lead to a
sustainable change [50].
Our results showed that, similar to most information
technologies, AI was generally favored for its on-demand
availability, ease of use, and potential to improve efficiency
and reduce the cost of health care service delivery. These
features could enhance patients' compliance with treatments and
recommendations that might otherwise be inaccessible or
inconvenient. For example, patients are traditionally required
to commit to a physician's consultative appointment that may be
relatively inflexible because of long patient lists, and they may
be forced to skip the consultation because of a scheduling
conflict. AI confers the benefit
of information collection and dissemination beyond the
constraints of time and place, which have been shown to
improve medication adherence through an AI-based smartphone
app [51] and diet and exercise adherence through an AI-based
virtual health assistant [52]. Our findings also demonstrated
that AI is valued for its potential to speed up health care
processes such as diagnosis, waiting time, communication with
care teams, decisional support, and other routine tasks (eg,
progress monitoring) that can be automated. This increase in
service delivery efficiency frees up time and resources for
clinicians to focus on tasks that involve more unexpected
variabilities such as dealing with rare disease management and
interacting with patients, thereby reducing the risk of burnout,
job dissatisfaction, and manpower shortage [53].
Although our findings showed high rates of acceptability,
concerns were raised about the lack of trust (in data privacy,
patient safety, and technology maturity) and the impacts of
AI-driven automation on health care job security and health
care services. Ethical controversies surrounding the use of AI
in health care have been long-standing. Although there are
increasingly more regulatory guidelines available, such as those
developed by the World Health Organization [54] and the
European Union [55], the use of AI in health care remains
debatable because of the challenges in ensuring data privacy
and proper data use [56]. This is especially true when data
collection is conducted through third-party apps, such
as Facebook Messenger (Meta Platforms), of which privacy
policies are governed by technology companies and not health
care institutions [24]. Moreover, although there are privacy and
security precautionary measures, the increasing reports of data
leaks and vulnerabilities in electronic medical record databases
erode population trust. Future security and transparency
measures could consider the use of blockchain technology, and
privacy laws should be properly delineated and transparent [57].
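As a concrete illustration of the transparency measures suggested above, the following minimal Python sketch (a toy example, not a production blockchain design, and not drawn from any included study) shows the core idea behind blockchain-style audit trails for health data access: each log entry commits to the previous one through a cryptographic hash, so any retroactive tampering breaks the chain and becomes detectable.

    import hashlib
    import json

    def append_entry(chain: list, record: dict) -> None:
        # Each entry stores the hash of the previous entry, forming a chain.
        prev_hash = chain[-1]["hash"] if chain else "0" * 64
        body = {"record": record, "prev_hash": prev_hash}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        chain.append({**body, "hash": digest})

    def verify(chain: list) -> bool:
        # Recompute every hash; any edited record or broken link fails.
        prev_hash = "0" * 64
        for entry in chain:
            body = {"record": entry["record"], "prev_hash": entry["prev_hash"]}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
                return False
            prev_hash = entry["hash"]
        return True

    # Hypothetical access events for a patient's record.
    log: list = []
    append_entry(log, {"actor": "third_party_app", "action": "read",
                       "item": "heart_rate"})
    append_entry(log, {"actor": "clinician_42", "action": "read",
                       "item": "ecg"})
    assert verify(log)  # mutating any earlier record would make this fail

Such a tamper-evident log does not by itself resolve consent or data minimization, but it makes the record of who accessed what independently checkable.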
This review also found the need to enhance the personalization
and customizability of information provided by AI, the
incorporation of empathy and personification in AI-based
conversational agents, the user experience through better design
and interconnectedness with other devices and systems, and the
need to educate the public on AI capabilities. Concerning
personalized health care, reports generated by AI should be
integrated and explained in accordance with each individual’s
demographic and clinical profile to facilitate self-management
[46]. We also identified the need for AI not only to assist
patients in understanding their medical condition but also to
provide relevant treatment options and personalized
recommendations with intuitive actions (eg, a button to call an
ambulance when deemed necessary by the AI) [31].
This coincides with existing studies that highlight the
predictive power of AI in supporting the prevention of disease
onset or deterioration through interventions tailored to user
preferences [58]. For example, AI has been used to provide
just-in-time adaptive interventions that prompt users to perform
healthy behavior changes (eg, healthy diet and exercise and
smoking cessation) based on constant data collection of their
behaviors and preferences [49]. However, the collection of
users' behavioral or clinical information should also consider
the customizability of input options (eg, providing predefined
options or allowing for free-text input) to enhance the usability
and adoption of such systems, depending on user preferences
[24]. Personification of AI-based conversational agents to
express human-like identity, personality, empathy, and emotions
was also highlighted as an area of improvement to enhance
human-chatbot interactions and eventually user adoption [59].
It was also important for the AI systems to be accessible through
various devices (eg, tablets, televisions, laptops, and smart home
appliances) and modes (eg, text and speech) for the convenience
of information consumption and data collection. Finally, our
findings suggest a need to address the knowledge deficit in the
definition, capacity, and functions of AI. This could be done
by cultivating AI literacy and exposure from childhood [60]
and incorporating the AI curriculum in health care training and
upgrading courses [61].
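To illustrate the just-in-time adaptive interventions mentioned above, the following toy Python sketch (all field names, thresholds, and messages are hypothetical and not taken from the cited studies) shows how a behavior-change prompt can be conditioned on continuously collected behavior data and a learned user preference:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class DailyContext:
        steps: int               # step count streamed from a wearable
        prefers_evening: bool    # preference learned from past engagement
        hour: int                # current local hour (0-23)

    def nudge(ctx: DailyContext) -> Optional[str]:
        # Deliver prompts only within the user's preferred time window.
        in_window = ctx.hour >= 18 if ctx.prefers_evening else ctx.hour < 12
        if in_window and ctx.steps < 4000:
            return "Activity is below your usual level; how about a short walk?"
        return None  # at the wrong moment, the best intervention is none

    print(nudge(DailyContext(steps=2500, prefers_evening=True, hour=19)))

Real just-in-time adaptive interventions replace the fixed threshold with models updated from each user's ongoing behavior, but the tailoring logic has this basic shape.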
Overall, our study findings are consistent with well-established
theories such as the Technology Acceptance Model, whose second
version, proposed by Venkatesh and Davis [62], posits that
technology acceptance is strongly associated with perceived
usefulness and perceived ease of use, which are in turn
influenced by subjective norms, image, job relevance, output
quality, result demonstrability, experience, and voluntariness
[63]. Therefore, to enhance the acceptability of AI in health
care applications, its perceived usefulness over and above the
current standard practices such as capacity to increase service
delivery efficiency and community-based self-diagnostic
accuracy should be emphasized. Such messages should be
designed to be relevant to the individual and organizational
adopters of a social system through various communication
channels and change agents (ie, gatekeepers and opinion
leaders). Such messages should be persuasive enough to spark the
five stages of adoption described in Rogers' diffusion of
innovation theory [64]: knowledge, persuasion, decision,
implementation, and confirmation. Different strategies are also
needed to correspond with the different categories of adopters,
namely, the innovators, early adopters, early majority, late
majority, and laggards. Different rates of technology adoption
are associated with one’s risk tolerance related to higher social
economic status, education level, and financial stability [65].
An example is the case of AI adoption in early detection and
management of chronic diseases in the United Arab Emirates,
where success was attributed to managerial, organizational,
operational, and IT infrastructure factors that map onto the
factors of the Technology Acceptance Model [66]. However, advanced
technologies such as AI continue to be relatively expensive and
require eHealth literacy, which may widen the digital divide,
and therefore the data divide and health disparity among
societies. According to a report published in The Lancet, the
internet remains inaccessible to approximately 50% of the global
population because of a digital divide [67]. In addition, there
are specific guidelines on the implementation of AI in health
care service delivery, such as the quality of data and certification
of AI systems, which may deter adoption [68].
Limitations
This study had several limitations. First, only articles written
in English were retrieved, possibly limiting the
comprehensiveness of our findings. However, we conducted a
search on Google Scholar to supplement the electronic database
search for more relevant papers. Second, the studies were largely
heterogeneous in their study designs, research aims, and data
collection methods. Third, there were limited studies on the
perceptions of AI among clinicians and researchers, who could
have provided outlooks beyond those of the general public. Finally, the
public’s perceptions of AI in health care may be limited by their
knowledge of the definitions and capabilities of AI, as
highlighted in our findings that there is a need to enhance the
public’s knowledge on AI. Therefore, the priority or importance
of each perception and need could not be evaluated. The
inclusion of articles based on our definition of AI could also
have limited the scope of this study. Studies that considered
different definitions of AI may have been excluded.
Recommendations for Future Design and Research
This study highlighted the perceptions and needs of AI to
enhance its adoption in health care. However, one major
challenge lies in the extent to which AI can be tailored to each
individual's unique preferences and, if such preferences vary
widely, in how data can be aggregated for analysis and applied
in specific health care applications. Therefore, future studies
that use AI should not only consider the issues raised in this
study but also clarify their applicability to the intended
applications and target populations. A prior needs-based analysis
is recommended before the development of AI systems.
Conclusions
Although AI is valued for its 24/7 availability in health care service delivery, ease of use, and capacity to improve the efficiency of health care service provision, concerns remain over data privacy, information credibility, and technological maturity. Several mitigation strategies, such as enhancing transparency about predictive accuracy and information sources, were identified, and other areas of improvement were also highlighted. Future studies and AI development efforts should consider the points raised in this study to enhance the adoption and refinement of AI and thereby improve health care service delivery.
Acknowledgments
This research was supported by the National University Health System Internal Grant Funding under grant
NUHSRO/2021/063/RO5+6/FMPCHSRG-Mar21/01 and the National Research Foundation, Singapore, under its Strategic
Capabilities Research Centres Funding Initiative. Any opinions, findings, conclusions, or recommendations expressed in this
material are those of the author or authors and do not reflect the views of the National University Health System or the National
Research Foundation, Singapore.
Conflicts of Interest
None declared.
Multimedia Appendix 1
PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) checklist.
[DOCX File, 16 KB - Multimedia Appendix 1]
Multimedia Appendix 2
Database search details.
[DOCX File, 14 KB - Multimedia Appendix 2]
Multimedia Appendix 3
Study characteristics.
[DOCX File, 21 KB - Multimedia Appendix 3]
Multimedia Appendix 4
Acceptability of artificial intelligence use in health care.
[DOCX File, 16 KB - Multimedia Appendix 4]
References
1. Panch T, Szolovits P, Atun R. Artificial intelligence, machine learning and health systems. J Glob Health 2018
Dec;8(2):020303 [FREE Full text] [doi: 10.7189/jogh.08.020303] [Medline: 30405904]
2. Chew HS, Ang WH, Lau Y. The potential of artificial intelligence in enhancing adult weight loss: a scoping review. Public
Health Nutr 2021 Jun;24(8):1993-2020 [FREE Full text] [doi: 10.1017/S1368980021000598] [Medline: 33592164]
3. Panch T, Pearson-Stuttard J, Greaves F, Atun R. Artificial intelligence: opportunities and risks for public health. Lancet
Digit Health 2019 May;1(1):13-14. [doi: 10.1016/s2589-7500(19)30002-0]
4. Arora A. Conceptualising artificial intelligence as a digital healthcare innovation: an introductory review. Med Devices
(Auckl) 2020 Aug 20;13:223-230 [FREE Full text] [doi: 10.2147/MDER.S262590] [Medline: 32904333]
5. Dilsizian SE, Siegel EL. Artificial intelligence in medicine and cardiac imaging: harnessing big data and advanced computing
to provide personalized medical diagnosis and treatment. Curr Cardiol Rep 2014 Jan;16(1):441. [doi:
10.1007/s11886-013-0441-8] [Medline: 24338557]
6. Hashimoto D, Rosman G, Rus D, Meireles O. Artificial intelligence in surgery: promises and perils. Ann Surg 2018
Jul;268(1):70-76 [FREE Full text] [doi: 10.1097/SLA.0000000000002693] [Medline: 29389679]
7. Stein N, Brooks K. A fully automated conversational artificial intelligence for weight loss: longitudinal observational study
among overweight and obese adults. JMIR Diabetes 2017 Nov 01;2(2):e28 [FREE Full text] [doi: 10.2196/diabetes.8590]
[Medline: 30291087]
8. Sahoo D, Hao W, Ke S, Xiongwei W, Le H, Achananuparp P, et al. FoodAI: food image recognition via deep learning for
smart food logging. In: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data
Mining. 2019 Presented at: 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining; Aug
4 - 8, 2019; Anchorage AK USA. [doi: 10.1145/3292500.3330734]
9. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med 2019 Jan;25(1):44-56.
[doi: 10.1038/s41591-018-0300-7] [Medline: 30617339]
10. Yu K, Beam AL, Kohane IS. Artificial intelligence in healthcare. Nat Biomed Eng 2018 Oct;2(10):719-731. [doi:
10.1038/s41551-018-0305-z] [Medline: 31015651]
11. Beam AL, Kohane IS. Translating artificial intelligence into clinical care. J Am Med Assoc 2016 Dec 13;316(22):2368-2369.
[doi: 10.1001/jama.2016.17217] [Medline: 27898974]
12. Lu Y, Xiao Y, Sears A, Jacko JA. A review and a framework of handheld computer adoption in healthcare. Int J Med
Inform 2005 Jun;74(5):409-422. [doi: 10.1016/j.ijmedinf.2005.03.001] [Medline: 15893264]
13. Or CK, Karsh B. A systematic review of patient acceptance of consumer health information technology. J Am Med Inform Assoc 2009 Jul 01;16(4):550-560. [doi: 10.1197/jamia.m2888]
14. Yusif S, Soar J, Hafeez-Baig A. Older people, assistive technologies, and the barriers to adoption: a systematic review. Int
J Med Inform 2016 Oct;94:112-116. [doi: 10.1016/j.ijmedinf.2016.07.004] [Medline: 27573318]
15. Ethics guidelines for trustworthy AI. European Commission. URL: https://ec.europa.eu/futurium/en/ai-alliance-consultation.1.html [accessed 2021-12-28]
16. Ting DS, Liu Y, Burlina P, Xu X, Bressler NM, Wong TY. AI for medical imaging goes deep. Nat Med 2018
May;24(5):539-540. [doi: 10.1038/s41591-018-0029-3] [Medline: 29736024]
17. Quiroz JC, Laranjo L, Kocaballi AB, Berkovsky S, Rezazadegan D, Coiera E. Challenges of developing a digital scribe to
reduce clinical documentation burden. NPJ Digit Med 2019;2:114 [FREE Full text] [doi: 10.1038/s41746-019-0190-1]
[Medline: 31799422]
18. Lysaght T, Lim HY, Xafis V, Ngiam KY. AI-assisted decision-making in healthcare: the application of an ethics framework
for big data in health and research. Asian Bioeth Rev 2019 Sep;11(3):299-314 [FREE Full text] [doi:
10.1007/s41649-019-00096-0] [Medline: 33717318]
19. Delipetrev B, Tsinaraki C, Kostic U. Historical Evolution of Artificial Intelligence. Luxembourg: Publications Office of
the European Union; 2020.
20. Arksey H, O'Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol 2005 Feb;8(1):19-32.
[doi: 10.1080/1364557032000119616]
21. Tricco AC, Lillie E, Zarin W, O'Brien KK, Colquhoun H, Levac D, et al. PRISMA Extension for Scoping Reviews
(PRISMA-ScR): checklist and explanation. Ann Intern Med 2018 Oct 02;169(7):467-473 [FREE Full text] [doi:
10.7326/M18-0850] [Medline: 30178033]
22. Gao S, He L, Chen Y, Li D, Lai K. Public perception of artificial intelligence in medical care: content analysis of social
media. J Med Internet Res 2020 Jul 13;22(7):e16649 [FREE Full text] [doi: 10.2196/16649] [Medline: 32673231]
23. Milne-Ives M, de Cock C, Lim E, Shehadeh MH, de Pennington N, Mole G, et al. The effectiveness of artificial intelligence
conversational agents in health care: systematic review. J Med Internet Res 2020 Oct 22;22(10):e20346 [FREE Full text]
[doi: 10.2196/20346] [Medline: 33090118]
24. Prakash A, Das S. Intelligent conversational agents in mental healthcare services: a thematic analysis of user perceptions.
Pacific Asia J Assoc Inf Syst 2020:1-34 [FREE Full text]
25. Griffin A, Xing Z, Mikles S, Bailey S, Khairat S, Arguello J, et al. Information needs and perceptions of chatbots for
hypertension medication self-management: a mixed methods study. JAMIA Open 2021 Apr;4(2):ooab021 [FREE Full text]
[doi: 10.1093/jamiaopen/ooab021] [Medline: 33898936]
26. Nadarzynski T, Miles O, Cowie A, Ridge D. Acceptability of artificial intelligence (AI)-led chatbot services in healthcare:
a mixed-methods study. Digit Health 2019;5:2055207619871808 [FREE Full text] [doi: 10.1177/2055207619871808]
[Medline: 31467682]
27. Palanica A, Flaschner P, Thommandram A, Li M, Fossat Y. Physicians' perceptions of chatbots in health care: cross-sectional
web-based survey. J Med Internet Res 2019 Apr 05;21(4):e12887 [FREE Full text] [doi: 10.2196/12887] [Medline: 30950796]
28. Abdi S, Witte LD, Hawley M. Exploring the potential of emerging technologies to meet the care and support needs of older
people: a delphi survey. Geriatrics (Basel) 2021 Feb 13;6(1):19 [FREE Full text] [doi: 10.3390/geriatrics6010019] [Medline:
33668557]
29. Abdullah R, Fakieh B. Health care employees' perceptions of the use of artificial intelligence applications: survey study. J
Med Internet Res 2020 May 14;22(5):e17620 [FREE Full text] [doi: 10.2196/17620] [Medline: 32406857]
30. Baldauf M, Fröehlich P, Endl R. Trust me, I’m a doctor – user perceptions of AI-driven apps for mobile health diagnosis.
In: Proceedings of the 19th International Conference on Mobile and Ubiquitous Multimedia. 2020 Presented at: MUM
2020: 19th International Conference on Mobile and Ubiquitous Multimedia; Nov 22 - 25, 2020; Essen Germany. [doi:
10.1145/3428361.3428362]
31. Easton K, Potter S, Bec R, Bennion M, Christensen H, Grindell C, et al. A virtual agent to support individuals living with
physical and mental comorbidities: co-design and acceptability testing. J Med Internet Res 2019 May 30;21(5):e12996
[FREE Full text] [doi: 10.2196/12996] [Medline: 31148545]
32. Li W, Fan X, Zhu H, Wu J, Teng D. Research on the influencing factors of user trust based on artificial intelligence self
diagnosis system. In: Proceedings of the ACM Turing Celebration Conference. 2020 Presented at: ACM Turing Celebration
Conference; May 22 - 24, 2020; Hefei China. [doi: 10.1145/3393527.3393561]
33. Xiang Y, Zhao L, Liu Z, Wu X, Chen J, Long E, et al. Implementation of artificial intelligence in medicine: status analysis
and development suggestions. Artif Intell Med 2020 Jan;102:101780. [doi: 10.1016/j.artmed.2019.101780] [Medline:
31980086]
34. Liu T, Tsang W, Xie Y, Tian K, Huang F, Chen Y, et al. Preferences for artificial intelligence clinicians before and during
the COVID-19 pandemic: discrete choice experiment and propensity score matching study. J Med Internet Res 2021 Mar
02;23(3):e26997 [FREE Full text] [doi: 10.2196/26997] [Medline: 33556034]
35. Scheetz J, Rothschild P, McGuinness M, Hadoux X, Soyer HP, Janda M, et al. A survey of clinicians on the use of artificial
intelligence in ophthalmology, dermatology, radiology and radiation oncology. Sci Rep 2021 Mar 04;11(1):5193 [FREE
Full text] [doi: 10.1038/s41598-021-84698-5] [Medline: 33664367]
36. McCradden MD, Baba A, Saha A, Ahmad S, Boparai K, Fadaiefard P, et al. Ethical concerns around use of artificial
intelligence in health care research from the perspective of patients with meningioma, caregivers and health care providers:
a qualitative study. CMAJ Open 2020;8(1):90-95 [FREE Full text] [doi: 10.9778/cmajo.20190151] [Medline: 32071143]
37. McCradden MD, Sarker T, Paprica PA. Conditionally positive: a qualitative study of public perceptions about using health
data for artificial intelligence research. BMJ Open 2020 Oct 28;10(10):e039798 [FREE Full text] [doi:
10.1136/bmjopen-2020-039798] [Medline: 33115901]
38. Tam-Seto L, Wood VM, Linden B, Stuart H. Perceptions of an AI-supported mobile app for military health in the Canadian
armed forces. Milit Behav Health 2020 Nov 13;9(3):247-254. [doi: 10.1080/21635781.2020.1838364]
39. Liu T, Tsang W, Huang F, Lau OY, Chen Y, Sheng J, et al. Patients' preferences for artificial intelligence applications
versus clinicians in disease diagnosis during the SARS-CoV-2 pandemic in China: discrete choice experiment. J Med
Internet Res 2021 Feb 23;23(2):e22841 [FREE Full text] [doi: 10.2196/22841] [Medline: 33493130]
40. Sun TQ, Medaglia R. Mapping the challenges of Artificial Intelligence in the public sector: evidence from public healthcare.
Govern Inform Q 2019 Apr;36(2):368-383. [doi: 10.1016/j.giq.2018.09.008]
41. Laï M, Brian M, Mamzer M. Perceptions of artificial intelligence in healthcare: findings from a qualitative survey study
among actors in France. J Transl Med 2020 Jan 09;18(1):14 [FREE Full text] [doi: 10.1186/s12967-019-02204-y] [Medline:
31918710]
42. Okolo C, Kamath S, Dell N, Vashistha A. “It cannot do all of my work”: community health worker perceptions of AI-enabled
mobile health applications in rural India. In: Proceedings of the 2021 CHI Conference on Human Factors in Computing
Systems. 2021 Presented at: CHI Conference on Human Factors in Computing Systems; May 8 - 13, 2021; Yokohama
Japan. [doi: 10.1145/3411764.3445420]
43. Castagno S, Khalifa M. Perceptions of artificial intelligence among healthcare staff: a qualitative survey study. Front Artif
Intell 2020 Oct 21;3:578983 [FREE Full text] [doi: 10.3389/frai.2020.578983] [Medline: 33733219]
44. Liyanage H, Liaw S, Jonnagaddala J, Schreiber R, Kuziemsky C, Terry AL, et al. Artificial intelligence in primary health
care: perceptions, issues, and challenges. Yearb Med Inform 2019 Aug;28(1):41-46 [FREE Full text] [doi:
10.1055/s-0039-1677901] [Medline: 31022751]
45. Stai B, Heller N, McSweeney S, Rickman J, Blake P, Vasdev R, et al. Public perceptions of artificial intelligence and
robotics in medicine. J Endourol 2020 Oct;34(10):1041-1048 [FREE Full text] [doi: 10.1089/end.2020.0137] [Medline:
32611217]
46. Zhang Z, Citardi D, Wang D, Genc Y, Shan J, Fan X. Patients' perceptions of using artificial intelligence (AI)-based
technology to comprehend radiology imaging data. Health Informatics J 2021;27(2):14604582211011215 [FREE Full text]
[doi: 10.1177/14604582211011215] [Medline: 33913359]
47. Kim H. An analysis of the need for aid tools in dementia patients: focusing on the normal elderly, dementia patients, and
caregivers of dementia patients. Ind J Public Health Res Develop 2019;10(11):4399. [doi: 10.5958/0976-5506.2019.04300.6]
48. Kim S, Kim J, Badu-Baiden F, Giroux M, Choi Y. Preference for robot service or human service in hotels? Impacts of the
COVID-19 pandemic. Int J Hospitality Manag 2021 Feb;93:102795. [doi: 10.1016/j.ijhm.2020.102795]
49. Nahum-Shani I, Smith S, Spring B, Collins L, Witkiewitz K, Tewari A, et al. Just-in-Time Adaptive Interventions (JITAIs)
in mobile health: key components and design principles for ongoing health behavior support. Ann Behav Med 2018 May
18;52(6):446-462 [FREE Full text] [doi: 10.1007/s12160-016-9830-8] [Medline: 27663578]
50. Panch T, Mattie H, Celi LA. The "inconvenient truth" about AI in healthcare. NPJ Digit Med 2019;2:77 [FREE Full text]
[doi: 10.1038/s41746-019-0155-4] [Medline: 31453372]
51. Roosan D, Chok J, Karim M, Law AV, Baskys A, Hwang A, et al. Artificial intelligence-powered smartphone app to
facilitate medication adherence: protocol for a human factors design study. JMIR Res Protoc 2020 Nov 09;9(11):e21659
[FREE Full text] [doi: 10.2196/21659] [Medline: 33164898]
52. Davis CR, Murphy KJ, Curtis RG, Maher CA. A process evaluation examining the performance, adherence, and acceptability
of a physical activity and diet artificial intelligence virtual health assistant. Int J Environ Res Public Health 2020 Dec
07;17(23):9137 [FREE Full text] [doi: 10.3390/ijerph17239137] [Medline: 33297456]
53. Meskó B, Hetényi G, Győrffy Z. Will artificial intelligence solve the human resource crisis in healthcare? BMC Health
Serv Res 2018 Jul 13;18(1):545 [FREE Full text] [doi: 10.1186/s12913-018-3359-4] [Medline: 30001717]
54. Ethics and Governance of Artificial Intelligence for Health: WHO Guidance. Geneva: World Health Organization; 2021.
55. White paper on Artificial Intelligence: a European approach to excellence and trust. European Commission. 2020. URL:
https://ec.europa.eu/info/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en
[accessed 2021-12-28]
56. Lee D, Yoon SN. Application of artificial intelligence-based technologies in the healthcare industry: opportunities and
challenges. Int J Environ Res Public Health 2021 Jan 01;18(1):271 [FREE Full text] [doi: 10.3390/ijerph18010271] [Medline:
33401373]
57. Wahl B, Cossy-Gantner A, Germann S, Schwalbe NR. Artificial intelligence (AI) and global health: how can AI contribute
to health in resource-poor settings? BMJ Glob Health 2018;3(4):e000798 [FREE Full text] [doi: 10.1136/bmjgh-2018-000798]
[Medline: 30233828]
58. Shaban-Nejad A, Michalowski M, Buckeridge D. Health intelligence: how artificial intelligence transforms population and
personalized health. NPJ Digit Med 2018 Oct 2;1:53 [FREE Full text] [doi: 10.1038/s41746-018-0058-9] [Medline:
31304332]
59. Chaves AP, Gerosa MA. How should my chatbot interact? A survey on social characteristics in human–chatbot interaction
design. Int J Hum Comput Interact 2020 Nov 08;37(8):729-758. [doi: 10.1080/10447318.2020.1841438]
60. Teaching tech to talk: K-12 conversational artificial intelligence literacy curriculum and development tools. arXiv. 2020.
URL: https://arxiv.org/abs/2009.05653 [accessed 2021-12-28]
61. Wood EA, Ange BL, Miller DD. Are we ready to integrate artificial intelligence literacy into medical school curriculum:
students and faculty survey. J Med Educ Curric Dev 2021 Jun 23;8:23821205211024078 [FREE Full text] [doi:
10.1177/23821205211024078] [Medline: 34250242]
62. Venkatesh V, Davis FD. A theoretical extension of the technology acceptance model: four longitudinal field studies. Manag
Sci 2000 Feb;46(2):186-204. [doi: 10.1287/mnsc.46.2.186.11926]
63. Marangunić N, Granić A. Technology acceptance model: a literature review from 1986 to 2013. Univ Access Inf Soc 2014
Feb 16;14(1):81-95. [doi: 10.1007/s10209-014-0348-1]
64. Kaminski J. Diffusion of innovation theory. Can J Nurs Inform 2011;6(2):1-6. [doi: 10.1097/cin.0000000000000072]
65. Kwon H, Chidambaram L. A test of the technology acceptance model: the case of cellular telephone adoption. In: Proceedings
of the 33rd Annual Hawaii International Conference on System Sciences. 2000 Presented at: 33rd Annual Hawaii International
Conference on System Sciences; Jan 7, 2000; Maui, HI, USA. [doi: 10.1109/hicss.2000.926607]
66. Alhashmi S, Salloum S, Mhamdi C. Implementing artificial intelligence in the United Arab Emirates healthcare sector: an
extended technology acceptance model. Int J Inf Technol Lang Stud 2019:27-42 [FREE Full text]
67. Makri A. Bridging the digital divide in health care. Lancet Digit Health 2019 Sep;1(5):204-205. [doi:
10.1016/s2589-7500(19)30111-6]
68. Thinking on its own: AI in the NHS. Reform Research Trust. URL: https://reform.uk/research/thinking-its-own-ai-nhs
[accessed 2021-12-28]
Abbreviations
AGI: artificial general intelligence
AI: artificial intelligence
ANI: artificial narrow intelligence
IT: information technology
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
Edited by A Mavragani; submitted 16.08.21; peer-reviewed by N Tom, K Ludlow, S Hong; comments to author 04.10.21; revised
version received 08.11.21; accepted 03.12.21; published 14.01.22
Please cite as:
Chew HSJ, Achananuparp P
Perceptions and Needs of Artificial Intelligence in Health Care to Increase Adoption: Scoping Review
J Med Internet Res 2022;24(1):e32939
URL: https://www.jmir.org/2022/1/e32939
doi: 10.2196/32939
PMID:
©Han Shi Jocelyn Chew, Palakorn Achananuparp. Originally published in the Journal of Medical Internet Research
(https://www.jmir.org), 14.01.2022. This is an open-access article distributed under the terms of the Creative Commons Attribution
License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete
bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license
information must be included.
Background: The emerging Artificial Intelligence (AI) based Conversational Agents (CA) capable of delivering evidence-based psychotherapy presents a unique opportunity to solve longstanding issues such as social stigma and demand-supply imbalance associated with traditional mental health care services. However, the emerging literature points to several socio-ethical challenges which may act as inhibitors to the adoption in the minds of the consumers. We also observe a paucity of research focusing on determinants of adoption and use of AI-based CAs in mental healthcare. In this setting, this study aims to understand the factors influencing the adoption and use of Intelligent CAs in mental healthcare by examining the perceptions of actual users. Method: The study followed a qualitative approach based on netnography and used a rigorous iterative thematic analysis of publicly available user reviews of popular mental health chatbots to develop a comprehensive framework of factors influencing the user’s decision to adopt mental healthcare CA. Results: We developed a comprehensive thematic map comprising of four main themes, namely, perceived risk, perceived benefits, trust, and perceived anthropomorphism, along with its 12 constituent subthemes that provides a visualization of the factors that govern the user’s adoption and use of mental healthcare CA. Conclusions: Insights from our research could guide future research on mental healthcare CA use behavior. Additionally, it could also aid designers in framing better design decisions that meet consumer expectations. Our research could also guide healthcare policymakers and regulators in integrating this technology into formal healthcare delivery systems. Available at: https://aisel.aisnet.org/pajais/vol12/iss2/1/ Recommended Citation Prakash, Ashish Viswanath and Das, Saini (2020) "Intelligent Conversational Agents in Mental Healthcare Services: A Thematic Analysis of User Perceptions," Pacific Asia Journal of the Association for Information Systems: Vol. 12: Iss. 2, Article 1. DOI: 10.17705/1pais.12201