Review of the Factors Affecting Acceptance of
AI-Infused Systems
Ulugbek Vahobjon Ugli Ismatullaev and Sang-Ho Kim, PhD, Department of Industrial Engineering, Kumoh National Institute of Technology, South Korea
Objective: The study aimed to provide a comprehensive overview of the factors impacting technology adoption, to predict the acceptance of artificial intelligence (AI)-based technologies.
Background: Although the acceptance of AI devices is usually defined by behavioural factors in theories of user acceptance, the effects of technical and human factors are often overlooked. However, research shows that user behaviour can vary depending on a system's technical characteristics and differences among users.
Method: A systematic review was conducted. A total of 85 peer-reviewed journal articles that met the inclusion criteria and provided information on the factors influencing the adoption of AI devices were selected for the analysis.
Results: Research on the adoption of AI devices shows that users' attitudes, trust and perceptions of the technology can be improved by increasing transparency, compatibility and reliability, and by simplifying tasks. Moreover, technological factors are also important for reducing issues related to human factors (e.g. distrust, scepticism, inexperience) and supporting users with lower intention to use and lower trust in AI-infused systems.
Conclusion: As prior research has confirmed the interrelationship among factors with and without behaviour theories, this review suggests extending the technology acceptance model by integrating the factors studied in this review to define the acceptance of AI devices across different application areas. However, further research is needed to collect more data and validate the study's findings.
Application: A comprehensive overview of the factors influencing the acceptance of AI devices could help researchers and practitioners evaluate user behaviour when adopting new technologies.
Keywords: technology adoption, human factors, autonomous
driving, robotics, healthcare
BACKGROUND
Although rapid advances in machine learning have made it increasingly applicable to expert decision-making, the delivery of accurate algorithmic predictions is insufficient for effective human–artificial intelligence (AI) collaboration (Cai et al., 2019). Therefore, research focusing on factors that ensure successful interactions with AI technologies has been increasing. Furthermore, research on human–AI interaction has become more significant in different fields in which AI tools are widely used, such as autonomous driving (AD), healthcare, robotics and education.
Although AI has numerous applications that can offer significant benefits, existing studies indicate that there is a need for an independent evaluation of the factors influencing AI use. At present, many published studies on AI focus on the technology itself, as developers are keen to demonstrate that the technology works, while the effects of other factors on technology acceptance remain unclear (Sujan et al., 2020). Some existing studies have examined psychological factors integrated into behaviour theories or related to user behaviour (e.g. attitude, perceived usefulness (PU), perceived ease of use (PEOU), trust) to explain technology acceptance (Jing et al., 2020). However, prior studies suggest that research on human factors related to user demographics and cognitive aspects should accompany AI development from the outset (Sujan et al., 2020). Furthermore, it may be helpful to consider additional factors related to human values in establishing successful interactions with AI systems (de Visser et al., 2020). Currently, a lack of trust in AI systems is a significant barrier to technology adoption. Trust in AI can be influenced by several human factors (e.g. education, experience, perception) and properties of the AI system (e.g. transparency, complexity, controllability) (Asan et al., 2020). Thus, investigating the influence
of both technological and human factors on user
behaviour towards AI systems could greatly help
to predict the acceptance of AI-infused systems
among different user groups. Therefore, the
present study aimed to analyse the effects of
behavioural, technological and human factors on
technology adoption.
TECHNOLOGY ACCEPTANCE THEORIES
Numerous models and frameworks have been developed to explain user adoption of new technologies (Taherdoost, 2018). The theory of reasoned action, developed by Fishbein and Ajzen (1975), suggests that behaviour is determined by the intention to perform that behaviour, and this intention is, in turn, a function of individual attitudes towards the behaviour and subjective norms (SNs). Similarly, the technology acceptance model (TAM) suggests that technology acceptance can be determined by a user's behavioural intention (BI), also known as the intention of use (IU) of the technology. The TAM explains user intention to apply the technology based on three factors: PU, PEOU and attitude. Therefore, TAM contains not only BI but also two central beliefs, PU and PEOU, which have a considerable impact on user attitude and intention (Taherdoost, 2018). More recently, a theory with clear similarity to TAM, the Unified Theory of Acceptance and Use of Technology (UTAUT), was developed in an impressive attempt to merge the literature on information technology acceptance (Venkatesh et al., 2003). Venkatesh et al. (2003) changed PU into a performance expectancy construct, PEOU into effort expectancy, and SNs into social influence (SI), and added facilitating conditions as a fourth construct. All four constructs are moderated by gender, age, experience and voluntariness of use (Taherdoost, 2018).
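As a compact reading aid, TAM's structural relations (a standard schematic following Davis (1989), not equations reported in this review) can be written as a set of linear structural equations, where the β terms are path coefficients estimated in a given study:

$$
\begin{aligned}
\mathrm{PU} &= \beta_{1}\,\mathrm{PEOU} + \varepsilon_{1}\\
\mathrm{ATT} &= \beta_{2}\,\mathrm{PU} + \beta_{3}\,\mathrm{PEOU} + \varepsilon_{2}\\
\mathrm{BI} &= \beta_{4}\,\mathrm{ATT} + \beta_{5}\,\mathrm{PU} + \varepsilon_{3}
\end{aligned}
$$

Actual use then follows from BI; UTAUT substitutes the predictors as described above and adds the four moderating variables.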
Several researchers have reviewed the factors that influence user acceptance of AI devices in the AD field. In particular, Jing et al. (2020) used behaviour theories to examine factors affecting the acceptance intention of autonomous vehicles (AVs), Becker et al. (2019) investigated the heterogeneity of AV preferences among different groups by comparing survey methods used to assess AV acceptance, and Peek et al. (2014) studied the factors influencing technology acceptance among community-dwelling older adults. However, to the best of our knowledge, the general acceptance of AI-based technologies across different application fields (e.g. healthcare, robotics) has not been extensively reviewed. In addition, the effects of technological, human and behavioural factors on AI adoption have not been widely examined in technology acceptance-related research.
Thus, the present study had two objectives. The first objective was to investigate the factors affecting the adoption of AI-infused systems across different application domains. The second objective was to provide a comprehensive review of the relationships among technological, human and behavioural factors and their impact on user acceptance of AI technologies. In addition, based on the findings, this study aimed to draw implications that could be applicable in further research.
METHOD
In this study, we employed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) model developed by Moher et al. (2009). Using PRISMA for systematic reviews is beneficial because it reduces the risk of including an excessive number of studies addressing the same question (Bagshaw et al., 2006; Biondi-Zoccai et al., 2006) and provides greater transparency when updating systematic reviews. Figure 1 shows the steps taken and the PRISMA model applied in the present study.
Using keywords such as "artificial intelligence", "technology adoption", "acceptance" and "intention of use", we searched the IEEE Xplore, Springer Link and Google Scholar databases, which include articles on engineering, computer science and transportation. First, we searched Springer Link and IEEE Xplore using the search string "artificial intelligence" AND "intention" AND ("adoption" OR "acceptance"). Second, on Google Scholar, we used "artificial intelligence" AND each of the other keywords ("intention", "adoption" and "acceptance") individually to search for unique papers related to the study. Across all databases, only papers that pertained to AI and used at least one phrase related to acceptance and IU were included in the analysis. Further inclusion criteria used to identify eligible articles were a focus on AI technologies, attention to factors influencing technology adoption, and publication in English, while the exclusion criteria were duplicate studies and studies published before 2000.
The last search was performed on December 1, 2020, and the included studies were limited to that date. A total of 464 publications were found based on the keywords above: 161 from IEEE Xplore, 145 from Springer Link and 158 from Google Scholar. As mentioned above, a different search strategy with the same keywords was used on Google Scholar compared with the other databases. As a result, 78 duplicate papers were identified and excluded from the analysis. Then, 386 publications were screened based on their abstracts to verify their relevance to the study objectives; 249 of these publications were excluded after screening. Next, 137 full-text articles were assessed for eligibility against the inclusion and exclusion criteria. Finally, 85 publications related to AD, healthcare and robotics were selected for further analysis. A total of 44 publications were found in the most studied application area, AD, followed by 25 and 16 publications in robotics and healthcare, respectively (Figure 2).
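The screening arithmetic above can be reproduced in a few lines. The sketch below is illustrative only (it is not the authors' tooling; the dictionary literals simply restate the counts reported in this section):

```python
# Reproduce the PRISMA flow counts reported in the Method section.
identified = {"IEEE Xplore": 161, "Springer Link": 145, "Google Scholar": 158}

total = sum(identified.values())   # 464 records identified
after_dedup = total - 78           # 386 remain after removing duplicates
full_text = after_dedup - 249      # 137 full texts assessed for eligibility
included = 85                      # final set meeting all criteria

by_domain = {"autonomous driving": 44, "robotics": 25, "healthcare": 16}

assert (total, after_dedup, full_text) == (464, 386, 137)
assert sum(by_domain.values()) == included
```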
Although papers on the acceptance of AI devices have also been published in other fields, particularly education and telecommunications, the number of studies in these areas (fewer than five in total) differed sharply from that of research in the selected application domains. Thus, papers belonging to other application domains were excluded, and the application areas were limited to three for the analysis. We also considered that increasing the number of application areas would complicate the review analysis and prevent accurate conclusions from being drawn across areas.

Figure 1. Flow diagram of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses model used in the present study.

Figure 2. Proportion of literature by application area.
RESULTS
In total, 85 papers met the inclusion criteria and were included in the final analysis. Furthermore, factors were evaluated, and major observations were addressed based on their effects on AI acceptance. Table 1 provides conceptual definitions of the key factors that impact technology acceptance. It should be noted that the definitions of the factors are derived from the original source where each term was first used. Thus, these sources were not included in the papers selected for review.

TABLE 1: Key variables in technology acceptance and their conceptual definitions

Actual use: The end point at which people use the technology (Davis, 1989).
Intention of use (IU): The strength of an individual's intention to perform a specific behaviour or use the technology (Davis, 1989).
Attitude: An individual's positive or negative feelings (evaluative affect) regarding performing the target behaviour (Fishbein & Ajzen, 1975).
Perceived usefulness (PU): The degree to which an individual believes that using a particular system would enhance job performance (Davis, 1989).
Performance expectancy (PE): The degree to which an individual perceives that using a system will help improve job performance (Venkatesh et al., 2003).
Perceived ease of use (PEOU): The degree to which an individual believes that using a particular system would be free of effort (Davis, 1989).
Effort expectancy (EE): The degree of ease associated with using the system (Venkatesh et al., 2003).
Subjective norms (SNs): An individual's perception that most people who are important to him or her think he or she should or should not perform the behaviour (Fishbein & Ajzen, 1975).
Trust: The willingness of an individual to be vulnerable to the actions of another party based on the expectation that the other party will perform a particular action important to the trustor, irrespective of the ability to monitor or control that party (Mayer et al., 1995).
Perceived benefit (PB): The perception of positive consequences resulting from a specific action (Leung, 2013).
Perceived risk (PR): A combination of uncertainty and seriousness of the outcome involved and associated with each product category (Bauer, 1960).
Transparency: Allows individuals to see whether models have been thoroughly tested and make sense, and to understand why particular decisions are made by AI tools (Nicodeme, 2020).
Reliability: Whether the AI technology can perform a task predictably and consistently (Asan et al., 2020).
Complexity: The degree of perceived difficulty and cognitive load required to accomplish a given task (Fan et al., 2020).
Compatibility: The degree to which a technology complies with the technical functionalities of other existing products (e.g. smartphones and tablets) and with the needs and lifestyles of users (Li et al., 2019).
FACTORS AFFECTING ACCEPTANCE OF
AI DEVICES IN STUDIES WITH
BEHAVIOUR THEORIES
This section discusses the effects of key factors in behaviour theories on IU of AI systems and highlights the relationships among them. Figure 3 shows the results of the frequency analysis of the main factors found in behaviour theories: attitude, PU, PEOU, SNs, trust, perceived benefits (PBs) and perceived risks (PRs). Frequency was calculated based on the number of references that emphasised the effects of each factor on AI device acceptance.

Figure 3. Frequency analysis of the factors in behaviour theories of technology acceptance. Note. ATT: attitude; PU: perceived usefulness; PEOU: perceived ease of use; SN: subjective norms; TRU: trust; PB: perceived benefit; PR: perceived risk.

Figure 3 shows that PU and PEOU, which are the basic constructs of TAM, have been extensively studied in each application area considered, followed by PR, SNs and trust. The effect of perceived benefit (PB) on BI to use AI tools has been explored relatively less than other factors in technology acceptance studies. Regarding AD, PU has been studied most extensively, followed by PEOU, PR and trust. In robotics, the impact of PU on technology acceptance was the most discussed, while PEOU, attitude and SNs were the next most studied factors. PEOU was the most studied factor in healthcare, followed by PU, PR and SNs.
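How such a frequency analysis can be computed is sketched below. This is a minimal illustration assuming a simple coding scheme; the entries are placeholders, not the review's actual coded data:

```python
# Each reviewed paper is coded with the factors whose effect on acceptance
# it emphasises; frequency is the number of papers citing each factor.
from collections import Counter

coded_papers = [  # placeholder entries for illustration
    {"domain": "AD", "factors": {"PU", "PEOU", "TRU"}},
    {"domain": "robotics", "factors": {"PU", "ATT", "SN"}},
    {"domain": "healthcare", "factors": {"PEOU", "PU", "PR"}},
]

overall = Counter(f for p in coded_papers for f in p["factors"])
domains = {p["domain"] for p in coded_papers}
by_domain = {
    d: Counter(f for p in coded_papers if p["domain"] == d for f in p["factors"])
    for d in domains
}
print(overall.most_common())  # e.g. [('PU', 3), ('PEOU', 2), ...]
```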
Attitude
In addition to the definition by Fishbein and Ajzen (1975), it has been argued that attitude is an accumulation of information regarding an object, person, situation or experience, leading to a predisposition to act in a positive or negative way towards some object or technology (Littlejohn, 2002). Thus, users may behave or react based on their positive or negative attitude towards a specific technology. The results of this review confirmed that attitude positively influences users' IU of AI-infused systems across all studied application areas. For example, people with more positive attitudes towards autonomous shuttles as an alternative to public transport are more likely to use them in the future (Chen, 2019; Man et al., 2020). In addition, it was found that physicians with more positive attitudes tend to exhibit greater IU for AI technologies (Alhashmi et al., 2020). Furthermore, attitude was found to affect perceived privacy risk (PPR). Therefore, it was suggested that providing users with control over their personal information will reduce PPR and increase positive attitudes towards AVs (Walter & Abendroth, 2020).
Some specific relationships were also found between attitude and human factors across application areas. Young people were observed to have more positive attitudes compared with other age groups, as they are more likely to be open to AVs recording and using their data (Biermann et al., 2020). In healthcare, Zhang et al. (2014) showed that, for men with more positive attitudes, increases in the level of SNs have a decreasing marginal impact on adoption intention; however, this was not confirmed for women. In interactions with social assistant robots, users with high levels of technology experience were likely to have more positive attitudes towards these robots (Gessl et al., 2019). Research further suggests that attitude has a weaker effect on IU than PU in the acceptance of service robots in Korea, implying that users in Korea were more concerned with usefulness when adopting service robots (Park et al., 2013).
PU and PEOU
PU and PEOU showed a high level of influence on AI device acceptance in each application area. In particular, for AVs, users were more concerned regarding the usefulness rather
than the ease of use (Choi & Ji, 2015). For individuals already familiar with driving, using an AV does not seem to be a difficult task (Choi & Ji, 2015; J. Lee, D. Lee, et al., 2019; S. Lee, Ratan, et al., 2019). Moreover, in the adoption of ophthalmic AI devices, the public might perceive them as intelligent, newly developed products and believe that they should be convenient and easy to use (Ye et al., 2019). However, some contradictory results were also found. In particular, Walter and Abendroth (2020) and Garidis et al. (2020) reported that PU did not have a significant effect on IU of AVs, while Chen (2019) found PU had much less of an effect than PEOU on attitude.
The positive effects of PEOU and PU on trust in AI-based technologies in healthcare were confirmed (Fan et al., 2020; Zhang et al., 2017). In addition, regarding robotics, Heerink et al. (2009) found that the social abilities of robotic systems could help older adults perceive the system as more useful and effortless, which, in turn, could help increase the acceptance of robots and screen agents. Ezer et al. (2009) emphasised that both younger and older age groups would be willing to use and accept robotic technologies in their homes, as long as they perceive that using these technologies would be beneficial and not too difficult.
SNs
Subjective norms were found to have a positive impact on adoption intention and PU, and a negative impact on PRs of AVs (Zhu et al., 2020; Zmud & Sener, 2017). In particular, the media was shown to be a critical influence on AD adoption intention (C. Lee, B. Seppelt, et al., 2019; Zhu et al., 2020). While C. Lee, B. Seppelt, et al. (2019) found that frequent media reporting of accidents involving AVs in most cases negatively affected users' attitudes towards the technology, Zhu et al. (2020) argued that the more impressions users received from the media, the greater their self-efficacy regarding AV use. In addition, users were mostly influenced by family members, peers, friends and colleagues to use AVs and mobile health devices (Hoque, 2016; Walter & Abendroth, 2020; Ye et al., 2019; Yuen, Thi, et al., 2020). A negative relationship was observed between SNs and trust, implying that the more trust people have in their IU for AVs, the less they will be influenced by family members, friends and peers (Panagiotopoulos & Dimitrakopoulos, 2018).
T. Zhang, H. Tan, et al. (2019) emphasised that those who perceived their friends as having a more positive attitude towards AVs had higher PEOU and PU of AVs, showed higher trust, and were more likely to use AVs when available. This is expected to have a greater impact in collectivistic cultures, owing to group conformity. Likewise, Ye et al. (2019) emphasised that SNs are the most important predictor of IU and confirmed the effect of SNs on PU in the adoption of ophthalmic AI devices. According to Ye et al. (2019), in China, when individuals encounter new technologies, such as ophthalmic AI devices, PU, PEOU and IU are likely to be influenced by the perceptions of close friends and relatives, colleagues, peers and superiors or leaders in the workplace because of crowd mentality and collectivistic culture.
In addition, SNs were observed to be related to age and gender. Li et al. (2019) found that, in terms of SI, the attitudes and decisions of older adults in using wearable smart systems were significantly influenced by the opinions of their children and grandchildren. For men with highly positive attitudes, increases in the level of SNs have a decreasing marginal impact on IU for intelligent health devices (Zhang et al., 2014). Although there were several studies on SNs in robotics (Gessl et al., 2019; Heerink et al., 2006; L. Y. Lee, W. M. Lim, et al., 2020; Piasek & Wieczorowska-Tobis, 2018), no significant impact of SI on IU for robots was found.
Trust
Trust has been found to exhibit strong direct and indirect effects on BI and overall acceptance of AVs and AI-based healthcare technologies. It was confirmed that trust has a significant direct effect on BI in AD and healthcare (Dirsehan & Can, 2020; Fan et al., 2020; Nadarzynski et al., 2019), and it was the most important, but not the only, direct antecedent of BI to use AVs (Panagiotopoulos & Dimitrakopoulos, 2018; Zhang, Tan, et al., 2019). Zhang, Tao, et al. (2019) and Man et al. (2020) found that trust has a stronger effect than PU and PEOU on attitudes towards AVs, which was explained by the fact that, in uncertain situations, trust is a solution for the specific problems related to the risk factors. Trust showed a significant moderating effect on the path from PU to IU for AI-based healthcare technologies, indicating that if people's trust in AI technologies improves, these users will adopt AI technology even if the devices are not as useful as expected (Ye et al., 2019). However, contradictory results were also found. While the effects of trust on IU and PR in AD were not supported by Walter and Abendroth (2020), Chen (2019) found that the path from trust to IU was not significant, although trust positively influenced attitude. Age and gender were also found to be related to trust in AD. It was observed that it is more difficult to gain the trust of older participants than younger ones, while women showed higher trust than men in using AVs (Chen, 2019; J. Lee, Abe, et al., 2020). No significant effects of trust on the acceptance of robotic devices were found.
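The moderating role of trust can be made concrete with a generic regression specification (an illustrative formulation, not one reported in the reviewed studies), in which trust enters both directly and through an interaction with PU:

$$
\mathrm{IU} = \beta_{0} + \beta_{1}\,\mathrm{PU} + \beta_{2}\,\mathrm{Trust} + \beta_{3}\,(\mathrm{PU} \times \mathrm{Trust}) + \varepsilon
$$

Here the marginal effect of PU on IU is $\beta_{1} + \beta_{3}\,\mathrm{Trust}$, so the strength of the PU–IU path varies with the user's level of trust, consistent with the finding that high-trust users report high IU even when PU is modest.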
Perceived Benefit
Perceived benefit showed significant effects on the adoption of AVs and AI-related healthcare devices. Liu et al. (2019) argued that PB is a stronger predictor of fully automated driving acceptance than PR. Moreover, Manfreda et al. (2019) found that PB had the greatest impact on AV adoption, whereas perceived safety can significantly reduce perceived problems with AV acceptance. In healthcare, users tend to use and accept medical technologies when they perceive the benefits as exceeding the risk of privacy loss (Li et al., 2019). In addition, a direct effect of PB on IU for healthcare wearable devices was observed. Therefore, it was suggested that focusing more attention on shaping users' perceptions of the potential benefits they could receive from technology could greatly facilitate increased acceptance (Brell et al., 2019a, 2019b).
PR
Perceived safety risk. Several findings were noted regarding perceived safety risk (PSR) in AD. In particular, J. Lee, D. Lee, et al. (2019) and Garidis et al. (2020) confirmed the direct effect of PR on IU for AVs. This indicates that users who perceive a high degree of safety are more likely to use AVs than people who assume a low degree of safety, and PRs regarding AVs are mostly related to external factors, such as system errors or random accidents. Man et al. (2020) found a relationship between PSR and trust, in that drivers who felt the security risks of using AVs were acceptable tended to have a high level of trust in AVs. The negative impact of driving experience was confirmed: with increasing experience, PR decreases for novel driving technologies (Brell et al., 2019a, 2019b). Moreover, it was found that PSR is a crucial factor in decreasing the adoption of AVs among millennials, since the perceived safety of AVs significantly reduces the impact of concerns related to AVs (Manfreda et al., 2019). However, in both robotics and healthcare, no significant findings regarding the effects of PSR on the adoption of AI devices were reported.
PPR. Perceived privacy risk was found to negatively impact attitudes towards AD, as well as PU and PEOU in healthcare (Dhagarra et al., 2020; Walter & Abendroth, 2020). Therefore, Walter and Abendroth (2020) suggested that service providers should take measures to strengthen user control over personal information to reduce PPR and ultimately increase positive attitudes towards the use of vehicle services. The more individuals worry about data privacy issues, the less likely they are to use self-driving cars (Zhu et al., 2020). However, some contradictory results were found for AD. In particular, Choi and Ji (2015) and Bogatzki et al. (2020) argued that PPR is not a significant factor in predicting BI, while trust has a negative effect on PPR. Bogatzki et al. (2020) explained this by positing that consumers may not be aware of the scope of data collection when they utilize AVs. Furthermore, the path from PPR to trust was not significant (Man et al., 2020; Zhang, Tao, et al., 2019). This was because the potential losses associated with a violation of privacy while driving are not as immediate and obvious as the losses associated with a risk to personal safety. Ye et al. (2019) found that PPR does not affect IU for AI technologies in healthcare. This could be explained by the fact that people are accustomed to providing personal information when registering on an app or receiving nuisance calls. However, perceived privacy should not be overlooked by designers, because the use of medical history data to train AI requires steps to be taken to prevent the data from being passed on to third parties or exposed to cyberattacks (Balta-Ozkan et al., 2013).
Technology-Related Factors in
Technology Adoption
In studies on the acceptance of AI devices, numerous technology-related factors have been found to affect behavioural factors and overall acceptance. Figure 4 shows the key factors related to AD, healthcare and robotics: transparency, reliability, complexity and compatibility.

Figure 4. Frequency analysis of the technological factors.

As shown in Figure 4, the compatibility, complexity and transparency of the AI system are the most discussed technological factors in all fields. In AD, the impact of complexity on AV acceptance was studied least, whereas in healthcare the impact of complexity on AI-based devices was the most studied, followed by reliability and difficulty level. Relatively few studies have examined the effects of technological factors on the use and adoption of robotic technologies, with only limited research available on the effects of compatibility, complexity and transparency on the acceptance of robots.
Transparency. Transparency was found to significantly moderate and influence trust in predicting IU for AVs and robotic devices (Choi & Ji, 2015; de Visser et al., 2020; Weitz et al., 2020). Therefore, user perceptions of the accuracy of autonomous technology must be clearly defined by providing adequate transparency, to help drivers predict and understand the operation of AVs and the features that enable them to regain control (Choi & Ji, 2015). Moreover, Hutchins et al. (2019) stated that transparency greatly impacts the psychological feedback of any decision that a system makes. In human–robot interactions, transparency can help calibrate trust by conveying information regarding an agent's uncertainty, dependence and vulnerability (de Visser et al., 2020). Evidence suggests that a lack of transparency with respect to the decisions of an AI system might have a negative impact on the trustworthiness of a system (Weitz et al., 2020). It was further suggested that adding transparency to a model's higher-level design objectives can help improve user attitudes and outcomes in healthcare (Cai et al., 2019).
Complexity. Regarding the effects of complexity, Heerink et al. (2009) suggested that the simplicity of tasks might warrant a closer look: because task simplicity enables setting up a strict scenario with few surprises, it gives participants less of an idea of the whole range of tasks that a robot agent could do for them. In relation to AD, K. F. Yuen, Y. D. Wong, et al. (2020) suggested that the process of using or interacting with AVs can be simplified by reducing the number of AV components that require human interaction and the number of relations between these components. The complexity of AVs can also be reduced by stimulating public interest, increasing the public's capability of using AVs, and eliciting positive experiences from using AVs. Furthermore, Fan et al. (2020) found significant positive effects of task complexity on PEOU and trust in AI-based healthcare devices. In addition, Ziefle and Röcker (2010) identified that ease of use is the most significant criterion for the acceptance of pervasive healthcare systems, along with data manageability and comfort of communication with the technology.
Reliability. Reliability was found to be positively associated with IU and trust in AD and healthcare (Asan et al., 2020; Brell et al., 2019a; Kaur & Rampersad, 2018; Siau & Wang, 2018). Brell et al. (2019a) suggested that users' perceptions regarding the reliability of AVs must increase; otherwise, even if AVs become highly reliable, users might still not trust the system and prefer human drivers. Furthermore, reliability showed a positive influence on PU of AVs (Hutchins et al., 2019). In robotics, Siau and Wang (2018) emphasised that reliability is a key factor in developing continuous trust in AI systems. To achieve reliability, the AI application should be designed to operate easily and intuitively, without unexpected downtime and crashes.
Compatibility. System compatibility was found to be an important influencing factor for the acceptance of AVs and robotic technologies. Compatibility showed a significant positive influence on PU, PEOU and trust in AD, indicating that if AVs have good compatibility and system quality, drivers tend to have a high level of trust in AVs and believe they are both useful and easy to use (Man et al., 2020). In addition, K. F. Yuen, Y. D. Wong, et al. (2020) reported that compatibility is an important factor affecting public acceptance of AVs. In healthcare, Li et al. (2019) found that, for wearable smart systems, hardware compatibility with current communication equipment, the weight and volume of the wearable device, and lifestyle significantly affected IU among older adults. In robotics, compatibility is one factor that can predict IU through PU (Turja et al., 2020).
Human Factors in Technology Adoption
The effects of human factors, specifically age, gender, education, cultural background and experience, on technology adoption are discussed in this section. Figure 5 shows the frequency analysis of the studies on these factors across different application areas.

Figure 5. Frequency analysis of human factors.

User age and gender, followed by education, were the factors most frequently studied as moderating variables in the acceptance of AI devices across different application areas. For AD, the effects of age on AV adoption were discussed most often, whereas for the acceptance of healthcare devices, the effects of gender were studied more frequently. In robotics, studies on these two factors were almost equal in number. However, unlike the other two application areas, the impact of technological expertise in the robot industry has been studied more extensively than that of education. As noted above, the effects of human factors on AI acceptance have been studied most extensively in AD, followed by robotics and healthcare.
Age. Several studies identified the moderating effects of age on the adoption of AVs, indicating that users have different attitudes towards and interest in using AVs based on their age group. For AD, older people were found to be more likely to prefer conventional cars, since they tend to be attuned to their lifestyle and less open to trying new technologies (Haboucha et al., 2017). Chen (2019) observed that users over the age of 40 were more concerned about the system's ease of use, and it was difficult to gain their trust regarding AVs. However, among younger users, trust was found to have a positive effect on attitude. AVs were further found to be more appealing to younger users when compared with older people (Bansal et al., 2016). Older people are often reluctant to adopt new technologies if they cannot perceive them as potentially reliable and useful, while younger people are typically enthusiastic about experiencing new technologies (Deb et al., 2017; T. Zhang, D. Tao, et al., 2019). Additionally, older users typically have more concerns regarding manufacturers using their private information, whereas younger drivers are generally open to AD technologies recording and using their data (Biermann et al., 2020).
Likewise, in human–robot interactions, older people tend to distrust robotic technologies they perceive as potentially unsafe, whereas younger people show higher positive feelings and overall confidence in the ability of robots to perform tasks (Scopelliti et al., 2005). Furthermore, older people interact with robots as they would with humans, using politeness markers and social speech acts that are not only unnecessary but also likely to severely confuse an actual end-to-end spoken dialogue system, whereas younger people interact with robots efficiently and adapt quickly to such systems (Wolters et al., 2009). Another difference between older and younger users in the adoption of robotic technologies is that older adults tend to be concerned regarding the physical and ecological issues associated with robots, whereas younger users expect robots to have more features and functions (Nomura et al., 2009). However, contradictory findings indicated that age showed no significant effects in relation to either AD or robotics, which was unexpected given the assumption that older generations would be less technologically savvy than younger ones (Bogatzki et al., 2020; Sener et al., 2019; Yuen et al., 2020; Zmud & Sener, 2017). Ezer et al. (2009) argued that both younger and older people are willing to accept a robot in their home if it is beneficial and not too difficult to use.
Gender. While Bogatzki et al. (2020) and Kettles and Van Belle (2019) noted that gender did not significantly affect overall BI and acceptance of AVs, other studies observed significant effects of gender on AV acceptance. In particular, it was found that, compared with men, women are more likely to trust and adopt AVs (J. Lee, G. Abe, et al., 2020; Panagiotopoulos & Dimitrakopoulos, 2018). Panagiotopoulos and Dimitrakopoulos (2018) identified that women (almost 78%) were more likely to have or use AVs when they became available on the market, while the corresponding percentage of men was lower (almost 59%). In contrast, other studies found that men were more willing to drive AVs and had fewer concerns regarding data privacy (Biermann et al., 2020; Souders & Charness, 2016). Hohenberger et al. (2016) found that gender had a negative indirect effect on willingness to use AVs through anxiety and a positive indirect effect through pleasure. In human–robot interactions, men showed higher PEOU and trust in robots compared with women, since they are likely to have more experience with technology (Gessl et al., 2019).
However, Scopelliti et al. (2005) found that gender differences were far less important in the adoption of robotic technology, with gender only influencing trust in technology. Specifically, compared with men, women were more likely to express greater scepticism regarding robotic technologies. However, in healthcare, women showed greater PU for medical technology (Wong et al., 2012). In the healthcare sector, Ziefle and Röcker (2010) reported that gender has no significant effect on the overall willingness to use medical technologies. However, significant moderating effects of gender on m-health adoption were confirmed by other researchers. Specifically, Hoque (2016) and Zhang et al. (2014) showed that men had higher levels of m-health adoption intention than women in terms of PEOU and SNs; however, specifically in relation to PU, women had higher levels of adoption intention.
Education. Differences were also observed in the acceptance of AI devices depending on educational background. C. Lee, B. Seppelt, et al. (2019) reported that younger age, a high level of education, positive affect towards advanced driver-assistance systems in use, and a high self-rated early technology adoption tendency significantly influence AV acceptance. Moreover, it was argued that people with higher education levels prefer AVs, as more educated people may already be more familiar with the idea of AVs and more likely to adopt new ideas and technologies (Brell et al., 2019b; Haboucha et al., 2017). In addition, people with a low education level are typically less confident in their ability to master a novel device (Giuliani et al., 2005). In the robotics sector, Scopelliti et al. (2005) argued that education does not significantly affect how potential users evaluate the usefulness and functionality of technological devices. Contrary to the findings of Scopelliti et al. (2005), Nomura and Nakao (2010) reported that users with an educational background in science and engineering had lower scores for negative attitudes towards interaction with robots, compared with those who had an educational background in the arts.
Cultural background. Some key differences were found with regard to cultural background in the acceptance of AI-based technologies in AD, robotics and healthcare. It was observed that people from low-income countries were most concerned about cost and safety, whereas those in more developed countries were concerned regarding their ability to take control of AVs and privacy issues (Haboucha et al., 2020). In addition, users from high-income countries particularly disliked the idea of their vehicle providing data to insurance companies, tax authorities or road organisations (Kyriakidis et al., 2015). In robotics, users from different cultural backgrounds showed different levels of positive and/or negative attitudes towards robots (Bartneck et al., 2007; Nomura et al., 2008). For healthcare and AD, it was found that the effects of SI could be greater in collectivistic cultures, owing to conformist mentality and authoritarianism, than in other cultures (Man et al., 2020; Ye et al., 2019).
Technical expertise. Technical expertise showed a positive effect on BI and trust in AD, implying that experienced drivers will have a higher level of trust and a more positive opinion of AVs (Koo et al., 2015; Rödel et al., 2014). Moreover, Brell et al. (2019a, 2019b) showed that, with increasing experience, novel driving technologies are perceived as significantly less risky, and experience significantly affects PRs in both connected vehicles (i.e., vehicles that use technology to communicate with each other; connect with traffic signals, signs and other items on the road; or obtain data from a cloud) and autonomous vehicles (i.e., vehicles that use technology to steer, accelerate and brake with little to no human input). In addition, people who drive more often and have experience with technology are more likely to choose AVs, and this is less dependent on how many of their friends have already adopted AVs (Bansal et al., 2016; Lee, Seppelt, et al., 2019). Robotic technologies were likely to be more appealing to older users, especially those less experienced with technological devices and who could benefit more from the adoption of robotic services (Di Nuovo et al., 2018). This was explained by experienced users' reluctance to learn new procedures and preference for performing familiar tasks unless they could see a strong improvement with the new technology. However, it was also shown that previous interactions with robots significantly reduce negative attitudes towards robotic technologies (Bartneck et al., 2007).
Summary. The main objective of this study was to provide a comprehensive overview of factors that influence the acceptance of AI devices across different application areas and the relationships among these factors. We consolidated the findings on this research topic through an analysis of previous research on the adoption of AI-based devices and studied the relationships among the factors and their impact on technology acceptance. A strong interdependent relationship of technological and human factors, along with behavioural factors, was identified. For example, people would think that an AV is useful when they perceive it as easy to use, while PEOU is related to the complexity of the system. This, in turn, suggests the need to examine the influence of technological and human factors, along with behavioural factors, in determining the acceptance of AI-infused systems. Table 2 summarises this review's key findings and shows the interrelationships between factors with (behavioural factors) and without behaviour theories (technological factors, human factors). Note that Table 2 does not include any statistical outcomes; rather, it provides a summary of the findings identified in this review.

TABLE 2: Interrelationships among factors with and without behaviour theories
DISCUSSION
This systematic review of the literature examined the factors that influence the acceptance of AI-infused systems. The analysis included an evaluation of the relationships among the factors across different application areas. Specific key findings and research implications are discussed below.
Key Findings
The review evaluated behavioural, technological and human factors in relation to their use in predicting technology adoption. The results indicate that many researchers have explored the impact of behavioural factors on the adoption of AI technologies, and these factors have been noted to have the strongest influence on technology adoption. Specifically, attitude and PU have been identified in many studies as the most influential factors on adoption intention. In addition to the constructs of traditional acceptance theories, this review highlighted that trust and PR also play significant roles in the adoption of AI systems. In particular, unlike previous computer technologies, AI technologies, which make extensive use of users' personal information, pose privacy risk issues. This requires consideration of privacy challenges in the implementation of any AI technology. Nevertheless, the research shows that challenges related to PR can be addressed by improving users' trust in technology (Panagiotopoulos & Dimitrakopoulos, 2018).
In general, the findings on behavioural factors confirm those reported in prior reviews regarding the acceptance of AI devices in AD (Jing et al., 2020). However, this review also revealed the effects of other factors on user behaviour towards AI-infused systems, specifically those associated with the properties of technology and human
factors. In particular, the findings revealed which technological factors have a significant impact on human behaviour in relation to technology adoption. Thus, by focusing on the required technological parameters, it is possible to influence factors related to user behaviour, or conversely, the cause of changes in user behaviour can be determined by linking these technological factors to behavioural factors. For example, user distrust of AI systems can be overcome through technology-related factors, such as by increasing transparency (de Visser et al., 2020; Weitz et al., 2020), reliability (Brell et al., 2019a) and compatibility (Man et al., 2020), and by reducing complexity (Fan et al., 2020). Compatibility also allows users to perceive AI systems as useful and easy to operate (Man et al., 2020). However, in most current technology acceptance studies, the effects of technological parameters are overlooked.
Human factors are also frequently neglected in research on the adoption of AI systems. While the Unified Theory of Acceptance and Use of Technology emphasises the need to consider the effects of certain human factors, many TAM-related studies do not take these factors' effects into account. However, the findings of the present review confirm that the effects of some human factors, especially age and gender differences, are often observed in the adoption of AI technology. Specifically, older adults and women should be seen as target user groups whose technology acceptance needs to be increased. This is because, as several researchers noted, users in these groups tend to be more reluctant, less interested and relatively less confident in using new technology (Bansal et al., 2016; Deb et al., 2017; Haboucha et al., 2017; Hohenberger et al., 2016; Scopelliti et al., 2005; Wolters et al., 2009; Zhang et al., 2014, 2019a). Since older adults are typically more concerned regarding the reliability and usefulness of technology than are younger adults, the features that need to be considered to improve acceptance among these user groups can be identified (Deb et al., 2017). Likewise, less educated and less experienced users tend to have less trust in AI technologies. Thus, identifying methods to increase trust would make it easier to facilitate technology adoption among these users. In this case, trust can be gained by making improvements related to the technological factors mentioned above. Finally, a relationship was also found between cultural background and PR. Accordingly, it is possible to predict user reactions regarding PR depending on cultural background and take necessary measures against challenges that hinder technology adoption. Furthermore, SI also needs to be considered, as the effects of SNs appear to be stronger in collectivistic cultures (Man et al., 2020; Ye et al., 2019).
There were no significant differences in the impact of behavioural factors on technology adoption across the selected application areas. It can therefore be concluded that the factors related to user attitudes towards technology do not change according to the application area. This finding can help to predict technology adoption across application areas using the same behavioural factors. However, unlike the other application areas examined, no effects of PB and PR factors on technology adoption were observed for robotics. This may be because users already perceive robot technologies as being beneficial to society and do not think robots pose any significant risks. It should be noted that this may also reflect the relatively few studies on PR factors in the robotics field. However, privacy issues have become more serious in healthcare, since the use of personal information related to medical history or checks by third-party companies might directly endanger human life (Balta-Ozkan et al., 2013). Likewise, users may also be concerned regarding the possibility that other people could take control of AVs by hacking the system, since higher concerns related to privacy risk were observed for AD (Zhu et al., 2020). Thus, privacy risks should be carefully considered when implementing both AD and healthcare technologies.
Regarding technological factors, the effects of transparency and compatibility on user behaviour did not vary depending on the application domain, implying that these two factors should be considered in technology adoption regardless of the application field. However, complexity was not observed to be an important factor influencing user behaviour in relation to AVs. This was explained by the fact that AD users do not consider driving to be a complex task (Choi & Ji, 2015; J. Lee, D. Lee, et al., 2019). However, there is evidence that potential users of AI systems in healthcare and robotics are more concerned regarding the ease of operation of technology (Ezer et al., 2009). This finding suggests that providing an effortless user interface for technologies used in healthcare and robotics is highly recommended. In addition, the lack of research on the impact of reliability on the adoption of robot technology calls for further in-depth research on this factor.
Regarding human factors, the effects of age, cultural background and gender were found to be important across application areas, implying that these three factors should be considered in relation to the acceptance of AI technologies in any application area. However, no gender effects were found on technology adoption in healthcare, indicating that adoption intention for medical technologies depends less on the user's gender. The findings of this review demonstrated that the effect of education is important in both AD and healthcare, but not robotics, suggesting that the adoption of robot technologies depends less on the user's education level. Regarding technical expertise, prior experience with the technology can facilitate the adoption of, and increase positive attitudes towards, AVs and robot technologies. However, no impact of experience was observed in healthcare, implying that technology acceptance is not associated with healthcare professionals' experience with the technology, or at least no study was found that recognised such effects.
Implications
Based on the key findings of previous research, this study has some important implications. First, this study provides a comprehensive overview of factors impacting the acceptance of AI-infused systems. Based on the results of the review, this study suggests that developers, policymakers and researchers consider technological and human factors along with factors related to user behaviour theories. Specifically, this review highlights significant effects of technological factors on user behaviour and the dependence of technology adoption on user characteristics (e.g. age, gender, education).
Second, this review suggests extending traditional TAMs to include PR when determining the adoption of AI technologies. Unlike conventional computer technologies, AI devices extensively use personal information to improve the user experience of the system. This, in turn, creates challenges for AI designers and developers. Thus, this study suggests overcoming PPR among users by increasing user control over their own information with regard to how their data are used by providers, and by ensuring safety through third-party inspections.
Third, this review showed how the factors affecting technology acceptance vary by application area. Accordingly, among the selected technological factors, to improve technology acceptance, it was recommended to consider compatibility and transparency in each application area, reliability in healthcare and AD, and complexity in robotics. In particular, user trust in AI can be improved by considering specific technological factors. In addition, developers should not overlook the effects of human factors across application areas when predicting acceptance among different user groups. Below are specific considerations that can serve to increase intention to use AI technologies:
· Ensuring privacy protection, increasing the ease of use of the system for older adults, and providing more system functionality for younger people, because older people have more concerns regarding their private information being used by manufacturers and difficulties associated with novel technologies, while younger people show concerns regarding system functionality;
· Increasing trust among women in AI technologies by taking into account the greater scepticism among women towards technologies;
· Considering users' cultural background (e.g. crowd mentality, collectivistic culture, authoritarianism) and the support of the government towards AI technologies when implementing the technology;
· Offering affordable options for AVs while reducing the PSR of users in underdeveloped countries; introducing cars that allow easy access to take control of the vehicle while reducing the PPR of users in developed countries;
· Supporting users with less experience with technology by making positive comments about technology to reduce their PR regarding AI, since they are more likely to be dependent on others' opinions before using the technology; the media can also be essential in encouraging new users to adopt AI technology;
· Identifying methods to increase trust in AI among users with low educational levels, such as through specific technological factors.
CONCLUSION
As AI devices become more widespread, research into their adoption is increasing. Therefore, unlike previous research, this study focused on investigating the factors affecting the adoption of AI-infused systems by conducting a systematic review of the literature and identifying relationships among the factors. We confirmed that the intention to use AI technologies is influenced not only by factors in behavioural theories but also by human factors and technological factors. Technological factors showed a significant correlation with IU and behavioural factors. Therefore, considering them in integrated TAMs might serve to predict the acceptance of AI-infused systems more precisely and improve their adoption. No significant differences were observed across application areas; however, some parameters should be considered. For example, while complexity is a major issue for robotic technology users, privacy risk is important in healthcare and AD. As for human factors, as expected, mainly older adults, women, those with low education levels, those inexperienced with technology, and members of relatively collectivistic cultures have difficulties adopting AI technology. It is therefore necessary to identify the specific issues associated with these groups. In particular, identifying the moderating effects of human factors can help identify specific challenges associated with user groups. Indeed, identifying individual differences in technology acceptance serves to increase the inclusiveness of technology and to identify more targeted or complex user groups.
Limitations and Future Research Directions
This study has some limitations. One limitation is the restricted set of factors studied in the adoption of AI across application areas. The number of studies reviewed, which was constrained by the selective search keywords, could be extended by using different keywords in the search. The limited literature, in turn, meant that the research covered relatively few application areas. Further studies could conduct a large-scale review including research from more fields, or experiment-based research in a specific field, to examine the findings of this literature review.
In line with the main objective of this study, the key factors that may affect the acceptance of AI devices, as well as the effects and correlations of technological and human factors on the factors in behaviour theories, were identified and confirmed across different application areas, and a conceptual basis for further studies was established. However, the implications and findings of this research still need to be validated through future experiments or surveys. Future research should aim at developing an enhanced TAM that integrates the factors studied in this review in order to validate the present findings and implications.
KEY POINTS
· Factors in behaviour theories, technological factors, and human factors are all significant in investigating the acceptance of AI-infused systems, according to findings in existing studies.
· PU and attitude were found to be the most important factors influencing the adoption of AI-infused systems in all application areas studied.
· The effects of factors related to user behaviour can be influenced by technological factors.
· There are significant differences in the way users adopt AI devices depending on human factors (e.g. younger vs. older, male vs. female, educated vs. uneducated, experienced vs. inexperienced, and users from different cultures).
· Further research is needed to validate the findings regarding relationships among factors and the direct and indirect effects of variables on the acceptance of AI-infused systems.
FUNDING
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The Basic Research Program through the National Research Foundation of Korea (NRF) funded by the MSIT.
ORCID iDs
Ulugbek Vahobjon Ugli Ismatullaev https://orcid.org/0000-0003-0690-9699
Sang-Ho Kim https://orcid.org/0000-0003-0599-289X
Ulugbek V.U. Ismatullaev is an MA student in the Industrial Engineering Department at Kumoh National Institute of Technology (KIT), Gumi, South Korea. He completed his BA at Fergana Polytechnic Institute (FerPI), Fergana, Uzbekistan, in 2018, where he studied automation and control systems.
Sang-Ho Kim is a Professor in the Industrial Engineering Department at the Kumoh National Institute of Technology (KIT), Gumi, South Korea. He obtained his PhD in industrial engineering at Pohang University of Science and Technology (POSTECH), Pohang, South Korea, in 1995.
Date received: June 2, 2021
Date accepted: November 15, 2021