Computers in Human Behavior 118 (2021) 106700
Available online 16 January 2021
0747-5632/Published by Elsevier Ltd.
Developing a formative scale to measure consumers' trust toward interaction with artificially intelligent (AI) social robots in service delivery

Oscar Hengxuan Chi a, Shizhen Jia b,*, Yafang Li b, Dogan Gursoy a,c

a School of Hospitality Business Management, Carson College of Business, Washington State University, Pullman, WA, 99164, USA
b Department of Management, Information Systems, and Entrepreneurship, Carson College of Business, Washington State University, Pullman, WA, 99164, USA
c School of Tourism and Hospitality, University of Johannesburg, South Africa

* Corresponding author.
E-mail addresses: hx.chi@wsu.edu (O.H. Chi), shizhen.jia@wsu.edu (S. Jia), yafang.li@wsu.edu (Y. Li), dgursoy@wsu.edu (D. Gursoy).

https://doi.org/10.1016/j.chb.2021.106700
Received 30 May 2020; Received in revised form 14 November 2020; Accepted 10 January 2021
ARTICLE INFO
Keywords:
Trust
Articial intelligence
Interaction
Social robot
Service
Scale development
ABSTRACT

This study develops and validates a scale of Social Service Robot Interaction Trust (SSRIT) that measures consumers' trust toward interaction with AI social robots in service delivery. Through a systematic literature review, semi-structured interviews, a focus group study, and rigorous quantitative studies, this study conceptualizes the interaction-based trust and proposes a third-order reflective-formative scale, which suggests that trust in interaction is measured by three second-order indicators: propensity to trust in robot, trustworthy robot function and design, and trustworthy service task and context. Propensity to trust in robot is predicted by familiarity, robot use self-efficacy, social influence, technology attachment, and trust stance in technology. Trustworthy robot function and design is formed by anthropomorphism, robot performance, and effort expectancy. Trustworthy service task and context is determined by perceived service risk, robot-service fit, and facilitating robot-use condition. The convergent, discriminant, external, concurrent, and predictive validities of the scale are validated. Theoretical contributions and managerial implications are provided.
1. Introduction

Artificial intelligence (AI), a series of technologies that enable a system to perceive, understand, react, and learn (Bowen & Morosan, 2018), not only allows automation but also empowers machines to demonstrate mechanical, analytical, intuitive, and empathetic intelligence (Huang & Rust, 2018). In recent years, different types of AI technologies, including smart devices, self-service technologies, chatbots, and service robots, have been utilized in the service industry (Chi, Denton, & Gursoy, 2020), which has become high-tech (Gursoy et al., 2019). In addition to economic factors such as increasing labor costs (Gursoy et al., 2019) and increasing minimum wages (McAfee & Brynjolfsson, 2016) as well as social factors such as an aging society (Lee, Lin, & Shih, 2018), the recent outbreak of COVID-19 has propelled the use of robots in service delivery due to the need for social distancing (Gursoy et al., 2020).
However, customers' evaluation of services is significantly influenced by the human-to-human interactions involved in service delivery (Dedeoglu, Bilgihan, Ye, Buonincontri, & Okumus, 2018). Scholars argue that using robots to serve customers challenges customers' perception of services, primarily high-touch focused services (Chi et al., 2020). To reconcile customers' need for high-touch service with the industry trend of becoming high-tech, AI social robots, which have the capability to follow humans' behavioral norms and directly interact with humans, have become the most competitive candidate to deliver high-touch services in a rapidly changing service delivery environment (Chi et al., 2020). Utilizing artificial analytical, intuitive, and empathetic intelligence, AI social robots, as service employees, can provide high-quality personalized and customized services by directly interacting with customers (West et al., 2018). Despite the advantages of AI social robot technology in service delivery, previous studies have suggested that not all customers are likely to interact with AI robots and accept the services provided by these devices (e.g., Chi et al., 2020; Gursoy et al., 2019). Existing information systems (IS) literature has demonstrated that the perceived uncertainty associated with using a technology partially explains objections to its use (Fang et al., 2014; McKnight et al., 1998). The level of uncertainty seems to be further enlarged in a service context. Due to the inherent intangibility (Kotler, 1997) and heterogeneity (Ai et al., 2019) of services, it is difficult for customers to evaluate and compare services before actually experiencing them. As a result, compared to other contexts (e.g., household use), customers are likely to have a higher level of perceived
uncertainty in interacting with AI robots during service transactions.

Previous IS studies have frequently documented that the interaction between humans and technology is mediated by trust (e.g., Ba & Pavlou, 2002; Gefen et al., 2003), since trust is a critical antecedent of risk-taking behaviors (McKnight et al., 1998) and can reduce perceived uncertainty. For these reasons, customers who trust the interaction with AI robots are likely to choose services provided by these robots in the first place. Moreover, a great body of service studies has demonstrated the critical role of trust in fostering customer-service provider relationships and customer retention (e.g., Fang et al., 2014; Qureshi et al., 2009). Accordingly, customers who trust the interaction with robots are likely to build a long-term relationship with service providers who utilize AI social robots to deliver services. Although many existing studies have explored the factors that predict the acceptance of AI devices in services (e.g., Gursoy et al., 2019; Lin et al., 2019; Lu et al., 2019), the current literature lacks an understanding of trust in AI devices in services. A systematic investigation of why customers develop trust in human-AI social robot interaction may offer additional contributions to the service industry.
Given the critical role of trust in human-AI social robot interactions, knowledge gaps exist in understanding the development of interaction trust from both theoretical and operational perspectives. From a theoretical perspective, previous IS and marketing research has commonly used two main frameworks to explain trust in technology: people's propensity to trust (e.g., Dimitriadis & Kyrezis, 2010; Lu et al., 2016) and characteristics of technology (e.g., Chang et al., 2017; Gillath et al., 2021). The trust disposition framework primarily focuses on psychological factors (e.g., trusting stance and faith in humanity) that lead people to have different tendencies to trust (McKnight & Chervany, 2002). The IT characteristic framework investigates different aspects of technology, such as functionality, reliability, and helpfulness (Lankton & McKnight, 2011). Since AI social robots directly interact with consumers in service delivery, consumers perceive these robots not only as technology devices but also as social entities that can provide social interactions (Gursoy et al., 2019; Van Doorn et al., 2017). More importantly, previous human-technology interaction theories suggest that trust in technology interactions comprises a broader scope than trust in technology. As a result, how consumers evaluate and trust interactions with these AI social robots is likely to deviate from traditional technology trust theories. Thus, there is a need for conceptualizing trust in human-AI social robot interaction using an interaction-based approach.
From the operational perspective, to measure trust in AI devices, most existing studies that investigated antecedents and outcomes of trust have utilized unidimensional reflective scales (e.g., Lee et al., 2018; Nadarzynski et al., 2019; van Pinxteren et al., 2019; Xu, 2019). Given the multidimensional nature of technology trust (Lankton & McKnight, 2011; McKnight & Chervany, 2002), the use of unidimensional scales limits the understanding of the different endogenous elements underlying the latent construct (Diamantopoulos & Winklhofer, 2001). In addition, a reflective scale measures the outcomes of a construct rather than the causes or drivers of the construct (Jarvis et al., 2003), providing little guidance to service providers who are more interested in identifying causes. Therefore, there is a practical need for operationalizing trust in human-AI social robot interactions by developing a multidimensional formative measurement scale that captures the actionable attributes of trust in interaction. Given that trust is a crucial antecedent of technology acceptance (Sharma & Klein, 2020) and of willingness to build a long-term social exchange relationship, these knowledge gaps in understanding and measuring consumers' trust toward human-AI social robot interactions significantly hinder the effectiveness of designing and adopting AI social robots that consumers trust to use and interact with during service deliveries.
Drawing upon the human-computer interaction framework (Zhang & Li, 2005), this study utilizes an interaction-based approach to conceptualize consumers' trust in human-AI social service robot interaction. This study proposes that trust in interaction is likely to be developed through three dimensions: trustworthy robot function and design, trustworthy service task and context, and consumer propensity to trust robots. Accordingly, a formative scale of Social Service Robot Interaction Trust (SSRIT) is proposed in this study. Our new conceptualization of trust in AI social robots reflects the service-context-focused nature of interactions. In addition, a formative model brings different potential indicators that measure specific and actionable attributes of trust to the level of a holistic construct (Cenfetelli & Bassellier, 2009), allowing a more accurate and informative measurement that captures the dynamic and complex nature of trust in interactions with AI social robots.

This study is organized into four major sections. In section 2 (Theoretical Background), trust and interaction-based trust are discussed, and SSRIT is conceptualized. In section 3 (The Approaches to Measure SSRIT), the use of a formative scale approach is justified through a systematic literature review. In section 4 (Development of the SSRIT Scale), reflective items are generated, and the scale is refined and validated via a series of qualitative and quantitative studies. Lastly, in section 5 (Discussion), the study results, theoretical contributions, and managerial implications are discussed.
2. Theoretical Background

2.1. Trust

Trust is one of the fundamental antecedents of interactions with others. It can be described as the intention to rely on a party regardless of the potential uncertainty and loss (Söllner et al., 2016). Scholars believe that trust is an expectation of positive outcomes provided by the trustee. For example, Cook and Wall (1980, p. 39) defined trust as a willingness "to ascribe good intentions to and have confidence in the words and actions of other people." The literature on trust across different research domains commonly conceptualizes trust as a belief formed through the evaluation of certain attributes of an object (Colquitt & Rodell, 2011; McKnight & Chervany, 2002). Therefore, trust is commonly measured by using multidimensional scales (McKnight & Chervany, 2002).
2.2. Trust in human-AI social robot interactions

In the service industry, an increasing number of service providers have been introducing AI social service robots to deliver frontline services (Chi et al., 2020; Lin et al., 2019). Even though trust has received great attention (Weitz et al., 2019), the anthropomorphic designs, interaction functions, use of artificial intelligence, and complex service contexts make trust in interactions with AI social robots significantly different from trust toward traditional technology products (Gursoy et al., 2019).

Two challenges inhibit the adoption of existing technology trust frameworks for researching trust in interactions with AI social service robots. First, the traditional view of IT mainly conceived of technology as a tool to promote productivity or to ensure social functions. However, due to advanced anthropomorphic features and intelligence characteristics (Hyken, 2017), service robots can embody human-like social skills and directly interact with consumers (Rodriguez-Lizundia, Marcos, Zalama, Gómez-García-Bermejo, & Gordaliza, 2015). Scholars argue that, in service delivery, social robots are perceived as a social entity instead of a machine (Van Doorn et al., 2017). Therefore, customers evaluate AI social robots not only at the robotic embodiment level but also at the human-oriented level (Yu, 2019) and the social level (Gursoy et al., 2019). For this reason, consumers' trust in interactions with AI social robots is likely to be a mixture of human trust in IT and human trust in humans.
Second, trust in interaction seems to be conceptually different from technology trust and is likely to capture a broader scope. Previous human-technology interaction research has commonly used activity theory to explain the interaction between humans and technology products (Kaptelinin & Nardi, 2017; Nardi, 1995). According to activity theory, human-technology interaction involves three significant elements: subject (users), tool (technology), and object (task) (Nardi, 1995). Interactions occur when a user is able to use the technology to complete the task and believes that the use of technology can provide better outcomes. Accordingly, to develop trust in interaction with a technology, a user needs to not only trust the technology but also trust that using it will help him or her complete the task more efficiently. Due to the complex nature of human-technology interaction, trust in interaction may need to be conceptualized using an interaction-based approach.
2.3. The human-computer interaction (HCI) framework

Compared to activity theory, Zhang and Li's (2005) HCI framework, which has been used to evaluate human-AI interaction studies (e.g., Rzepka & Berger, 2018), offers a slightly more comprehensive picture of the elements that may be involved in human-AI social robot interactions. Zhang and Li's framework suggests that the interaction between a human and a technology device is a holistic phenomenon driven by system characteristics (e.g., system complexity or functions), user characteristics (e.g., personal traits), and the task and context (e.g., leisure or business, service environment). Similar to activity theory, Zhang and Li's framework highlights the critical roles of user, technology, and task in human-technology interaction. Moreover, the HCI framework emphasizes the impact of context on interaction. Due to the inherent heterogeneity of services (Ai et al., 2019), the HCI framework is likely to better fit the service setting in terms of capturing the dynamic and complex nature of trust in interactions with AI social robots in service delivery.
2.4. Conceptualization of SSRIT

Based on the HCI framework discussed above, SSRIT in service delivery tends to be influenced by a combination of factors derived from three dimensions: robot function and design, service task and context, and customers' propensity to trust in robot. More specifically, before consumers decide to use technology devices, they must trust the technology or the functions offered by these devices (Benbasat & Wang, 2005). Next, due to the intangible and diverse nature of services, consumers are also likely to evaluate the use of social robots in specific service contexts (Billings et al., 2012) and tasks (Chi et al., 2020). Lastly, besides extrinsic factors, consumer trust toward technology is also influenced by the user's propensity or tendency to trust (Merritt & Ilgen, 2008; Tussyadiah & Miller, 2019).

Based on the general conceptualization of trust discussed in section 2.1, this study conceptualizes SSRIT as a belief that interactions with AI social robots can provide positive service outcomes. In addition, drawing on the HCI framework, this study proposes that this trust belief is formed through the trustworthy robot function and design, trustworthy service task and context, and propensity to trust in robot dimensions, as shown in Fig. 1.

Fig. 1. Proposed dimensions of SSRIT.
3. Approaches to measure SSRIT

3.1. Formative scale

Previous IS and marketing studies have used either reflective or formative scales to measure latent constructs (Diamantopoulos & Winklhofer, 2001; Jarvis et al., 2003). There are two significant differences between these two types of scales. The first difference is the relationship between measurement indicators and a latent construct (Jarvis et al., 2003). In a reflective scale, the level of a latent construct is reflected by the level of the indicators. In other words, indicators can be considered results of a latent construct (Jarvis et al., 2003). However, in a formative scale, also called a formative index, indicators are the causes of a latent construct (Diamantopoulos & Winklhofer, 2001; Jarvis et al., 2003). Therefore, the development of a formative scale focuses on identifying formative indicators that may alter the level of a latent construct (e.g., Cao et al., 2019; Dickinger & Stangl, 2013; Taheri et al., 2014).
Another difference is the relationship among indicators. The indicators in a reflective scale are inherently interchangeable and correlated (Jarvis et al., 2003): an increase in one reflective indicator must be associated with an increase in the other indicators. In contrast, formative indicators are neither interchangeable nor theoretically correlated, since formative indicators are used to measure different elements representing a latent construct (Jarvis et al., 2003). In other words, a latent construct measured by formative indicators is a linear function of indicators that are potentially theoretically unrelated (Jarvis et al., 2003).
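In standard notation (e.g., Diamantopoulos & Winklhofer, 2001; Jarvis et al., 2003), where $\eta$ is the latent construct, $x_i$ its indicators, and $\zeta$ and $\varepsilon_i$ error terms, the two measurement models can be written as:

$$\text{Formative:}\quad \eta = \sum_{i=1}^{n} \gamma_i x_i + \zeta \qquad\qquad \text{Reflective:}\quad x_i = \lambda_i \eta + \varepsilon_i$$

In the formative case, the weights $\gamma_i$ carry the diagnostic information: each indicator contributes a distinct element of the construct rather than mirroring it.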
Cenfetelli and Bassellier (2009) suggest two advantages of a formative model. First, by measuring a construct using a set of theoretically distinct observable variables, a formative model allows researchers to measure specific and actionable attributes of a latent construct. Second, since formative indicators are the "causes" of the same latent variable, a formative model aggregates disparate indicators to measure a single construct, enabling researchers to study these causes at a holistic level. Due to these advantages, compared to a reflective scale, a formative scale is considered to have more practical value in identifying management problems (e.g., Cao et al., 2019; Dickinger & Stangl, 2013; Taheri et al., 2014).
3.2. Construction of the SSRIT scale

The SSRIT scale was designed as a third-order reflective-formative scale. As discussed earlier, SSRIT is conceptualized as having three dimensions. A multidimensional scale is formative at the highest-order level, since each dimension measures a portion of the latent construct (Petter et al., 2007). Moreover, since this study focuses on developing a more specific scale that explains why customers develop SSRIT, formative indicators are used to measure the three SSRIT dimensions, based on the advantages discussed above. Lastly, to accurately measure each formative indicator, reflective items are used at the first-order level. Therefore, the proposed SSRIT scale has a reflective first order and formative second and third orders.
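One conventional way to write the three layers of such a reflective-formative hierarchy (the indexing here is illustrative, not taken from the article) is:

$$\text{First order (reflective):}\quad x_{ij} = \lambda_{ij}\,\eta_j + \varepsilon_{ij}$$
$$\text{Second order (formative):}\quad \xi_k = \textstyle\sum_j w_{jk}\,\eta_j + \zeta_k$$
$$\text{Third order (formative):}\quad \text{SSRIT} = \textstyle\sum_k w_k\,\xi_k + \zeta$$

where the items $x_{ij}$ reflect the 11 indicators $\eta_j$, which in turn form the three dimensions $\xi_k$, which together form SSRIT.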
Following the recommendations of previous research (Diamantopoulos & Winklhofer, 2001; Jarvis et al., 2003) and the procedures used in existing formative scale development studies (e.g., Cao et al., 2019; Dickinger & Stangl, 2013; Taheri et al., 2014), Section 3.3 presents the procedure for identifying formative indicators, or "causes," of each dimension.
3.3. Literature review: identification of formative indicators of SSRIT dimensions

Scholars have suggested that the set of indicators in a formative model must capture the entire scope of the latent construct (Diamantopoulos & Winklhofer, 2001). Therefore, this study conducted a systematic literature review to identify as many potential formative indicators as possible to capture the scope of each SSRIT dimension, each of which can potentially form a new second-order dimension.

This systematic literature review utilized five databases: Science Direct, Sage, Springer Link, Taylor & Francis, and Web of Science. Keywords such as trust, AI, artificial intelligence, robot, automate, social, hotel, hospitality, tourism, airline, restaurant, health care, senior care, elder care, education, school, and service were used to search English-language journal articles published after 2010. This initial search resulted in a list of 86 publications. Afterward, this study refined the article pool by keeping the studies that 1) investigated human-robot interactions in service contexts, 2) involved AI technology, and 3) explored factors that may influence consumers' trust in AI social robots. This process resulted in a final article pool of 34 publications. Utilizing the proposed SSRIT framework (Fig. 1), three researchers individually coded and categorized the potential formative indicators into the different dimensions, namely, propensity to trust in robot (user characteristics), trustworthy robot function and design (system characteristics), and trustworthy service task and context. The results were then combined and refined, leading to an 11-indicator, three-dimensional conceptual model (Fig. 2).

Fig. 2. Proposed SSRIT measurement model.
3.3.1. Propensity to trust in robot

The propensity to trust dimension refers to the intrinsic factors within a person's characteristics that may drive that person's trust toward AI social robots in service delivery. Previous studies have identified several potential formative indicators that represent the propensity to trust in robot.

3.3.1.1. Familiarity. Familiarity reflects a person's understanding of an object (Komiak & Benbasat, 2006). Existing studies have found that familiarity significantly influences consumers' trust in e-commerce (Zhang et al., 2007), brands (Benedicktus et al., 2010), and recommendation agents (Komiak & Benbasat, 2006). In general, familiarity reduces people's uncertainty toward an object and promotes the accumulation of trust-related knowledge (Komiak & Benbasat, 2006). When people are familiar with an object, they tend to process information about it in a fluent and straightforward manner, which leads to positive feelings and attitudes toward the object (Kim et al., 2013). In human-robot interaction research, the familiarity-trust relationship is understudied. However, scholars suggest that users' familiarity with robots predicts acceptance of interacting with robots (Kim et al., 2013) and increases positive evaluations of human-robot interactions (Fortunati, Sarrica, Ferrin, Brondi, & Honsell, 2018). Therefore, consumers who are familiar (vs. unfamiliar) with AI social service robots are more likely to have a higher propensity to trust these robots in service delivery.
3.3.1.2. Robot use self-efficacy. Robot use self-efficacy is another factor that determines users' acceptance of service robots. Self-efficacy is described as people's perception of their competence to perform a specific task (Bandura, 1997). Robot use self-efficacy refers to a person's belief in his or her own ability to use robots (Turja et al., 2019). Latikka et al. (2019) found that robot use self-efficacy is independent of general self-efficacy. In the health care context, they found that employees who have a higher level of robot use self-efficacy exhibit higher functional and social acceptance toward service robots.

According to Social Cognitive Theory, self-efficacy leads to a positive perception of a future outcome (Bandura, 1997). Accordingly, customers who have high robot use self-efficacy are more likely to believe that the interaction with robots will deliver positive service outcomes. Therefore, they are more likely to have a propensity to trust the interaction. On the other hand, self-efficacy is generated via knowledge accumulation and existing experience with an object (Fan et al., 2019). As discussed previously, users who have more knowledge of and experience with a technology are more likely to develop trust in the devices powered by that technology. In the IS literature, self-efficacy has been found to positively affect consumers' initial trust (Zhou, 2012). For these reasons, this study proposes that robot use self-efficacy is a significant indicator of consumers' propensity to trust AI social robots.
3.3.1.3. Social influence. Social influence refers to the degree to which a person's social network believes in certain behaviors (Gursoy et al., 2019). According to Social Impact Theory (Latané, 1981), people tend to conform to their social networks' beliefs. On the one hand, conforming to group norms promotes people's group attachment, leading to positive affect (Gursoy et al., 2019). On the other hand, group belief is considered a reliable information source for assessing an object when knowledge of the object is not sufficient (Althuizen, 2018). Previous studies have found that social influence significantly predicts consumers' willingness to interact with AI devices and AI robots (Gursoy et al., 2019; Lin et al., 2019; Lu et al., 2019). Other studies have also reported that social influence predicts trust toward online banking services (Chaouali et al., 2016). Therefore, if consumers' social networks trust and advocate using AI social robots, these consumers are likely to have a propensity to develop trust in human-robot interaction in service delivery.
3.3.1.4. Technology attachment. Technology attachment is defined as a psychological connection between a person and technology (Suh et al., 2011). Scholars argue that people will exhibit a positive behavioral intention if they are emotionally attached to a technology product (Wu & Cheng, 2018). Other scholars have suggested that technology attachment plays a salient role in consumers' evaluation of technology products (Perlaviciute & Steg, 2014). Previous studies provide evidence of the impact of technology attachment on consumers' behaviors toward AI technologies. For example, Nadarzynski et al. (2019) explored users' acceptance of AI chatbots in the context of healthcare consulting. They found that users with low IT attachment are less likely to use chatbots. Similarly, Wu and Cheng (2018) reported that consumers' satisfaction and trust in smart hotel devices were positively related to their technology attachment level. Based on the discussion above, technology attachment is also likely to contribute to consumers' propensity to trust AI social robots.
3.3.1.5. Trust stance in technology. Trust stance refers to people's general belief that using technology products benefits them (Tussyadiah & Miller, 2019). It is a component of the disposition to trust, which indicates people's general tendency to rely on technology. Many existing studies have found that some consumers tend to trust technology products when other factors are held equal (e.g., Salam et al., 2005), suggesting the importance of this disposition in explaining overall technology trust. Since people who have a higher trust stance are more likely to exhibit a propensity to trust a specific technology product (Kikuchi et al., 1996), this study proposes that consumers' trust stance in technology significantly drives their propensity to trust AI social robots.
3.3.2. Trustworthy robot function and design

The trustworthy robot function and design dimension refers to the trust associated with the characteristics of AI social robots. These characteristics are likely to be captured by three potential formative indicators: anthropomorphism, robot performance, and effort expectancy.
3.3.2.1. Anthropomorphism. Anthropomorphism is described as the level of a robot's human-like characteristics (Lin et al., 2019). These human-like features not only include a humanoid appearance but also comprise human-like emotions and behaviors (Gursoy et al., 2019).

Drawing on uncanny valley theory, previous studies have frequently found a negative impact of robot anthropomorphism on consumers' attitudes toward robots. For instance, Lu et al. (2019) reported that the anthropomorphism of AI robotic devices is negatively associated with customers' intention to use those devices in a hotel setting. Lin et al. (2019) confirmed these findings in both full-service and limited-service hotel contexts. Moreover, Yu (2019) investigated hotel customers' perceptions of robots' human-like features using YouTube online reviews. Yu's findings indicated a negative relationship between customers' perceptions of AI robots and the robots' human-like features.

However, previous studies have also reported that the human-like features of a robot are likely to increase consumers' trust in AI robots. Considering that AI robots are equipped with advanced artificial intelligence technologies, human-like elements may not only evoke emotional responses (Aziz, Moganan, Ismail, & Lokman, 2015) but also provide consumers with a human-like interaction with a social service robot in service delivery (Lee et al., 2017; Pan et al., 2015; Tung & Au, 2018; Xu, 2019). Therefore, anthropomorphism enables AI social robots to act as a social entity, which can lead to a higher level of trust. For instance, Qiu et al. (2019) found that customers are more willing to build rapport with a robot that is perceived to be human-like and intelligent. Xu (2019) found that users express more trust toward a robot with a human-like voice than one with a machinelike voice. Moreover, scholars have reported that anthropomorphism increases trust toward service robots and improves other service robot evaluations (van Pinxteren et al., 2019). For these reasons, anthropomorphism is likely to influence trustworthy robot function and design.
3.3.2.2. Robot performance. In this study, robot performance refers to consumers' perception that an AI social robot has the functionality to deliver satisfactory services and performs as well as or better than a human employee. Based on McKnight et al.'s (2011) Technology Artifact framework, consumers' trust toward a technology product is primarily driven by the perceived functionality, helpfulness, and reliability of the product. In a service context, AI robots are used to perform human tasks and directly interact with consumers. Based on this specific context, the Artificially Intelligent Device Use Acceptance framework of Gursoy et al. (2019) suggests that consumers' evaluation of performance is likely to involve a comparison between robots and human employees. Since many existing studies have found that consumers' evaluation of a robot is largely determined by its performance (e.g., Bedaf et al., 2019; Gursoy et al., 2019; Lin et al., 2019), this study proposes that robot performance is a significant indicator of trustworthy robot function and design.
3.3.2.3. Effort expectancy. Effort expectancy is described as users' perception of the effort required to interact with a social service robot in service transactions (Gursoy et al., 2019). Even though the advanced technologies embedded in AI social robots allow customers to interact with them with minimal effort, scholars argue that the psychological effort required to learn how to interact with a humanlike robot is still significant (Gursoy et al., 2019). This required psychological effort has been found to be a salient factor that impacts users' evaluations of AI robots (Gursoy et al., 2019; Lin et al., 2019; Lu et al., 2019). While anthropomorphic robots may provide human-like interaction opportunities (Qiu et al., 2019; van Pinxteren et al., 2019; Xu, 2019), consumers who are psychologically challenged by human-like robots may treat humanlike features as cons rather than pros. Since trust, as a belief, is formed via the evaluation of different attributes of an object (Colquitt & Rodell, 2011), customers who perceive that having human-like interactions with a social service robot requires significant psychological effort are likely to have low trust in robot function and design.
3.3.3. Trustworthy service task and context

Due to the intangible and diverse nature of services, customers' evaluation of trust in interactions with AI social robots is likely to depend on the service context and the service task a social service robot is expected to perform (Chi et al., 2020).

3.3.3.1. Perceived service risk. Perceived service risk refers to consumers' perception of the potential loss associated with a service failure. In general, perceived risk indicates the perception of uncertainty (Jin et al., 2015). When consumers perceive high uncertainty toward a service, they are less likely to make a concrete service evaluation. For instance, an existing study reported that when customers perceive a high level of risk in using smart hotel technologies, they are unlikely to recommend the use of smart devices to their friends (Wu & Cheng, 2018). Scholars also argue that in an uncertain situation, perceived risk determines the level of trust (Mayer et al., 1995). When the service risk is high, consumers are unlikely to rely on another party to provide the service due to the uncertainty. Several existing studies have identified negative causal relationships between risk and trust in different technology use contexts, such as the online marketplace (Kim & Koo, 2016) and automated vehicles (Zhang et al., 2019). Based on the discussion above, perceived service risk is likely to be a significant indicator of the trustworthiness of service tasks and contexts.
3.3.3.2. Facilitating robot-use condition. Drawing on the definition of facilitating conditions (Venkatesh et al., 2008), this study defines the facilitating robot-use condition as the availability of assistance that helps consumers effectively use and interact with AI social robots in service delivery. Although these social robots are equipped with anthropomorphic features and artificial human intelligence, customers with different technology capabilities may still need different levels of assistance to interact with these robots.

Facilitating condition is considered a critical factor determining users' willingness to accept technology and has been included in various technology acceptance frameworks (see van Doorn et al., 2016). A recent study suggested that facilitating conditions positively predict consumers' willingness to interact with AI robots in hotel services (Lu et al., 2019). Scholars also suggest that facilitating conditions tend to influence users' trust beliefs toward a technology by promoting feelings of security (Lu et al., 2005). Since AI service robot technology is new, most customers are still in the initial stage of recognizing and trying out human-robot interaction (Chi et al., 2020). Even though a customer may not have sufficient knowledge regarding AI robots, a well-developed facilitating robot-use condition may result in the perception of a safe and trustworthy service context. For this reason, this study proposes that the facilitating robot-use condition is an essential indicator of the trustworthy service task and context.
3.3.3.3. Robot-service fit. This study defines robot-service fit as the perceived level of fit between robots and service tasks, types, and scales. First, robots may fit some service tasks perfectly but not others, partially due to their functions and designs. As argued by Huang and Rust (2018), robots can possess one of four intelligence levels: mechanical, analytical, intuitive, and empathetic. Different levels of intelligence may fit different tasks. Moreover, service types can also influence perceived robot-service fit, since customers tend to have different levels of psychological acceptance of robot use across different types of service (Ivanov et al., 2018). For example, Ivanov et al. (2018) reported that customers in Russia exhibit high acceptance of the use of robots in services such as carrying goods (e.g., delivering meals), providing information (e.g., recommending restaurants), or processing transactions (e.g., processing orders). However, they object to the use of robots in security services (e.g., guards) or in jobs that require skin contact (e.g., massage). Tussyadiah and Miller (2019) reported that a robot is as effective as a human employee in influencing customers' pro-environmental behavioral intentions, since customers may consider such robots human-like rather than as robots themselves. Lastly, due to different service expectations across service scales, customers tend to evaluate robots differently. In hotel service studies (Chan & Tung, 2019; Lin et al., 2019), scholars found that guests tend to have a more positive evaluation of robotic services when they are primarily seeking functional value (vs. hedonic value) from the service (e.g., limited-service or economy hotels). Since robot-service fit is likely to alter customers' overall service evaluation, this study proposes that robot-service fit impacts consumers' evaluation of trustworthy service tasks and contexts.
4. Development of the SSRIT scale

To develop the SSRIT scale, this study followed the procedures recommended by MacKenzie et al. (2011) (please see Appendix C). A mental simulation approach was used to let customers fantasize about interacting with a social service robot, since the majority of them had not had any direct experience with social robots. Based on the theory of grounded cognition, mental simulation can be regarded as a process of grounded cognition (Barsalou, 2008). It is an "imitative representation of the functioning or process of some event or series of events" (Taylor & Schneider, 1989, p. 175) and has been used to study events that are likely to occur in the future (Taylor & Schneider, 1989).

This study utilizes mental simulation for two reasons. First, this study aims to develop a scale to measure SSRIT in different service settings. Mental simulation allows us to treat context factors (e.g., robot features, service contexts, service tasks) as random factors. Hence, it promotes the generalizability of the scale (Escalas, 2004). Second, since mental simulation enables customers to fantasize about an event, this approach provides a useful tool for investigating customers' attitudes and intentions toward non-existent products (Escalas, 2004) or new products (e.g., Elder & Krishna, 2012; Castaño et al., 2008), such as electric vehicles (Ebermann et al., 2016) or hotel robots (Lu et al., 2019).
4.1. Stage 1: item generation

To create the initial item pool, this study adopted items measuring the 11 formative indicators included in the conceptual model from the existing literature. A total of 49 items were identified through the literature review process (see Appendix A). To further expand the item pool, as well as the number of potential formative indicators, the researchers conducted a series of one-on-one semi-structured interviews over the course of a month. Fourteen participants in different age groups were interviewed. On average, each interview took around 2 h. During the interviews, the researchers first introduced the purpose of the study and provided examples of interacting with social robots in different service settings. Next, interviewees were asked to imagine and describe an interaction with AI social robots in services. Last, interviewees were asked to discuss the factors that could determine their level of trust or distrust in social robots. The interview results were interpreted, coded, and categorized into different themes using a hermeneutical approach to ensure that the items generated through the interviews had a solid theoretical grounding (Kähr et al., 2016). As a result, 13 items were added to the initial item pool. However, no new formative indicators were found.
4.2. Stage 2: validating face validity

The initial item pool was evaluated by a focus group of 12 individuals. This group included customers who had direct experience with social robots; doctoral students and faculty members working in the area of technology and customer behavior research; and managers of related service firms. These individuals rated the understandability and relevance of each item and offered further insights in order to improve the face validity of the measurement scale. This process resulted in a pool of 62 items (Appendix A) that were subjected to empirical testing.
4.3. Stage 3: item purification and refinement

A customer panel (n = 326) was recruited via Amazon MTurk to purify the initial item pool. A short description of the study and a definition of social robots were provided, followed by a few examples of interactions with these robots in different service contexts. Then, participants were asked to rate each item using a 5-point Likert scale (1: strongly disagree; 5: strongly agree). The demographic profile of respondents is displayed in Appendix B.

The dataset was checked for univariate normality based on the recommendation that absolute skewness and kurtosis values be smaller than 2. Next, a series of EFAs (principal component analysis with a Promax rotation) was performed. The EFA models were evaluated based on the following criteria: Kaiser-Meyer-Olkin (KMO) > 0.5, p-value of Bartlett's Test of Sphericity < .05, eigenvalue > 1, and total variance explained > 60%. Items with a factor loading higher than 0.60, cross-loadings lower than 0.40, and communality values higher than 0.40 were retained to ensure the unidimensionality of each factor.
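For readers who want to reproduce this kind of item screening, the sketch below shows how the reported checks (skewness/kurtosis, KMO, Bartlett's test, Promax-rotated principal components, and the retention thresholds) can be run in Python with the factor_analyzer package. It is a minimal illustration, not the authors' code; the DataFrame `items` (respondents in rows, the 62 candidate items in columns) is assumed.

```python
# pip install factor_analyzer scipy pandas
import pandas as pd
from scipy.stats import skew, kurtosis
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_kmo, calculate_bartlett_sphericity)

def purify_items(items: pd.DataFrame, n_factors: int = 11) -> pd.DataFrame:
    """Screen Likert items using the criteria reported in the article."""
    # Univariate normality: |skewness| and |kurtosis| below 2
    assert abs(skew(items)).max() < 2 and abs(kurtosis(items)).max() < 2

    # Sampling adequacy and sphericity: KMO > 0.5, Bartlett p < .05
    _, kmo_model = calculate_kmo(items)
    chi2, p_value = calculate_bartlett_sphericity(items)
    print(f"KMO = {kmo_model:.2f}, Bartlett chi2 = {chi2:.1f}, p = {p_value:.3f}")

    # Principal components with an oblique (Promax) rotation
    fa = FactorAnalyzer(n_factors=n_factors, rotation="promax", method="principal")
    fa.fit(items)
    loadings = pd.DataFrame(fa.loadings_, index=items.columns)

    # Retain items: primary loading > .60, cross-loading < .40, communality > .40
    primary = loadings.abs().max(axis=1)
    cross = loadings.abs().apply(lambda r: r.nlargest(2).iloc[-1], axis=1)
    communality = pd.Series(fa.get_communalities(), index=items.columns)
    keep = (primary > 0.60) & (cross < 0.40) & (communality > 0.40)
    return items.loc[:, keep]
```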
This process resulted (Table 1) in 50 items loading on 11 latent factors. Five factors, namely familiarity (4 items), robot use self-efficacy (5 items), social influence (4 items), technology attachment (3 items), and trust stance in technology (3 items), were related to the propensity to trust dimension. Three factors, including anthropomorphism (7 items), robot performance (9 items), and effort expectancy (4 items), were associated with the trustworthy design and function dimension. The trustworthy service task and context dimension consisted of three factors: perceived service risk (5 items), robot-service fit (3 items), and facilitating robot-use condition (3 items). Cronbach's alphas of these 11 factors ranged between 0.82 and 0.94, suggesting the internal consistency of each construct. In addition, bivariate correlations among factors ranging from 0.19 to 0.61 indicated that factor collinearity was not an issue in the measurement scale (Cenfetelli & Bassellier, 2009).
4.4. Stage 4: validating the measurement scale

4.4.1. Methodology

The SSRIT scale was specified as a third-order reflective-formative scale. The proposed scale had 50 reflective items at the first order, 11 formative indicators at the second order, and three formative indicators at the third order. To validate the scale, this study collected a second set of data consisting of 452 valid responses via Amazon MTurk. PLS-SEM was chosen to analyze the data since it 1) suits theory building, 2) is more accurate in examining the predictability of a model compared to CB-SEM, and 3) allows the co-existence of reflective and formative measures (Vinzi, 2010).
Table 1. Results of exploratory factor analysis (n = 326).

Factor and Item | Factor Loading | Eigenvalue | % of Variance | Cronbach's Alpha
The propensity to Trust in Robot
Familiarity (F) 2.095 4.19 0.92
F1. I know a lot about AI
social service robots
0.953
F2. I am familiar with AI
social service robots
0.871
F3. I have much knowledge
about AI social service
robots
0.844
F4. I am more familiar than
the average person
regarding AI social
service robots
0.822
Robot Use Self-Efficacy (RUSE) 1.804 3.61 0.82
RUSE1. I know how to
interact with the AI robot
in services
0.846
RUSE2. I could interact
with the AI robot if
someone showed me
how to do it rst
0.778
RUSE3. I could interact
with the AI robot if I
could call someone for
help if I got stuck
0.775
RUSE4. I could interact
with the AI robot if I had
seen someone else using
it before trying it myself
0.714
RUSE5. I could interact
with the AI robot if I had
just the built-in help
facility for assistance
0.679
Social Influence (SI) 1.419 2.84 0.88
SI1. People who are
important to me would
encourage me to interact
with AI social service
robots
0.848
SI2. People whose opinions
that I value would prefer
that I interact with AI
social service robots in
services
0.803
SI3. People who inuence
my behavior would want
me to utilize AI social
service robot
0.754
SI4. People in my social
networks who would
interact with AI social
service robots in services
have a high profile
0.679
Technology Attachment (TA) 1.164 2.33 0.84
TA1. I feel that the AI robot
technology is a part of
me
0.895
TA2. I identify strongly
with the AI robot
technology
0.859
TA3. Using AI robot
technology says a lot
about who I am
0.680
Trust Stance in Technology (TS) 1.011 2.02 0.82
TS1. My typical approach is to trust new technologies until they prove to me that I shouldn't
0.888
TS2. I generally give a technology the benefit of the doubt when I first use it
0.829
TS3. I usually trust a
technology until it gives
me a reason not to trust it
0.780
Trustworthy Robot Function and Design
Anthropomorphism (A) 14.746 29.49 0.94
A1. I personally feel AI
social service robots are:
Artificial — Lifelike
0.935
A2. I personally feel AI
social service robots are:
machinelike —
humanlike
0.920
A3. I personally feel AI
social service robots are:
Fake — Natural
0.910
A4. I personally feel AI
social service robots are:
Unconscious —
Conscious
0.898
A5. I personally feel AI
social service robots are:
Moving rigidly —
Moving elegantly
0.771
A6. AI social service robots
experience emotions
0.718
A7. AI social service robots
have a mind of their own
0.703
Robot Performance (RP) 7.462 14.92 0.91
RP1. AI social service
robots have the
functionalities to serve
me.
0.828
RP2. AI social service
robots provide
competent guidance
during service.
0.785
RP3. AI social service
robots have the features
required to serve me.
0.775
RP4. AI social service
robots have the overall
capabilities to serve me
0.765
RP5. AI social service
robots fulll my need in a
service through different
features and
technologies.
0.750
RP6. AI social service
robots will provide me
with the help I need
0.745
RP7. AI social service
robots provide more
accurate information
than human beings
0.699
RP8. AI social service
robots offers more
accurate services with
less human errors
0.693
RP9. AI social service
robots provide more
consistent service than
human beings
0.619
Effort Expectancy (EE) 1.772 3.54 0.89
EE1. It will take me too
long to learn how to
interact with AI social
service robots
0.875
EE2. Interacting with AI
social service robots will
be unnecessarily difficult
and complex in services
0.855
EE3. Interactions with AI social service robots will take too much of my time
0.747
EE4. AI social service
robots will be
intimidating to me
0.718
Trustworthy Service Task and Context
Perceived Service Risk (PSR) 3.041 6.08 0.93
PSR1. On the whole,
considering all sorts of
factors combined, using
AI social service robots in
service transactions is
risky
0.935
PSR2. In service
transactions, using AI
social service robots is
risky
0.923
PSR3. In service
transactions, using AI
social service robots
exposes you to an overall
risk
0.897
PSR4. In service
transactions, receiving
services provided by AI
social service robots are
dangerous
0.792
PSR5. In service
transactions, receiving
services provided by AI
social service robots
would add great
uncertainty to my service
experience
0.773
Robot-Service Fit (RSF) 1.280 2.56 0.89
RSF1. There is a good fit between what AI social service robots offer me and what I am looking for in services that are offered by these robots
0.918
RSF2. My expectations from a service are fulfilled very well by the service provided by AI social service robots
0.858
RSF3. The services that AI
social service robots
currently hold gives me
just about everything
that I want from these
services
0.835
Facilitating Robot-use Condition (FC) 1.027 2.05 0.90
FC1. In service encounters,
guidance is available to
me in the use of AI social
service robots
0.847
FC2. In service encounters,
specialized instruction
concerning the AI social
service robots is
available to me
0.843
FC3. In service encounters, a specific employee (or group) is available for assistance with AI robots' difficulties
0.841
Kaiser-Meyer-Olkin Measure of Sampling Adequacy = 0.92; p-value of Bartlett's Test of Sphericity < .001; Total Variance Explained = 73.64%
The sample size was sufficient based on the "10-times rule," which suggests that the minimum required sample size for a PLS analysis is ten times the number of formative indicators (Vinzi, 2010; Hair, Hollingsworth, Randolph, & Chong, 2017); with 11 second-order formative indicators, this implies a minimum of 110 responses, well below the 452 collected.
Furthermore, to ensure the validity of the scale, this study adopted a two-step validation approach (van Riel et al., 2017). The first step examined the first-order reflective model via a CFA, which evaluated item reliability, internal consistency, convergent validity, and discriminant validity. In the second step, the third-order scale was evaluated as a whole by utilizing the extended repeated indicators approach in PLS-SEM (Sarstedt et al., 2020). In this step, the higher-order constructs' convergent validity, multicollinearity, indicator weights, absolute indicator contributions, and relative indicator contributions were examined (Cenfetelli & Bassellier, 2009; Sarstedt et al., 2020).
4.4.2. Results

The results of the CFA (Table 2) demonstrated that the factor loadings (minimum: 0.66; maximum: 0.92) were all meaningful and substantial (> 0.60). The composite reliabilities revealed acceptable internal consistency of the measurement items (> 0.70) (Hair et al., 2011). In addition, all average variance extracted (AVE) values exceeded 0.50, pointing to item-level convergent validity. The square roots of the AVEs were higher than the corresponding factor correlations (Table 3), suggesting acceptable discriminant validity. Furthermore, the 11-factor model also displayed an acceptable model fit (Chi-square = 1655.98, df = 1114, RMSEA = 0.03, CFI = 0.96, TLI = 0.96, SRMR = 0.05). These results provided satisfactory evidence that the first-order reflective model was valid.
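As a reference for these cutoffs, composite reliability and AVE can be computed directly from standardized loadings. The helper below is an illustrative sketch (not the authors' code), shown with the familiarity factor's four loadings from Table 2.

```python
import numpy as np

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    error_var = 1 - loadings**2  # item error variance under standardized loadings
    return float(loadings.sum()**2 / (loadings.sum()**2 + error_var.sum()))

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE = mean of the squared standardized loadings."""
    return float((loadings**2).mean())

familiarity = np.array([0.916, 0.810, 0.908, 0.836])  # F1-F4 from Table 2
print(f"CR  = {composite_reliability(familiarity):.2f}")          # threshold: > 0.70
print(f"AVE = {average_variance_extracted(familiarity):.2f}")      # threshold: > 0.50
# Fornell-Larcker check: sqrt(AVE) must exceed the factor's correlations (Table 3)
print(f"sqrt(AVE) = {np.sqrt(average_variance_extracted(familiarity)):.3f}")
```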
To examine the validity of the higher-order constructs, this study examined the higher-order constructs' convergent validity, multicollinearity, indicator weights, absolute indicator contributions, and relative indicator contributions via a PLS-SEM analysis. The higher-order constructs' convergent validity was explored using redundancy analysis (Vinzi, 2010). A three-item reflective scale (1. I like to trust AI social robots in services; 2. I find AI social robots trustworthy in services; 3. I value the trustworthy characteristics of AI social robots in services.) was adopted (Palvia, 2009) to capture customers' general trust toward AI social robots. Bootstrapping with 5000 subsamples was performed on the model. The results revealed an estimated path coefficient of 0.77 (95% CI = 0.72 to 0.81), which was significantly higher than the recommended threshold of 0.70 (Hair et al., 2017), demonstrating the convergent validity of the proposed scale. In addition, the highest VIF among the formative indicators was 1.89, for Social Influence, which was lower than the suggested cutoff of 3 (Hair et al., 2017), indicating that multicollinearity was not an issue among the formative indicators.
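Multicollinearity among formative indicators is typically checked by regressing each indicator's latent score on all the others. The sketch below is illustrative only, assuming a DataFrame `scores` holding the 11 latent variable scores exported from the PLS software; it uses statsmodels to run this kind of VIF check against the cutoff of 3.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def formative_vifs(scores: pd.DataFrame) -> pd.Series:
    """VIF of each formative indicator's latent score against all the others."""
    X = sm.add_constant(scores)  # include a constant so VIFs are centered
    vifs = {col: variance_inflation_factor(X.values, i)
            for i, col in enumerate(X.columns) if col != "const"}
    return pd.Series(vifs).sort_values(ascending=False)

# Flag indicators above the conservative cutoff of 3 (Hair et al., 2017):
# vifs = formative_vifs(scores); print(vifs[vifs > 3])
```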
Next, bootstrapping with 5000 subsamples was conducted to assess the indicator weights, which are presented as path coefficients in the PLS path model (Sarstedt et al., 2020). The results of the PLS-SEM (Fig. 3) revealed substantial relationships between lower-order and higher-order components. More specifically, this study found that anthropomorphism (b = 0.54, p < .001), robot performance (b = 0.51, p < .001), and effort expectancy (b = −0.21, p = .01) were significantly related to trustworthy function and design. Perceived service risk (b = −0.39, p = .01), robot-service fit (b = 0.50, p < .001), and facilitating robot-use condition (b = 0.47, p < .001) were all significantly associated with trustworthy service task and context. Familiarity (b = 0.36, p < .001), robot use self-efficacy (b = 0.21, p < .001), social influence (b = 0.36, p < .001), technology attachment (b = 0.25, p < .001), and trust stance (b = 0.20, p < .001) all significantly predicted propensity to trust. Additionally, propensity to trust, trustworthy function and design, and trustworthy service task and context all determined trust toward social robots at the third order. These results indicated that substantial relative contributions were made by each formative indicator (Cenfetelli & Bassellier, 2009). Absolute contributions were assessed by examining the bivariate correlations between each formative indicator and the construct. The bivariate correlations displayed in Table 4 are all greater than 0.50, indicating that all formative indicators made substantial absolute contributions (Hair et al., 2017). These results clearly supported the construct validity of the proposed scale.
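Cenfetelli and Bassellier's (2009) distinction can be summarized numerically: an indicator's relative contribution is its multiple-regression weight on the construct (estimated jointly with its co-indicators), while its absolute contribution is its simple bivariate correlation (loading). The sketch below illustrates that logic under the assumption that indicator scores and construct scores are available as numpy arrays; it is not the PLS estimation itself.

```python
import numpy as np

def indicator_contributions(X: np.ndarray, construct: np.ndarray):
    """Relative contributions (OLS weights, all indicators together) and
    absolute contributions (bivariate correlations, one indicator at a time)."""
    X_ = np.column_stack([np.ones(len(X)), X])
    weights = np.linalg.lstsq(X_, construct, rcond=None)[0][1:]  # drop intercept
    loadings = np.array([np.corrcoef(x, construct)[0, 1] for x in X.T])
    return weights, loadings

# An indicator can carry a small weight (overshadowed by co-indicators) yet
# still show a loading above .50, which counts as a substantial absolute
# contribution (Hair et al., 2017).
```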
Table 2. Results of confirmatory factor analysis (n = 452).

Factor and Item | Factor Loading | AVE | CR
The propensity to Trust in Robot
Familiarity (F) 0.75 0.85
F1 0.916
F2 0.810
F3 0.908
F4 0.836
Robot Use Self-Efficacy (RUSE) 0.50 0.72
RUSE1 0.704
RUSE2 0.773
RUSE3 0.755
RUSE4 0.894
RUSE5 0.710
Social Influence (SI) 0.67 0.79
SI1 0.836
SI2 0.841
SI3 0.845
SI4 0.746
Technology Attachment (TA) 0.67 0.79
TA1 0.845
TA2 0.85
TA3 0.758
Trust Stance in Technology (TS) 0.63 0.75
TS1 0.818
TS2 0.790
TS3 0.765
Trustworthy Robot Function and Design
Anthropomorphism (A) 0.66 0.78
A1 0.746
A2 0.827
A3 0.873
A4 0.841
A5 0.862
A6 0.812
A7 0.725
Robot Performance (RP) 0.56 0.70
RP1 0.740
RP2 0.662
RP3 0.687
RP4 0.821
RP5 0.769
RP6 0.769
RP7 0.788
RP8 0.781
RP9 0.787
Effort Expectancy (EE) 0.71 0.82
EE1 0.870
EE2 0.888
EE3 0.820
EE4 0.782
Trustworthy Service Task and Context
Perceived Service Risk (PSR) 0.75 0.85
PSR1 0.894
PSR2 0.883
PSR3 0.859
PSR4 0.824
PSR5 0.863
Robot-Service Fit (RSF) 0.69 0.80
RSF1 0.800
RSF2 0.869
RSF3 0.814
Facilitating Robot-use Condition (FC) 0.63 0.75
FC1 0.800
FC2 0.798
FC3 0.783
Notes: Chi-square = 1655.984; df = 1114; RMSEA = 0.033; CFI = 0.959; TLI = 0.955; SRMR = 0.048.
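As a cross-check on Table 2, AVE and composite reliability can be recomputed from standardized loadings with the conventional formulas, AVE = mean of the squared loadings and CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)). A minimal sketch follows; note that published CR values can differ from this formula's output depending on the exact reliability estimator used.

```python
import numpy as np

def ave(loadings):
    """Average variance extracted: mean of the squared standardized loadings."""
    lam = np.asarray(loadings)
    return float(np.mean(lam ** 2))

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = np.asarray(loadings)
    return float(lam.sum() ** 2 / (lam.sum() ** 2 + np.sum(1 - lam ** 2)))

# The Familiarity loadings reported in Table 2.
familiarity = [0.916, 0.810, 0.908, 0.836]
print(round(ave(familiarity), 2))                    # 0.75, matching Table 2
print(round(composite_reliability(familiarity), 2))  # ~0.92 with this formula
```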
4.5. Stage 5: external, concurrent, and predictive validity
4.5.1. Methodology
To further validate the proposed scale, this study recruited a new group of customers via Amazon MTurk (n = 362) to examine customers' social service robot interaction trust in a hotel service setting rather than the general service setting used in the previous stages. A description of social robots and their application in hotel services was provided to the participants at the beginning of the survey. This stage examined three types of validity: external, concurrent, and predictive. External validity requires that a scale work properly in different study contexts. Concurrent validity captures how strongly the proposed scale is associated with well-established scales; for this purpose, the Interpersonal Trust scale (9 items and three dimensions) (McKnight et al., 2002) and the Technology Artifact scale (9 items and three dimensions) (McKnight et al., 2011) were compared with the proposed SSRIT scale. Last, predictive validity examines how well the proposed scale predicts criterion measures. Intention to use (3 items) and perceived satisfaction (4 items) were used to examine the predictive power of the proposed scale, since these two variables are commonly found as outcomes of trust in the contexts of technology use (e.g., Dimitriadis & Kyrezis, 2010; Lu et al., 2016) and hotel services (e.g., Liang et al., 2018).
4.5.2. Results
A PLS-SEM analysis (bootstrap = 5000) was performed. The results indicated that the weights of all formative indicators were significant. Compared to the weights found in the general service setting, perceived service risk in hotel settings (weight in the general setting = -0.39; weight in the hotel setting = -0.44) showed a slightly stronger effect in predicting the higher-order construct, while no differences were found in the weights of the other indicators across study settings. The results of the redundancy analysis revealed a path coefficient of 0.78 (95% CI = 0.72 to 0.82) between the proposed SSRIT scale and the three-item reflective trust scale (Palvia, 2009). These findings demonstrated that the scale worked appropriately in the hotel service setting, pointing to good external validity.

In addition, this study found that the trust measurement developed in this study was highly correlated with trust measured by the Interpersonal Trust scale (r² = 0.78, p < .001) and the Technology Artifact scale (r² = 0.84, p < .001), indicating concurrent validity; the sketch below illustrates this check.
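Operationally, this concurrent-validity check reduces to correlating participant-level scores on the competing scales. A minimal sketch on simulated stand-in scores (all column names hypothetical; in practice each column would be the mean of that scale's items per respondent):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Illustrative stand-in scores for n = 362 participants.
n = 362
base = rng.normal(size=n)
scores = pd.DataFrame({
    "ssrit":         base + rng.normal(scale=0.5, size=n),  # proposed SSRIT composite
    "interpersonal": base + rng.normal(scale=0.6, size=n),
    "tech_artifact": base + rng.normal(scale=0.5, size=n),
})
print(scores.corr())  # off-diagonal entries are the concurrent-validity correlations
```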
Table 3
Estimated correlation matrix for the latent variables.
F RUSE SI TA TS A RP EE PSR RSF FC
F (0.866)
RUSE 0.165 (0.707)
SI 0.679 0.282 (0.819)
TA 0.492 0.302 0.572 (0.819)
TS 0.257 0.398 0.430 0.494 (0.794)
A 0.681 0.071 0.719 0.390 0.278 (0.812)
RP 0.460 0.523 0.541 0.564 0.644 0.426 (0.748)
EE 0.387 0.168 0.421 0.110 0.055 0.543 0.010 (0.843)
PSR 0.366 0.096 0.286 0.121 0.048 0.412 0.012 0.744 (0.866)
RSF 0.520 0.367 0.631 0.536 0.591 0.519 0.830 0.144 0.087 (0.831)
FC 0.513 0.445 0.561 0.515 0.521 0.450 0.736 0.163 0.149 0.778 (0.794)
Notes: Numbers in parentheses are square roots of AVEs.
Familiarity (F), Robot Use Self-Efficacy (RUSE), Social Influence (SI), Technology Attachment (TA), Trust Stance in Technology (TS), Anthropomorphism (A), Robot Performance (RP), Effort Expectancy (EE), Perceived Service Risk (PSR), Robot-Service Fit (RSF), Facilitating Robot-use Condition (FC).
Fig. 3. The results of PLS-SEM
Furthermore, this study found that the proposed SSRIT scale explained a significant portion of the variance in intention to use (R² = 0.40, p < .001), perceived satisfaction (R² = 0.60, p < .001), and the three-item reflective trust scale (R² = 0.64, p < .001). In contrast, the Interpersonal Trust scale performed worse when predicting satisfaction (R² = 0.53, p < .001) and trust (R² = 0.60, p < .001) but better when predicting intention to use (R² = 0.44, p < .001). The Technology Artifact scale likewise showed lower power when predicting satisfaction (R² = 0.50, p < .001) and trust (R² = 0.59, p < .001) and higher power when predicting intention to use (R² = 0.43, p < .001). These results not only supported the predictive validity of the proposed scale but also indicated that the SSRIT scale was more effective at measuring trust in AI social service robots than other well-established scales. A sketch of this R² comparison follows.
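This kind of predictive-validity comparison amounts to fitting a simple regression of each criterion on each scale score and comparing the resulting R² values. A minimal sketch on simulated stand-in data (column names hypothetical; the authors' analysis used PLS-SEM rather than OLS):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 362

# Illustrative stand-in data: two predictor scales, two criteria.
latent = rng.normal(size=n)
data = pd.DataFrame({
    "ssrit":         latent + rng.normal(scale=0.5, size=n),
    "interpersonal": latent + rng.normal(scale=0.7, size=n),
    "intention":     latent + rng.normal(scale=0.8, size=n),
    "satisfaction":  latent + rng.normal(scale=0.6, size=n),
})

for predictor in ("ssrit", "interpersonal"):
    for criterion in ("intention", "satisfaction"):
        X = sm.add_constant(data[[predictor]])        # intercept + scale score
        r2 = sm.OLS(data[criterion], X).fit().rsquared
        print(f"{predictor} -> {criterion}: R^2 = {r2:.2f}")
```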
5. Discussion
Drawing on Li and Zhang's (2005) HCI framework, this study proposes that consumers' trust in interactions with AI social robots during service delivery is formed by three dimensions: propensity to trust in robot, trustworthy robot function and design, and trustworthy service task and context. The empirical results support this proposition, indicating that all three dimensions significantly determine the level of trust. This finding confirms that trust toward an object, especially a technology product, is driven by a person's propensity to trust, or trust tendency (Merritt & Ilgen, 2008; Tussyadiah & Miller, 2019). In addition, as with other technology products, trust toward an AI social robot in a service context is also determined by consumers' trust in the design of and the functions embedded in these devices (Benbasat & Wang, 2005). Furthermore, the results of this study reveal that the characteristics of the service task and context significantly contribute to consumers' trust in interactions with robots, suggesting that the same robot may not be evaluated in the same manner across different service contexts (Chi et al., 2020). These findings illustrate that customers' trust in interactions with AI social robots is developed through a dynamic and highly complex interaction process.
This study nds that a persons propensity to trust in AI social robots
can be measured by ve factors. Familiarity with social robots reduces
peoples uncertainty and increases the accumulation of trust-related
knowledge (Komiak & Benbasat, 2006). Robot use self-efcacy leads
consumers to have a positive perception of robot use (Latikka et al.,
2019). Social inuence promotes the formation of trust externally
through the effects of group norms (Gursoy et al., 2019) and group be-
liefs (Althuizen, 2018). Technology attachment provides a psychological
connection between a person and an AI social robot, promoting the
positive evaluation of a technology product (Perlaviciute & Steg, 2014).
Lastly, trust stance in technology, as a disposition of trust, leads con-
sumers to have a high tendency to trust AI social robots (Tussyadiah and
Miller, 2019).
The trustworthy function and design of an AI social robot is found to be determined by three factors. Anthropomorphism represents the advanced technologies embedded in a robot, enabling human-like interactions in service delivery (Tung & Au, 2018; Xu, 2019) and leading to a higher level of trust. Robot performance measures the robot's functionality, helpfulness, and performance relative to human employees, and it is a primary determinant of trust toward AI social robots in terms of the utility expectation (McKnight et al., 2011). Moreover, effort expectancy measures the perceived psychological effort required for a customer to learn human-robot interaction (Gursoy et al., 2019) and was found to generate significant distrust toward AI social robots in service delivery.
The trustworthy service task and context is measured by three factors. Perceived service risk indicates that consumers' perception of service uncertainty plays a negative role in the formation of service evaluation (Wu & Cheng, 2018), undermining trust in AI social robots. However, a facilitating robot-use condition can compensate for consumers' perceived service risk and help consumers effectively use and interact with AI social robots in service delivery, contributing to a trustworthy service context. Finally, robot-service fit highlights the relationship between robot functions and service expectations, confirming that consumers evaluate robots differently across different service tasks (Chi et al., 2020; Ivanov et al., 2018).
5.1. Theoretical contribution
This study makes critical theoretical contributions by conceptualizing and developing a measurement scale for consumers' trust in AI social robots. This study argues that AI social robots differ from traditional technologies. The traditional view of IT conceives of technology products as tools to enhance productivity or to facilitate certain services. Thus, traditional technology trust concentrates heavily on the utility value of the technology product (McKnight et al., 2011). In contrast, AI social robots in service delivery are used to interact directly with consumers and, therefore, are perceived as social entities (Gursoy et al., 2019; Van Doorn et al., 2017). Consumers' trust in AI social robots is therefore likely to involve trust in IT, trust in humans, and trust in services.

Based on this reasoning, this study draws on the human-computer interaction framework and conceptualizes the interaction-based SSRIT as a belief generated through a human-robot interaction process by considering users' psychological and dispositional characteristics, robots' utility performance and designs, and service tasks and contexts.

In addition, the proposed SSRIT scale treats trust in interaction with AI social robots as a third-order reflective-formative construct. The highest-order construct, trust, is formed cohesively by the second-order indicators (propensity to trust in robot, trustworthy robot function and design, and trustworthy service task and context). Each indicator measures a unique attribute of trust. Moreover, the 11 first-order formative indicators measure different elements of their respective second-order indicators. Lastly, the first-order indicators are measured by 50 reflective items. The proposed scale exhibits significant power in predicting trust in interactions with AI social service robots. Moreover, by conceptualizing SSRIT as an interaction-based trust, this third-order measurement scale offers a comprehensive and holistic view of trust in interaction with AI social robots, as evidenced by its significant explanatory power (Cenfetelli & Bassellier, 2009).
5.2. Managerial implication
The current study focuses heavily on developing a practical assessment tool to measure trust in interactions with AI social robots in service delivery and to explain why customers develop this trust.
Table 4
Absolute contributions of each formative indicator.

Relationship  Correlation  t-value  p-value
A → Trustworthy Function and Design  0.90  91.72  < .001
RP → Trustworthy Function and Design  0.79  29.83  < .001
EE → Trustworthy Function and Design  0.53  2.45  0.014
FC → Trustworthy Service Task and Context  0.85  28.83  < .001
RSF → Trustworthy Service Task and Context  0.84  24.84  < .001
PSR → Trustworthy Service Task and Context  0.50  2.42  0.015
F → Propensity to Trust  0.78  30.16  < .001
RUSE → Propensity to Trust  0.50  8.43  < .001
SI → Propensity to Trust  0.84  55.90  < .001
TA → Propensity to Trust  0.75  33.12  < .001
TS → Propensity to Trust  0.60  13.12  < .001
Trustworthy Function and Design → Trust  0.67  21.28  < .001
Trustworthy Service Task and Context → Trust  0.68  19.13  < .001
Propensity to Trust → Trust  0.92  105.53  < .001

Notes: Familiarity (F), Robot Use Self-Efficacy (RUSE), Social Influence (SI), Technology Attachment (TA), Trust Stance in Technology (TS), Anthropomorphism (A), Robot Performance (RP), Effort Expectancy (EE), Perceived Service Risk (PSR), Robot-Service Fit (RSF), Facilitating Robot-use Condition (FC).
By conceptualizing this trust as interaction-based and validating a multi-dimensional measurement scale, the results of this study highlight the need to distinguish interaction trust from general technology trust. The findings suggest that, in a service context, trust in interactions with AI robots is a more complex phenomenon than technology trust. Service providers are therefore advised to consider customers' trust in interactions from a holistic view. More specifically, this study found that while the utility functions of robots are important, AI social robots are likely to be evaluated as social entities (Gursoy et al., 2019; Van Doorn et al., 2017), and this evaluation tends to differ across service circumstances and users.
Service providers can use the SSRIT scale to identify the drivers of customers' trust in interactions along three elements: customers' propensity to trust, robot function and design, and service task and context. By doing so, service providers can better understand their target market segments, customers' perceptions of their robotic service devices, and the appropriate service environments for robot use. Moreover, the 11 first-order indicators offer managers a comprehensive and informative assessment tool that can be used to diagnose performance gaps of service robots and to develop corrective actions.
The results of this study also reveal a positive impact of anthropomorphism on consumers' trust in robot functions and designs. Even though existing studies have frequently found that the anthropomorphic features of a robot may threaten users' human identity (Gursoy et al., 2019) and reduce willingness to use (Lu et al., 2019), the current study suggests that anthropomorphism is a critical feature for a social robot that is used in social interactions. As mentioned previously, anthropomorphic features signal advanced technology and enable human-like interaction, which is critical for consumers who demand high-touch service delivery. The results of this study further demonstrate that consumers who perceive a higher level of robot anthropomorphism report significantly lower effort expectancy (r² = 0.54, p < .001), suggesting that consumers perceive that interacting with a human-like robot requires less psychological effort. For this reason, service robot designers are recommended to advance the anthropomorphic features of social service robots.
This study also nds that perceived service risk reduces consumers
trust in AI social robots in service delivery. This nding is consistent
with the ndings of many existing trust-risk studies (e.g., Wu & Cheng,
2018; Zhang et al., 2019) and theories (e.g., Mayer et al., 1995). How-
ever, this does not necessarily mean that consumers have a low level of
performance expectations from AI social service robots. In fact, via
semi-structured interviews, this study found that most interviewees
believe that AI social robots can provide more consistent and prompt
services than do human-employees. In high-risk services (e.g., medical
diagnosis), the interviewees indicate an unwillingness to use social ro-
bots mainly due to the uncertainty of who would take the responsibility
for a service failure. Consumers are also concerned that service pro-
viders may shift the responsibility to consumers for service failures
since, unlike human-employees, robots are not likely to take re-
sponsibility for service failures. Therefore, service providers should
carefully consider the responsibility issue of service failure. Further-
more, they should inform consumers in advance when AI service robots
are involved in service delivery.
6. Conclusion
Trust is a crucial determinant of technology acceptance. However, trust toward AI social robots in service delivery has not received much attention from scholars. Through a battery of qualitative and quantitative procedures, this mixed-design study conceptualized trust in AI social robots and developed a reliable measurement scale. The results suggested that trust in AI social robots can be conceptualized as a belief generated through human-robot interaction in service delivery. Through a systematic literature review, a series of semi-structured interviews, and a focus group study, a third-order scale with a reflective first order and formative second and third orders was developed in this study. The scale indicates that trust in AI social robots is measured by three indicators: propensity to trust, trustworthy function and design, and trustworthy service task and context. Propensity to trust can be measured by familiarity, robot use self-efficacy, social influence, technology attachment, and trust stance in technology. Trustworthy function and design is predicted by anthropomorphism, robot performance, and effort expectancy. Trustworthy service task and context is determined by perceived service risk, robot-service fit, and facilitating robot-use condition. Afterward, using a multiple-stage scale validation process and more than a thousand responses, this study refined the scale and demonstrated its convergent, discriminant, external, concurrent, and predictive validities.

This study is not free of limitations. It used a mental simulation approach, asking customers to imagine interacting with a social robot in service delivery, since the majority of them had not had any direct robot-use experience. Even though the mental simulation approach is considered appropriate for studying events that are likely to occur in the future (Taylor & Schneider, 1989) and has been commonly used in similar studies (e.g., Lu et al., 2019), a field study would be useful to further demonstrate the validity of the proposed scale and to examine the outcomes of SSRIT.
Author contribution
Oscar Hengxuan Chi: Writing - original draft; Literature review; Conceptualization; Methodology; Formal analysis; Discussion. Shizhen Jia: Theory development; Literature review; Validation; Writing - review & editing. Yafang Li: Introduction; Literature review; Writing - review & editing. Dogan Gursoy: Project administration; Writing - review & editing; Resources.
Appendix A. Item Description and Item Number
Factor Name & Item Description Item No. Supporting Sources
Propensity to Trust
Familiarity (F)
I know a lot about AI social service robots F1 Interview
I am familiar with AI social service robots F2 Bhattacherjee (2002)
I have much knowledge about AI social service robots F3 Interview
I am more familiar than the average person regarding AI social service robots F4 Bhattacherjee (2002)
I often spend time gathering information about AI social service robots Dropped Bhattacherjee (2002)
Robot Use Self-Efficacy (RUSE)
I know how to interact with the AI robot in services RUSE1 Interview
I could interact with the AI robot if someone showed me how to do it first RUSE2 Compeau & Higgins (1995)
I could interact with the AI robot if I could call someone for help if I got stuck RUSE3 (Compeau & Higgins, 1995)
I could interact with the AI robot if I had seen someone else using it before trying it myself RUSE4 (Compeau & Higgins, 1995)
I could interact with the AI robot if I had just the built-in help facility for assistance RUSE5 (Compeau & Higgins, 1995)
I could interact with the AI robot if there was no one around to tell me what to do Dropped Compeau & Higgins (1995)
Social Influence (SI)
People who are important to me would encourage me to interact with AI social service robots SI1 Gursoy et al., 2019
People whose opinions that I value would prefer that I interact with AI social service robots in services SI2 Gursoy et al., 2019
People who inuence my behavior would want me to utilize AI social service robot SI3 Gursoy et al., 2019
People in my social networks who would interact with AI social service robots in services have a high profile SI4 Gursoy et al. (2019)
Interacting with AI social service robots reflects a status symbol in my social networks Dropped Gursoy et al. (2019)
People in my social networks who would be interacting with AI social service robots have more prestige than those who don't Dropped Gursoy et al. (2019)
Technology Attachment (TA)
I feel that the AI robot technology is a part of me TA1 Wu and Cheng (2018)
I identify strongly with the AI robot technology TA2 Wu and Cheng (2018)
Using AI robot technology says a lot about who I am TA3 Wu and Cheng (2018)
Trust Stance in Technology (TS)
My typical approach is to trust new technologies until they prove to me that I shouldn't TS1 (McKnight et al., 2011)
I generally give a technology the benefit of the doubt when I first use it TS2 (McKnight et al., 2011)
I usually trust a technology until it gives me a reason not to trust it TS3 (McKnight et al., 2011)
Trustworthy Function and Design
Anthropomorphism (A)
I personally feel AI social service robots are: Artificial — Lifelike A1 Bartneck, Kulić, Croft, & Zoghbi (2008)
I personally feel AI social service robots are: Machinelike — Humanlike A2 Bartneck et al. (2008)
I personally feel AI social service robots are: Fake — Natural A3 Bartneck et al. (2008)
I personally feel AI social service robots are: Unconscious — Conscious A4 Bartneck et al. (2008)
I personally feel AI social service robots are: Moving rigidly — Moving elegantly A5 Bartneck et al. (2008)
AI social service robots experience emotions A6 Lu et al. (2019)
AI social service robots have a mind of their own A7 Lu et al. (2019)
AI social service robots have consciousness Dropped Lu et al. (2019)
AI social service robots have their own free will Dropped Lu et al. (2019)
AI social service robots have intentions Dropped Lu et al. (2019)
Robot Performance (RP)
AI social service robots have the functionalities to serve me. RP1 Interview
AI social service robots provide competent guidance during service. RP2 (Lankton, McKnight, & Thatcher, 2014; McKnight et al., 2011)
AI social service robots have the features required to serve me. RP3 Lankton et al. (2014); McKnight et al. (2011)
AI social service robots have the overall capabilities to serve me RP4 Interview
AI social service robots fulfill my need in a service through different features and technologies. RP5 Interview
AI social service robots will provide me with the help I need RP6 Lankton et al. (2014); McKnight et al. (2011)
AI social service robots provide more accurate information than human beings RP7 Gursoy et al. (2019)
AI social service robots offer more accurate services with fewer human errors RP8 Interview
AI social service robots provide more consistent service than human beings RP9 Gursoy et al. (2019)
AI social service robots are more accurate than human beings Dropped Gursoy et al. (2019)
AI social service robots provide more customized service than human beings Dropped Gursoy et al. (2019)
AI social service robots provide error-free service Dropped Interview
AI social service robots will not malfunction on me Dropped Lankton et al. (2014); McKnight et al. (2011)
AI social robots enable me to receive services more quickly Dropped Interview
Effort Expectancy (EE)
It will take me too long to learn how to interact with AI social service robots EE1 Gursoy et al. (2019)
Interacting with AI social service robots will be unnecessarily difficult and complex in services EE2 Interview
Interactions with AI social service robots will take too much of my time EE3 Gursoy et al. (2019)
AI social service robots will be intimidating to me EE4 Gursoy et al. (2019)
Trustworthy Service Task and Context
Perceived Service Risk (PSR)
On the whole, considering all sorts of factors combined, using AI social service robots in service transactions is
risky
PSR1 Featherman & Pavlou (2003)
In service transactions, using AI social service robots is risky PSR2 Interview
In service transactions, using AI social service robots exposes you to an overall risk PSR3 (Featherman & Pavlou, 2003)
In service transactions, receiving services provided by AI social service robots is dangerous PSR4 Featherman & Pavlou (2003)
In service transactions, receiving services provided by AI social service robots would add great uncertainty to
my service experience
PSR5 Interview
Robot-Service Fit (RSF)
There is a good fit between what AI social service robots offer me and what I am looking for in services that are offered by these robots RSF1 Cable & DeRue (2002)
My expectations from a service are fulfilled very well by the service provided by AI social service robots RSF2 Interview
The services that AI social service robots currently provide give me just about everything that I want from these services RSF3 Cable & DeRue (2002)
Facilitating Robot-use Condition (FC)
In service encounters, guidance is available to me in the use of AI social service robots FC1 (Thompson, Higgins, & Howell, 1991; van Doorn et al., 2016)
In service encounters, specialized instruction concerning the AI social service robots is available to me FC2 (Thompson et al., 1991; van Doorn et al., 2016)
In service encounters, a specific employee (or group) is available for assistance with AI robot difficulties FC3 (Thompson et al., 1991; van Doorn et al., 2016)
Appendix B. Demographic Prole
Stage 3
Distribution (%)
(n =326)
Stage 4
Distribution (%)
(n =452)
Stage 5
Distribution (%)
(n =362)
Gender Male 36.2 38.9 38.7
Female 63.5 60.8 61.0
Other 0.3 0.2 0.3
Age 18–25 23.3 22.1 21.8
26–34 30.4 33.0 31.5
35–54 32.8 32.1 32.3
55–64 8.9 9.1 9.7
65 or over 4.6 3.8 4.7
Marital Status Single 33.7 36.1 35.4
Married 54.6 51.5 52.2
Divorced 3.4 4.2 4.7
Widowed 1.8 2.2 2.8
Live together 6.4 6.0 5.0
Occupation Student 12.3 11.1 10.8
Professional 36.9 37.9 38.1
Managerial 23.1 21.6 20.6
Sales 7.7 9.4 10.0
Homemaker 7.4 8.2 9.2
Other 12.6 11.8 11.4
Ethnicity White 74.5 72.1 73.4
Black or African American 7.7 9.1 8.6
American Indian or Alaska Native 0.6 0.9 0.8
Asian 9.2 10.0 9.4
Native Hawaiian or Pacific Islander 8.0 8.0 7.8
Other 0.0 0.0 0.0
Education Less than high school 0.6 0.7 0.6
High school graduate 12.0 10.9 10.2
Some college 15.7 16.2 16.6
2 year degree 8.9 8.9 10.2
4 year degree 44.3 44.6 44.6
Professional degree 17.2 18.0 16.6
Doctorate 1.2 0.9 1.1
Annual Income Less than $20,000 10.7 11.1 11.0
$20,000 - $39,999 23.9 24.3 23.8
$40,000 - $59,999 28.2 26.5 26.5
$60,000 - $79,999 15.0 15.5 15.5
$80,000 - $99,999 9.8 11.3 10.8
More than $100,000 12.3 11.3 12.4
Appendix C. Summary of the Development Process of the SSRIT Scale
References
Ai, J., Chi, O., & Ouyang, Z. (2019). Categorizing peer-to-peer review site features and
examining their impacts on room sales. Journal of Hospitality Marketing &
Management, 28(7), 862881. https://doi.org/10.1080/19368623.2019.1568341
Althuizen, N. (2018). Using structural technology acceptance models to segment
intended users of a new technology: Propositions and an empirical illustration.
Information Systems Journal, 28(5), 879904. https://doi.org/10.1111/isj.12172
Aziz, A. A., Moganan, F. F. M., Ismail, A., & Lokman, A. M. (2015). Autistic children's kansei responses towards humanoid-robot as teaching mediator. Procedia Computer Science, 76, 488–493.
Bandura, A. (2012). Self-efcacy. Freeman.
Ba, S., & Pavlou, P. (2002). Evidence of the effect of trust building technology in
electronic markets: Price premiums and buyer behavior. MIS Quarterly, 26(3),
243268. https://doi.org/10.2307/4132332
Barsalou, L. (2008). Grounded cognition. Annual Review of Psychology, 59(1), 617645.
https://doi.org/10.1146/annurev.psych.59.103006.093639
Bartneck, C., Kulić, D., Croft, E., & Zoghbi, S. (2008). Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. International Journal of Social Robotics, 1(1), 71–81. https://doi.org/10.1007/s12369-008-0001-3
Bedaf, S., Marti, P., & De Witte, L. (2017). What are the preferred characteristics of a
service robot for the elderly? A multi-country focus group study with older adults
and caregivers. Assistive Technology, 31(3), 147157. https://doi.org/10.1080/
10400435.2017.1402390
Benbasat, I., & Wang, W. (2005). Trust in and adoption of online recommendation
agents. Journal of the Association for Information Systems, 6(3), 72101. https://doi.
org/10.17705/1jais.00065
Benedicktus, R., Brady, M., Darke, P., & Voorhees, C. (2010). Conveying trustworthiness
to online consumers: Reactions to consensus, physical store presence, brand
familiarity, and generalized suspicion. Journal of Retailing, 86(4), 322335. https://
doi.org/10.1016/j.jretai.2010.04.002
Bhattacherjee, A. (2002). Individual trust in online firms: Scale development and initial test. Journal of Management Information Systems, 19(1), 211–241. https://doi.org/10.1080/07421222.2002.11045715
Billings, D. R., Schaefer, K. E., Chen, J. Y., & Hancock, P. A. (2012). Human-robot
interaction: Developing trust in robots. Proceedings of the seventh annual ACM/IEEE
international conference on Human-Robot Interaction, 109110.
Bowen, J., & Morosan, C. (2018). Beware hospitality industry: The robots are coming.
Worldwide Hospitality and Tourism Themes, 10(6), 726733. https://doi.org/10.1108/
whatt-07-2018-0045
Cao, Y., Li, X., DiPietro, R., & So, K. (2019). The creation of memorable dining
experiences: Formative index construction. International Journal of Hospitality
Management, 82, 308317. https://doi.org/10.1016/j.ijhm.2018.10.010
Castaño, R., Sujan, M., Kacker, M., & Sujan, H. (2008). Managing consumer uncertainty in the adoption of new products: Temporal distance and mental simulation. Journal of Marketing Research, 45(3), 320–336. https://doi.org/10.1509/jmkr.45.3.320
Cable, D. M., & DeRue, D. S. (2002). The convergent and discriminant validity of
subjective t perceptions. Journal of Applied Psychology, 87(5), 875884. https://doi.
org/10.1037/0021-9010.87.5.875
Cenfetelli, & Bassellier. (2009). Interpretation of formative measurement in information
systems research. MIS Quarterly, 33(4), 689707. https://doi.org/10.2307/
20650323
Chang, S., Liu, A., & Shen, W. (2017). User trust in social networking services: A comparison of Facebook and LinkedIn. Computers in Human Behavior, 69, 207–217. https://doi.org/10.1016/j.chb.2016.12.013
Chan, A., & Tung, V. (2019). Examining the effects of robotic service on brand
experience: The moderating role of hotel segment. Journal of Travel & Tourism
Marketing, 36(4), 458468. https://doi.org/10.1080/10548408.2019.1568953
Chaouali, W., Ben Yahia, I., & Souiden, N. (2016). The interplay of counter-conformity motivation, social influence, and trust in customers' intention to adopt Internet banking services: The case of an emerging country. Journal of Retailing and Consumer Services, 28, 209–218. https://doi.org/10.1016/j.jretconser.2015.10.007
Cook, J., & Wall, T. (1980). New work attitude measures of trust, organizational commitment and personal need non-fulfilment. Journal of Occupational Psychology, 53(1), 39–52. https://doi.org/10.1111/j.2044-8325.1980.tb00005.x
Colquitt, J. A., & Rodell, J. B. (2011). Justice, trust, and trustworthiness: A longitudinal
analysis integrating three theoretical perspectives. Academy of Management Journal,
54(6), 11831206. https://doi.org/10.5465/amj.2007.0572
Chi, O. H., Denton, G., & Gursoy, D. (2020). Artificially intelligent device use in service delivery: A systematic review, synthesis, and research agenda. Journal of Hospitality Marketing & Management, 29(7), 757–786. https://doi.org/10.1080/19368623.2020.1721394
Compeau, D. R., & Higgins, C. A. (1995). Computer self-efficacy: Development of a measure and initial test. MIS Quarterly, 19(2), 189. https://doi.org/10.2307/249688
Dedeoglu, B. B., Bilgihan, A., Ye, B. H., Buonincontri, P., & Okumus, F. (2018). The impact of servicescape on hedonic value and behavioral intentions: The importance of previous experience. International Journal of Hospitality Management, 72, 10–20.
Dickinger, A., & Stangl, B. (2013). Website performance and behavioral consequences: A
formative measurement approach. Journal of Business Research, 66(6), 771777.
https://doi.org/10.1016/j.jbusres.2011.09.017
Dimitriadis, S., & Kyrezis, N. (2010). Linking trust to use intention for technology-
enabled bank channels: The role of trusting intentions. Psychology and Marketing, 27
(8), 799820. https://doi.org/10.1002/mar.20358
Diamantopoulos, Adamantios., & Winklhofer, Heidi. M. (2001). Index Construction with
Formative Indicators: An Alternative to Scale Development. Journal of Marketing
Research, 38(2), 269277.
van Doorn, J., Mende, M., Noble, S. M., Hulland, J., Ostrom, A. L., Grewal, D., &
Petersen, J. A. (2016). Domo arigato mr. Roboto. Journal of Service Research, 20(1),
4358. https://doi.org/10.1177/1094670516679272
Ebermann, C., Piccinini, E., Busse, S., Leonhardt, D., & Kolbe, L. M. (2016). What determines the adoption of digital innovations by digital natives? The role of motivational affordances. In Thirty seventh international conference on information systems (ICIS). Dublin: AIS.
Elder, R. S., & Krishna, A. (2012). The "visual depiction effect" in advertising: Facilitating embodied mental simulation through product orientation. Journal of Consumer Research, 38(6), 988–1003. https://doi.org/10.1086/661531
Escalas, J. E. (2004). Imagine yourself in the product: Mental simulation, narrative transportation, and persuasion. Journal of Advertising, 33(2), 37–48. https://doi.org/10.1080/00913367.2004.10639163
Fang, Y., Qureshi, I., Sun, H., McCole, P., Ramsey, E., & Lim, K. H. (2014). Trust,
satisfaction, and online repurchase intention: The moderating role of perceived
effectiveness of e-Commerce institutional mechanisms. MIS Quarterly, 38(2),
407427. https://doi.org/10.25300/misq/2014/38.2.04
Fan, A., Wu, L., Miao, L., & Mattila, A. S. (2019). When does technology anthropomorphism help alleviate customer dissatisfaction after a service failure? The moderating role of consumer technology self-efficacy and interdependent self-construal. Journal of Hospitality Marketing & Management, 29(3), 269–290. https://doi.org/10.1080/19368623.2019.1639095
Fortunati, L., Sarrica, M., Ferrin, G., Brondi, S., & Honsell, F. (2018). Social robots as
cultural objects: The sixth dimension of dynamicity? The Information Society, 34(3),
141152. https://doi.org/10.1080/01972243.2018.1444253
Featherman, M. S., & Pavlou, P. A. (2003). Predicting e-services adoption: A perceived risk facets perspective. International Journal of Human-Computer Studies, 59(4), 451–474. https://doi.org/10.1016/s1071-5819(03)00111-3
Gefen, D., Karahanna, E., & Straub, D. W. (2003). Trust and TAM in online shopping: An
integrated model. MIS Quarterly, 27(1), 5190. https://doi.org/10.2307/30036519
Gillath, O., Ai, T., Branicky, M., Keshmiri, S., Davison, R., & Spaulding, R. (2021). Attachment and trust in artificial intelligence. Computers in Human Behavior, 115, 106607. https://doi.org/10.1016/j.chb.2020.106607
Gursoy, D., Chi, C. G., & Chi, O. H. (2020). COVID-19 study 2 report: Restaurant and hotel industry: Restaurant and hotel customers' sentiment analysis. Would they come back? If they would, WHEN? (Report No. 2). Carson College of Business, Washington State University.
Gursoy, D., Chi, O. H., Lu, L., & Nunkoo, R. (2019). Consumers acceptance of artificially intelligent (AI) device use in service delivery. International Journal of Information Management, 49, 157–169. https://doi.org/10.1016/j.ijinfomgt.2019.03.008
Hair, J. F., Ringle, C. M., & Sarstedt, M. (2011). PLS-SEM: Indeed a silver bullet. Journal
of Marketing Theory and Practice, 19(2), 139152. https://doi.org/10.2753/
mtp1069-6679190202
Hair, J., Hollingsworth, C. L., Randolph, A. B., & Chong, A. Y. L. (2017). An updated and
expanded assessment of PLS-SEM in information systems research. Industrial
Management +Data Systems, 117(3), 442458. https://doi.org/10.1108/IMDS-04-
2016-0130
Huang, M., & Rust, R. T. (2018). Artificial intelligence in service. Journal of Service Research, 21(2), 155–172. https://doi.org/10.1177/1094670517752459
Hyken, S. (2017). Half of people who encounter artificial intelligence don't even realize it. Forbes. https://www.forbes.com/sites/shephyken/2017/06/10/half-of-people-who-encounter-artificial-intelligence-dont-even-realize-it/#6b6c7163745f
Ivanov, S., Webster, C., & Garenko, A. (2018). Young Russian adults' attitudes towards the potential use of robots in hotels. Technology in Society, 55, 24–32. https://doi.org/10.1016/j.techsoc.2018.06.004
Jarvis, C., MacKenzie, S., & Podsakoff, P. (2003). A critical review of construct indicators and measurement model misspecification in marketing and consumer research. Journal of Consumer Research, 30(2), 199–218. https://doi.org/10.1086/376806
Jin, N., Line, N. D., & Merkebu, J. (2015). The impact of brand prestige on trust,
perceived risk, satisfaction, and loyalty in upscale restaurants. Journal of Hospitality
Marketing & Management, 25(5), 523546. https://doi.org/10.1080/
19368623.2015.1063469
Kähr, A., Nyffenegger, B., Krohmer, H., & Hoyer, W. D. (2016). When hostile consumers wreak havoc on your brand: The phenomenon of consumer brand sabotage. Journal of Marketing, 80(3), 25–41. https://doi.org/10.1509/jm.15.0006
Kaptelinin, V., & Nardi, B. (2017). Activity theory as a framework for human-technology
interaction research. Mind, Culture and Activity, 25(1), 35. https://doi.org/10.1080/
10749039.2017.1393089
Kikuchi, M., Watanabe, Y., & Yamagishi, T. (1997). Judgment accuracy of others' trustworthiness and general trust: An experimental study. Japanese Journal of Experimental Social Psychology, 37(1), 23–36. https://doi.org/10.2130/jjesp.37.23
Kim, A., Jung, Y., Lee, K., & Han, J. (2013). The effects of familiarity and robot gesture
on user acceptance of information. In Proceedings of the 8th ACM/IEEE international
conference on human-robot interaction (pp. 159160). IEEE Press.
Kim, G., & Koo, H. (2016). The causal relationship between risk and trust in the online
marketplace: A bidirectional perspective. Computers in Human Behavior, 55,
10201029. https://doi.org/10.1016/j.chb.2015.11.005
Komiak, & Benbasat. (2006). The effects of personalization and familiarity on trust and
adoption of recommendation agents. MIS Quarterly, 30(4), 941960. https://doi.
org/10.2307/25148760
Kotler, P. (1997). Marketing management: Analysis, planning, implementation, and control
(9th ed.). Englewood Cliffs, NJ: Prentice-Hall.
Lankton, N. K., & McKnight, D. H. (2011). What does it mean to trust Facebook? ACM
SIGMIS - Data Base: The DATABASE for Advances in Information Systems, 42(2),
3254. https://doi.org/10.1145/1989098.1989101
Latané, B. (1981). The psychology of social impact. American Psychologist, 36(4), 343–356. https://doi.org/10.1037/0003-066X.36.4.343
Latikka, R., Turja, T., & Oksanen, A. (2019). Self-efficacy and acceptance of robots. Computers in Human Behavior, 93, 157–163. https://doi.org/10.1016/j.chb.2018.12.017
Lankton, N., McKnight, D. H., & Thatcher, J. B. (2014). Incorporating trust-in-technology into expectation disconfirmation theory. The Journal of Strategic Information Systems, 23(2), 128–145. https://doi.org/10.1016/j.jsis.2013.09.001
Lee, N., Kim, J., Kim, E., & Kwon, O. (2017). The influence of politeness behavior on user compliance with social robots in a healthcare service setting. International Journal of Social Robotics, 9(5), 727–743. https://doi.org/10.1007/s12369-017-0420-0
Lee, W. H., Lin, C. W., & Shih, K. H. (2018). A technology acceptance model for the
perception of restaurant service robots for trust, interactivity, and output quality.
International Journal of Mobile Communications, 16(4), 361376. https://doi.org/
10.1504/ijmc.2018.092666
Liang, L. J., Choi, H. C., & Joppe, M. (2018). Exploring the relationship between
satisfaction, trust and switching intention, repurchase intention in the context of
Airbnb. International Journal of Hospitality Management, 69, 4148. https://doi.org/
10.1016/j.ijhm.2017.10.015
Lin, H., Chi, O. H., & Gursoy, D. (2019). Antecedents of customers' acceptance of artificially intelligent robotic device use in hospitality services. Journal of Hospitality Marketing & Management, 29(5), 530–549. https://doi.org/10.1080/19368623.2020.1685053
Li, L., & Zhang, P. (2005). The intellectual development of human-computer interaction
research: A critical assessment of the MIS literature (1990-2002). Journal of the
Association for Information Systems, 6(11), 227292. https://doi.org/10.17705/
1jais.00070
Lu, L., Cai, R., & Gursoy, D. (2019). Developing and validating a service robot integration
willingness scale. International Journal of Hospitality Management, 80, 3651. https://
doi.org/10.1016/j.ijhm.2019.01.005
Lu, B., Fan, W., & Zhou, M. (2016). Social presence, trust, and social commerce purchase
intention: An empirical research. Computers in Human Behavior, 56, 225237.
https://doi.org/10.1016/j.chb.2015.11.057
Lu, J., Yu, C.-S., & Liu, C. (2005). Facilitating conditions, wireless trust and adoption
intention. Journal of Computer Information Systems, 46(1), 1724.
MacKenzie, S. B., Podsakoff, P. M., & Podsakoff, N. P. (2011). Construct measurement and validation procedures in MIS and behavioral research: Integrating new and existing techniques. MIS Quarterly, 35(2), 293–334. https://doi.org/10.2307/23044045
Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of
organizational trust. Academy of Management Review, 20(3), 709734. https://doi.
org/10.5465/amr.1995.9508080335
McKnight, D., Carter, M., Thatcher, J., & Clay, P. (2011). Trust in a specific technology. ACM Transactions on Management Information Systems, 2(2), 1–25. https://doi.org/10.1145/1985347.1985353
McAfee, A., & Brynjolfsson, E. (2016). Human work in the robotic future: Policy for the age of automation. Foreign Affairs, 95(4), 139–150.
McKnight, D. H., Choudhury, V., & Kacmar, C. (2002). Developing and validating trust
measures for e-Commerce: An integrative typology. Information Systems Research, 13
(3), 334359. https://doi.org/10.1287/isre.13.3.334.81
McKnight, D. H., Cummings, L. L., & Chervany, N. L. (1998). Initial trust formation in
new organizational relationships. Academy of Management Review, 23(3), 473490.
https://doi.org/10.2307/259290
Merritt, S. M., & Ilgen, D. R. (2008). Not all trust is created equal: Dispositional and
history-based trust in human-automation interactions. Human Factors: The Journal of
the Human Factors and Ergonomics Society, 50(2), 194210. https://doi.org/10.1518/
001872008x288574
Nadarzynski, T., Miles, O., Cowie, A., & Ridge, D. (2019). Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: A mixed-methods study. Digital Health, 5, 2055207619871808. https://doi.org/10.1177/2055207619871808
Nardi, B. A. (1996). Context and consciousness: Activity theory and human-computer
interaction. MIT Press.
Palvia, P. (2009). The role of trust in e-commerce relational exchange: A unied model.
Information & Management, 46(4), 213220. https://doi.org/10.1016/j.
im.2009.02.003
Pan, Y., Okada, H., Uchiyama, T., & Suzuki, K. (2015). On the reaction to robot's speech in a hotel public space. International Journal of Social Robotics, 7(5), 911–920. https://doi.org/10.1007/s12369-015-0320-0
Perlaviciute, G., & Steg, L. (2014). Contextual and psychological factors shaping
evaluations and acceptability of energy alternatives: Integrated review and research
agenda. Renewable and Sustainable Energy Reviews, 35, 361381. https://doi.org/
10.1016/j.rser.2014.04.003
Petter, S., Straub, D., & Rai, A. (2007). Specifying formative constructs in information systems research. MIS Quarterly, 31(4), 623–656. https://doi.org/10.2307/25148814
van Pinxteren, M. M., Wetzels, R. W., Rüger, J., Pluymaekers, M., & Wetzels, M. (2019).
Trust in humanoid robots: Implications for services marketing. Journal of Services
Marketing, 33(4), 507518. https://doi.org/10.1108/jsm-01-2018-0045
Qiu, H., Li, M., Shu, B., & Bai, B. (2019). Enhancing hospitality experience with service
robots: The mediating role of rapport building. Journal of Hospitality Marketing &
Management, 29(3), 247268. https://doi.org/10.1080/19368623.2019.1645073
Qureshi, I., & Compeau, D. (2009). Assessing between-group differences in information
systems research: A comparison of covariance- and component-based SEM. MIS
Quarterly, 33(1), 197214. https://doi.org/10.2307/20650285
Rodriguez-Lizundia, E., Marcos, S., Zalama, E., Gómez-García-Bermejo, J., & Gordaliza, A. (2015). A bellboy robot: Study of the effects of robot behaviour on user engagement and comfort. International Journal of Human-Computer Studies, 82, 83–95.
van Riel, A. C., Henseler, J., Kemény, I., & Sasovova, Z. (2017). Estimating hierarchical constructs using consistent partial least squares. Industrial Management & Data Systems, 117(3), 459–477. https://doi.org/10.1108/imds-07-2016-0286
Rzepka, C., & Berger, B. (2018). User interaction with AI-enabled systems: A systematic review of IS research. In International conference on information systems (ICIS), San Francisco, California, Volume 39.
Salam, A. F., Iyer, L., Palvia, P., & Singh, R. (2005). Trust in e-commerce. Communications
of the ACM, 48(2), 7277.
Sharma, V. M., & Klein, A. (2020). Consumer perceived value, involvement, trust,
susceptibility to interpersonal inuence, and intention to participate in online group
buying. Journal of Retailing and Consumer Services, 52, 101946. https://doi.org/
10.1016/j.jretconser.2019.101946
Söllner, M., Benbasat, I., Gefen, D., Leimeister, J. M., & Pavlou, P. A. (2016). Trust. MIS Quarterly Research Curations, 1(1).
Sarstedt, M., Ringle, C. M., Cheah, J.-H., Ting, H., Moisescu, O. I., & Radomir, L. (2020).
Structural model robustness checks in PLS-SEM. Tourism Economics: the Business and
Finance of Tourism and Recreation, 26(4), 531554. https://doi.org/10.1177/
1354816618823921
Suh, K., & Suh. (2011). What if your avatar looks like you? Dual-congruity perspectives
for avatar use. MIS Quarterly, 35(3), 711729. https://doi.org/10.2307/23042805
Taheri, B., Jafari, A., & O'Gorman, K. (2014). Keeping your audience: Presenting a visitor engagement scale. Tourism Management, 42, 321–329. https://doi.org/10.1016/j.tourman.2013.12.011
Taylor, S. E., & Schneider, S. K. (1989). Coping and the simulation of events. Social
Cognition, 7(2), 174194. https://doi.org/10.1521/soco.1989.7.2.174
Tung, V. W. S., & Au, N. (2018). Exploring customer experiences with robotics in
hospitality. International Journal of Contemporary Hospitality Management, 30(7),
26802697.
Thompson, R. L., Higgins, C. A., & Howell, J. M. (1991). Personal Computing: Toward a
Conceptual Model of Utilization. MIS Quarterly, 15(1), 125143. https://doi.org/
10.2307/249443
Turja, T., Rantanen, T., & Oksanen, A. (2017). Robot use self-efficacy in healthcare work (RUSH): Development and validation of a new measure. AI & Society, 34(1), 137–143. https://doi.org/10.1007/s00146-017-0751-2
Tussyadiah, I., & Miller, G. (2019). Nudged by a robot: Responses to agency and
feedback. Annals of Tourism Research, 78, 102752. https://doi.org/10.1016/j.
annals.2019.102752
Venkatesh, Brown, Maruping, & Bala. (2008). Predicting different conceptualizations of
system use: The competing roles of behavioral intention, facilitating conditions, and
behavioral expectation. MIS Quarterly, 32(3), 483502. https://doi.org/10.2307/
25148853
Vinzi, V. E., Chin, W. W., Henseler, J., & Wang, H. (2010). Handbook of partial least
squares: Concepts, methods and applications. Springer Science & Business Media.
Weitz, K., Schiller, D., Schlagowski, R., Huber, T., & André, E. (2019). "Do you trust me?" Increasing user-trust by integrating virtual agents in explainable AI interaction design. In Proceedings of the 19th ACM international conference on intelligent virtual agents (pp. 7–9).
West, A., Clifford, J., & Atkinson, D. (2018). Alexa, build me a brand: An investigation into the impact of artificial intelligence on branding. Business and Management Review, 9(3), 321–330.
Wu, H., & Cheng, C. (2018). Relationships between technology attachment, experiential
relationship quality, experiential risk and experiential sharing intentions in a smart
hotel. Journal of Hospitality and Tourism Management, 37, 4258. https://doi.org/
10.1016/j.jhtm.2018.09.003
Xu, K. (2019). First encounter with robot Alpha: How individual differences interact with vocal and kinetic cues in users' social responses. New Media & Society, 21(11–12), 2522–2547. https://doi.org/10.1177/1461444819851479
Yu, C. (2019). Humanlike robots as employees in the hotel industry: Thematic content
analysis of online reviews. Journal of Hospitality Marketing & Management, 29(1),
2238. https://doi.org/10.1080/19368623.2019.1592733
Zhang, J., Ghorbani, A. A., & Cohen, R. (2007). A familiarity-based trust model for
effective selection of sellers in multiagent e-Commerce systems. International Journal
of Information Security, 6(5), 333344. https://doi.org/10.1007/s10207-007-0025-y
Zhang, T., Tao, D., Qu, X., Zhang, X., Lin, R., & Zhang, W. (2019). The roles of initial trust and perceived risk in public's acceptance of automated vehicles. Transportation Research Part C: Emerging Technologies, 98, 207–220. https://doi.org/10.1016/j.trc.2018.11.018
Zhou, T. (2012). Understanding users' initial trust in mobile banking: An elaboration likelihood perspective. Computers in Human Behavior, 28(4), 1518–1525. https://doi.org/10.1016/j.chb.2012.03.021
... Wang et al., 2023). Studies underscore various factors influencing the adoption of new technology, including trust (Chi et al., 2021;Seo & Lee, 2021), perceived usefulness (Ariza-Montes et al., 2023;Tavitiyaman et al., 2022), perceived ease of use , perceived valuecocreation (Demir & Demir, 2023;Sthapit et al., 2023), perceived intelligence , anthropomorphism (Cai et al., 2022;Roy et al., 2020), social influence (Ribeiro et al., 2022;Solakis et al., 2022) and perceived risk (Habib & Hamadneh, 2021;Ribeiro et al., 2022). Notably, research indicate that emotional responses elicited through AI interactions (e.g. ...
... This connection can significantly influence tourists' acceptance and willingness to use the AI model as a travel agent. While previous tourism research has explored human-robot interactions (Chi et al., 2021;Murphy et al., 2019), limited research has been directed towards elucidating how tourists interact with AI-powered language models like ChatGPT that lack physical forms. Therefore, more work is needed to better understand their emergence and impact on tourists' intentions to accept and use such AI models in the tourism and travel domain. ...
Article
ChatGPT has revolutionized the travel industry. This study employs the stimulus-organism-response (SOR) model to develop and validate a conceptual model for ChatGPT acceptance. Social influence and perceived value emerge as key determinants of user cognitive appraisals of ChatGPT's expertise, trustworthiness, and emotional connections through parasocial interaction. These factors subsequently influence traveler acceptance of ChatGPT for travel-related services. Findings reveal that social influence is the most potent predictor of ChatGPT acceptance, while perceived trust directly impacts user acceptance during the cognitive process. These insights advance research on parasocial interaction, while providing valuable guidance for implementing ChatGPT in tourism services.
... Wang et al., 2023). Studies underscore various factors influencing the adoption of new technology, including trust (Chi et al., 2021;Seo & Lee, 2021), perceived usefulness (Ariza-Montes et al., 2023;Tavitiyaman et al., 2022), perceived ease of use , perceived valuecocreation (Demir & Demir, 2023;Sthapit et al., 2023), perceived intelligence , anthropomorphism (Cai et al., 2022;Roy et al., 2020), social influence (Ribeiro et al., 2022;Solakis et al., 2022) and perceived risk (Habib & Hamadneh, 2021;Ribeiro et al., 2022). Notably, research indicate that emotional responses elicited through AI interactions (e.g. ...
... This connection can significantly influence tourists' acceptance and willingness to use the AI model as a travel agent. While previous tourism research has explored human-robot interactions (Chi et al., 2021;Murphy et al., 2019), limited research has been directed towards elucidating how tourists interact with AI-powered language models like ChatGPT that lack physical forms. Therefore, more work is needed to better understand their emergence and impact on tourists' intentions to accept and use such AI models in the tourism and travel domain. ...
Article
ChatGPT has revolutionized the travel industry. This study employs the stimulus-organism-response (SOR) model to develop and validate a conceptual model for ChatGPT acceptance. Social influence and perceived value emerge as key determinants of user cognitive appraisals of ChatGPT’s expertise, trustworthiness, and emotional connections through parasocial interaction. These factors subsequently influence traveler acceptance of ChatGPT for travel-related services. Findings reveal that social influence is the most potent predictor of ChatGPT acceptance, while perceived trust directly impacts user acceptance during the cognitive process. These insights advance research on parasocial interaction, while providing valuable guidance for implementing ChatGPT in tourism services.
... (Han & Yang, 2018); (Pitardi & Marriott, 2021) Perceived risk Perceived risk and uncertainty regarding consequences of the use of a retailer, product, technology, or service. (Chi et al., 2021); (Hasan et al., 2021) Customer alienation The degree to which AI-based products and services may reduce human aspects of relations in customer-firm interaction. (Han & Yang, 2018); (Sung & Jeon, 2020) Uniqueness neglect The degree to which AI products and services neglect subtle differences between customers. ...
... Customers may feel uncertain about the consequences of relying on AI-driven technologies, such as in retail settings, product recommendations, or service delivery (Seo & Lee, 2021). For instance, Chi et al. (2021) in their research show that the perceived risk of AI social robots in service delivery influences customer trust. Additionally, Hasan et al. (2021) found that perceived risk has a direct impact on customer loyalty for voicecontrolled AI such as Siri. ...
Article
Full-text available
Purpose Artificial intelligence (AI) has become a pivotal technology in both marketing and daily life. Despite extensive research on the benefits of AI, its adverse effects on customers have received limited attention. Design/methodology/approach We employed meta-analysis to synthesise effect sizes from 45 studies encompassing 50 independent samples ( N = 19,503) to illuminate the negative facets of AI's impact on customer responses. Findings Adverse effects of AI, including privacy concern, perceived risks, customer alienation, and uniqueness neglect, have a negative and significant effect on customers' cognitive (perceived benefit, trust), affective (attitude and satisfaction) and behavioural responses (purchase, loyalty, well-being). Additionally, moderators in AI (online versus offline), customer (age, male vs. female), product (hedonic vs. utilitarian, high vs. low involvement), and firm level (service vs. manufacturing) and national level (individualism, power distance, masculinity, uncertainty avoidance, long-term orientation) moderate these relationships. Practical implications Our findings inform marketing managers about the drawbacks of utilising AI as part of their value proposition and provide recommendations on how to minimise these effects in different contexts. Additionally, policymakers need to consider the dark side of AI, especially among the vulnerable groups. Originality/value This paper is among the first research studies that synthesise previous research on the dark side of AI, providing a comprehensive view of its diminishing impact on customer responses.
... Four elements highlighting the advantages of AI in promoting sustainable space tourism were derived from Schulz and Nakamoto (2013), Truby (2020), and Poortvliet et al. (2018), such as "I think that using AI in space tourism will enhance my experience in space tourism-related trips." The trust in AI as it pertains towards achieving sustainable space tourism was measured using four queries derived earlier research (Cheng et al., 2022a;Chi et al., 2021;Cheng et al., 2022b), for instance, "I trust that AI algorithms will function without errors during my space tourism experiences." Lastly, four items related to behavioral intentions towards the sustainability of space tourism were based on previous research (Han, 2015;Han et al., 2019;Kim et al., 2020Kim et al., , 2021, such as "I intend to actively participate in initiatives for sustainable space tourism." ...
Article
Full-text available
Space tourism is a growing industry sector that faces challenges of cost, risk, environmental impact, and sustainability. However, few studies address space tourism in an Asian culture, particularly in the context of artificial intelligence (AI), an increasingly significant topic both in the tourism sector and in society overall. To address this research gap, this work establishes an analytical framework that contrasts three varieties of space tourism using partial least squares, multi-group analysis, and fuzzy-set Qualitative Comparative Analysis. It surveyed 1,000 prospective space travelers from South Korea who are eager to take part in space tourism to examine AI's role in enhancing sustainable space tourism. Findings indicate that recognizing AI's benefits is crucial for sustainable on-Earth, sub-orbital, and orbital space tourism, particularly the latter. The study offers both conceptual and applied knowledge to enhance the sustainability of space tourism.
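The fuzzy-set Qualitative Comparative Analysis mentioned in this abstract requires calibrating raw survey scores into fuzzy-set memberships before truth-table analysis. Below is a minimal sketch of Ragin's direct calibration method; the function name, the 7-point scale, and the anchor values are illustrative assumptions, not details taken from the study.

```python
import numpy as np

def fsqca_calibrate(x, full_non, crossover, full_in):
    """Ragin's direct calibration: map raw scores to fuzzy-set
    membership via log odds anchored at -3 / 0 / +3."""
    x = np.asarray(x, dtype=float)
    dev = x - crossover
    # Scale deviations so the full-membership anchor maps to log odds +3
    # and the full-non-membership anchor to -3.
    scalar = np.where(dev >= 0, 3.0 / (full_in - crossover),
                      3.0 / (crossover - full_non))
    log_odds = dev * scalar
    return np.exp(log_odds) / (1.0 + np.exp(log_odds))

# Hypothetical 7-point survey scores for "recognized AI benefits",
# with anchors at 2 (fully out), 4 (crossover), and 6 (fully in).
scores = [1, 2, 3, 4, 5, 6, 7]
print(fsqca_calibrate(scores, full_non=2, crossover=4, full_in=6))
```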
... Much evidence seems to support this strategy, showing a rise in prosocial attitudes toward social robots [80, 83-85] and improvements in human-robot collaboration [86-89]. Chi et al. [90] have demonstrated that people's trust in robots with human-like traits may, in some circumstances, increase. ...
Article
Full-text available
Despite the overwhelming evidence of climate change and its effects on future generations, most individuals are still hesitant to make environmental changes that would especially benefit future generations. In this study, we investigate whether dialogue can influence people's altruistic behavior toward future generations of humans, and how it may be affected by participant age and the appearance of the conversation partner. We used a human, an android robot called Telenoid, and a speaker as representatives of future generations. Participants were split between an older age group and a younger age group and were randomly assigned to converse with one of the aforementioned representatives. We asked the participants to play a round of the Dictator Game with their assigned representative, followed by an interactive conversation and another round of the Dictator Game, in order to gauge their level of altruism. The results show that, on average, participants gave more money after having an interactive conversation, and that older adults tend to give more money than young adults. There were no significant differences between the three representatives. The results suggest that empathy may have been the most important factor in the increase in altruistic behavior for all participants.
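The two comparisons this abstract reports (a within-subject conversation effect and a between-subject age effect on Dictator Game giving) can be sketched with standard t-tests; the simulated donations below are hypothetical stand-ins for the study's data, not its actual analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical donations (out of a fixed endowment) before and after
# the interactive conversation, for younger and older participants.
pre_young  = rng.integers(0, 6, 30)
post_young = pre_young + rng.integers(0, 3, 30)   # modest increase
pre_old    = rng.integers(1, 7, 30)
post_old   = pre_old + rng.integers(0, 4, 30)

# Within-subject effect of the conversation (pooled across ages).
t_within, p_within = stats.ttest_rel(
    np.concatenate([pre_young, pre_old]),
    np.concatenate([post_young, post_old]))

# Between-subject age effect on post-conversation giving.
t_age, p_age = stats.ttest_ind(post_old, post_young)

print(f"conversation effect: t = {t_within:.2f}, p = {p_within:.4f}")
print(f"age effect:          t = {t_age:.2f}, p = {p_age:.4f}")
```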
... In this way, we could exclude the possibility that ChatGPT merely copies or integrates existing work. Nonetheless, recent research showed that there is a need for a scale development process in related areas such as measuring consumers' trust toward interactions with AI service robots (Chi et al., 2021). We rely on the traditional process of creating a new scale, which distinguishes the process of scale development from the process of scale evaluation (Churchill, 1979). ...
Article
Full-text available
AI tools such as ChatGPT can assist researchers in improving the performance of the research process. This paper examines whether researchers could apply ChatGPT to develop and empirically validate new research scales. The study describes how to prompt ChatGPT to assist in developing a scale for a new construct, using the example of the construct of perceived value of ChatGPT-supported consumer behavior. The paper reports four main empirical studies (US: N = 148; Australia: N = 317; UK: N = 108; Germany: N = 51) that were employed to validate the newly developed scale. The first study purifies the scale. The following studies confirm the adjusted factorial validity of the reduced scale. Although the empirical data imply a simplification of the initial multi-dimensional scale, the final three-dimensional operationalization is highly reliable and valid. The paper outlines the shortcomings and offers several critical notes to stimulate more research and discussion in this area.
... They have also outlined a significant dampening effect of perceived privacy on utilitarian and social benefits. The negative effect of privacy concerns about devices listening to sensitive data has been confirmed in other studies (Chi et al., 2021; Hasan et al., 2021; Jain et al., 2022). We also find some previous studies that have focused on UX by comparing different types of IVAs. ...
Article
Full-text available
The digital transformation, in which we have actively participated over the last decades, involves integrating new technology into every aspect of the business and necessitates a significant overhaul of traditional business structures. Recently there has been an exponential increase in the presence of Artificial Intelligence (AI) in people's daily lives, and many new AI-infused products have been developed. This technology is relatively young and has the potential to significantly affect both industry and society. The paper focuses on Intelligent Voice Assistants (IVAs) and User eXperience (UX) evaluation. IVAs are a relatively new phenomenon that has generated much academic and industrial research interest. Starting from the contribution to systematization provided by the Artificial Intelligence User Experience (AIXE®) scale, the idea is to develop an easy UX evaluation tool for IVAs that decision-makers can adopt. The work uses Partial Least Squares Path Modeling (PLS-PM) to investigate the different dimensions that affect UX and to verify whether the impact and performance of each dimension on the general latent dimension of UX can be quantified. Importance Performance Matrix Analysis (IPMA) is utilised to evaluate and identify the primary factors that significantly influence the adoption of IVAs. IVA developers should examine these main aspects as a guide to enhancing the UX for individuals utilising IVAs.
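IPMA, as used in the abstract above, crosses each dimension's importance (its total effect on the target construct from the path model) with its performance (the mean of its indicators rescaled to 0-100). A minimal sketch of that arithmetic follows; the dimension names, scale means, and total effects are hypothetical assumptions, not the AIXE® results.

```python
import pandas as pd

# Hypothetical 1-7 Likert means and total effects on overall UX for
# three illustrative IVA dimensions.
data = pd.DataFrame({
    "dimension":    ["usefulness", "ease_of_use", "enjoyment"],
    "mean_score":   [5.4, 6.1, 4.2],        # raw scale means (1-7)
    "total_effect": [0.38, 0.21, 0.33],     # importance from the path model
})

# IPMA rescales performance to 0-100: (mean - min) / (max - min) * 100.
lo, hi = 1, 7
data["performance"] = (data["mean_score"] - lo) / (hi - lo) * 100

# Dimensions above average in importance but below average in
# performance are the priority candidates for improvement.
priority = data[(data["total_effect"] > data["total_effect"].mean()) &
                (data["performance"] < data["performance"].mean())]
print(data, "\npriority:\n", priority)
```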
Article
Full-text available
This study examined whether employees develop perceptions about 3 different types of fit: person-organization fit, needs-supplies fit, and demands-abilities fit. Confirmatory factor analyses of data from 2 different samples strongly suggested that employees differentiate between these 3 types of fit. Furthermore, results from a longitudinal design of 187 managers supported both the convergent and discriminant validity of the different types of fit perceptions. Specifically, person-organization fit perceptions were related to organization-focused outcomes (e.g., organizational identification, citizenship behaviors, turnover decisions), whereas needs-supplies fit perceptions were related to job- and career-focused outcomes (e.g., job satisfaction, career satisfaction, occupational commitment). Although demands-abilities fit perceptions emerged as a distinct construct, they were not related to hypothesized outcomes (e.g., job performance, raises).
Article
Full-text available
This study undertakes a systematic review of Artificial Intelligence and its applications to service encounters and the hospitality industry by reviewing publications that (1) mainly discuss AI technology, (2) are in the context of services, and (3) investigate the use or the adoption of AI technology rather than technical issues such as system design, algorithms, voice recognition modules, or psychological knowledge representations. Seven major themes are identified via a review of 63 publications. The themes are (1) current AI technology in service frontline, (2) levels of artificial intelligence, (3) AI agents, (4) human–AI service encounters, (5) theoretical frameworks of the acceptance of AI, (6) reasons for adopting AI, and (7) potential challenges of AI. This study also offers a further research agenda that highlights nine critical research areas to guide human–AI interaction and AI adoption researches.
Article
Full-text available
Postulating consumer involvement as crucial to online group buying, this study deploys consumer perceived value, perceived trust, and susceptibility to interpersonal influence to provide a closer look at consumer intention to participate in online group buying. The results of the proposed model, tested using structural equation modeling on a sample of 553 respondents, show that consumer involvement plays a central role in explaining intention to participate in online group buying. Consumer perceived value, perceived trust, and susceptibility to interpersonal influence all show a significant relationship with consumer involvement. Consumer perceived value also exhibits a strong relationship with perceived trust, which, in turn, exhibits a significant relationship with intention to participate in online group buying. The results furnish significant contributions to the theory and practice of online group buying and retailing. The limitations of the study and implications for future research in online group buying are discussed.
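A structural model of the kind tested in the abstract above can be sketched with lavaan-style syntax, here assuming the third-party semopy package; the construct names, indicators, and simulated responses below are illustrative and only echo the reported sample size of 553, not the study's measurement model.

```python
import numpy as np
import pandas as pd
from semopy import Model   # assumed SEM package with lavaan-style syntax

rng = np.random.default_rng(1)
n = 553  # matches the sample size reported in the abstract

# Simulate correlated item responses for three illustrative constructs.
value = rng.normal(size=n)
involvement = 0.6 * value + rng.normal(scale=0.8, size=n)
intention = 0.5 * involvement + rng.normal(scale=0.8, size=n)
df = pd.DataFrame({
    "pv1": value + rng.normal(scale=0.5, size=n),
    "pv2": value + rng.normal(scale=0.5, size=n),
    "inv1": involvement + rng.normal(scale=0.5, size=n),
    "inv2": involvement + rng.normal(scale=0.5, size=n),
    "int1": intention + rng.normal(scale=0.5, size=n),
    "int2": intention + rng.normal(scale=0.5, size=n),
})

# Measurement model (=~) and structural paths (~), lavaan-style.
desc = """
PerceivedValue =~ pv1 + pv2
Involvement    =~ inv1 + inv2
Intention      =~ int1 + int2
Involvement ~ PerceivedValue
Intention   ~ Involvement
"""
model = Model(desc)
model.fit(df)
print(model.inspect())  # parameter estimates, SEs, p-values
```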
Article
Full-text available
Background: Artificial intelligence (AI) is increasingly being used in healthcare. Here, AI-based chatbot systems can act as automated conversational agents, capable of promoting health, providing education, and potentially prompting behaviour change. Exploring the motivation to use health chatbots is required to predict uptake; however, few studies to date have explored their acceptability. This research aimed to explore participants' willingness to engage with AI-led health chatbots. Methods: The study incorporated semi-structured interviews (N = 29), which informed the development of an online survey (N = 216) advertised via social media. Interviews were recorded, transcribed verbatim, and analysed thematically. A survey of 24 items explored demographic and attitudinal variables, including acceptability and perceived utility. The quantitative data were analysed using binary regressions with a single categorical predictor. Results: Three broad themes: 'Understanding of chatbots', 'AI hesitancy' and 'Motivations for health chatbots' were identified, outlining concerns about accuracy, cyber-security, and the inability of AI-led services to empathise. The survey showed moderate acceptability (67%), which correlated negatively with poorer perceived IT skills, OR = 0.32 [95% CI: 0.13-0.78], and dislike for talking to computers, OR = 0.77 [95% CI: 0.60-0.99], and positively with perceived utility, OR = 5.10 [95% CI: 3.08-8.43], positive attitude, OR = 2.71 [95% CI: 1.77-4.16], and perceived trustworthiness, OR = 1.92 [95% CI: 1.13-3.25]. Conclusion: Most internet users would be receptive to using health chatbots, although hesitancy regarding this technology is likely to compromise engagement. Intervention designers focusing on AI-led health chatbots need to employ user-centred and theory-based approaches addressing patients' concerns and optimising user experience in order to achieve the best uptake and utilisation. Patients' perspectives, motivation, and capabilities need to be taken into account when developing and assessing the effectiveness of health chatbots.
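The odds ratios reported in this abstract come from binary logistic regressions. A minimal sketch of one such regression, computing an odds ratio and its 95% confidence interval with statsmodels, follows; the simulated acceptability and utility data are hypothetical and only mirror the survey's N = 216.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 216  # matches the survey sample in the abstract

# Hypothetical data: acceptability (0/1) and perceived utility (1-5).
utility = rng.integers(1, 6, n)
p = 1 / (1 + np.exp(-(-2.0 + 0.9 * utility)))   # assumed true log-odds model
accept = rng.binomial(1, p)

# Binary logistic regression with a single predictor.
X = sm.add_constant(pd.DataFrame({"utility": utility}))
fit = sm.Logit(accept, X).fit(disp=0)

# Exponentiating coefficients yields odds ratios and their 95% CIs.
or_est = np.exp(fit.params["utility"])
ci_lo, ci_hi = np.exp(fit.conf_int().loc["utility"])
print(f"OR = {or_est:.2f} [95% CI: {ci_lo:.2f}-{ci_hi:.2f}]")
```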
Article
Lack of trust is one of the main obstacles standing in the way of taking full advantage of the benefits artificial intelligence (AI) has to offer. Most research on trust in AI focuses on cognitive ways to boost trust. Here, instead, we focus on boosting trust in AI via affective means. Specifically, we tested and found associations between one's attachment style—an individual difference representing the way people feel, think, and behave in relationships—and trust in AI. In Study 1 we found that attachment anxiety predicted less trust. In Study 2, we found that enhancing attachment anxiety reduced trust, whereas enhancing attachment security increased trust in AI. In Study 3, we found that exposure to attachment security cues (but not positive affect cues) resulted in increased trust as compared with exposure to neutral cues. Overall, our findings demonstrate an association between attachment security and trust in AI, and support the ability to increase trust in AI via attachment security priming.
Article
Brands are built by "wrapping mediocre products in emotive and social associations" (Galloway, 2016). Nike and Coca-Cola differentiate through the emotional benefits associated with their brands, not their products' functional benefits, with the latter long considered the world's most valuable brand (Interbrand, 2016). This brand-building model has not been scrutinised in an environment where technology is a primary driver of organisational success, not merely a support function (E&Y, 2011). Artificial Intelligence (AI) has made "giant leaps" (Hosea, 2016): algorithms fly our planes and beat us at chess. Organisational spending on AI is set to reach $47 billion by 2020 (Ismail, 2017), with many (32%) claiming its biggest impact will be in marketing. Marketing communities conjecture that AI will "revolutionise" marketing (John, 2015), and while companies like Amazon appear to use a different model, utilizing AI to fulfil customers' functional needs (commerce), AI's impact on brand has seldom been explored in an academic context. This paper aims to establish the implementation of AI as a source of brand success, recommending to marketing professionals how to allocate resources to sustain brand effectiveness. Grounded theory research was used; semi-structured interviews were conducted, and data collection/analysis was done concurrently. There were three major findings: AI can improve operational efficiency, improving the consistency with which a brand delivers its promise; Natural Language Processing (NLP) can improve elements of customer service; and Machine Learning enables personalized offerings, but organizations are limited by data quality/quantity and knowledge of the technology's applications.
Article
This study examines the antecedents of customers' willingness and objection to use artificially intelligent robotic devices in hospitality services (full-service and limited-service hotels). Drawing on the Artificially Intelligent Device Use Acceptance (AIDUA) theory, this study validates and extends the AIDUA framework in the hospitality service setting. The results point to the applicability of the AIDUA framework, suggesting that hospitality customers' intentions to use artificially intelligent devices are influenced by social influence, hedonic motivation, anthropomorphism, performance and effort expectancy, and emotions toward the artificially intelligent devices. Findings further suggest that, compared to limited-service hotel customers, full-service hotel customers rely less on their social groups when evaluating artificially intelligent robotic devices; their emotions toward the use of artificially intelligent devices are less likely to be influenced by effort expectancy; and their emotions have less impact on their objection to use. Theoretical contributions and managerial implications of this study are discussed. Limitations and future study recommendations are provided.
Article
This study investigated the influence of service robot attributes on customers’ hospitality experience from the perspective of relationship building. Through literature review and a preliminary study with in-depth interviews, a conceptual framework was developed. A scenario-based experiment and questionnaire survey were designed to test the model. The results indicate that robots’ being perceived as humanlike or intelligent positively affects customer-robot rapport building and the hospitality experience. Additionally, customer-employee rapport building was found to mediate the relationship between robot attributes and the hospitality experience, but customer-robot rapport building was not. Based on these findings, theoretical contributions and practical implications were discussed.
Article
Purpose: Service robots can offer benefits to consumers (e.g. convenience, flexibility, availability, efficiency) and service providers (e.g. cost savings), but a lack of trust hinders consumer adoption. To enhance trust, firms add human-like features to robots; yet, anthropomorphism theory is ambiguous about their appropriate implementation. This study therefore aims to investigate what is more effective for fostering trust: appearance features that are more human-like or social functioning features that are more human-like.
Design/methodology/approach: In an experimental field study, a humanoid service robot displayed gaze cues in the form of changing eye colour in one condition and static eye colour in the other. Thus, the robot was more human-like in its social functioning in one condition (displaying gaze cues, but not in the way that humans do) and more human-like in its appearance in the other (static eye colour, but no gaze cues). Self-reported data from 114 participants revealing their perceptions of trust, anthropomorphism, interaction comfort, enjoyment and intention to use were analysed using partial least squares path modelling.
Findings: Interaction comfort moderates the effect of gaze cues on anthropomorphism, insofar as gaze cues increase anthropomorphism when comfort is low and decrease it when comfort is high. Anthropomorphism drives trust, intention to use and enjoyment.
Research limitations/implications: To extend human-robot interaction literature, the findings provide novel theoretical understanding of anthropomorphism directed towards humanoid robots.
Practical implications: By investigating which features influence trust, this study gives managers insights into reasons for selecting or optimizing humanoid robots for service interactions.
Originality/value: This study examines the difference between appearance and social functioning features as drivers of anthropomorphism and trust, which can benefit research on self-service technology adoption.
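The moderation finding above (gaze cues increasing anthropomorphism at low comfort and decreasing it at high comfort) is a classic crossover interaction. As a minimal sketch of how such a pattern is tested, the code below swaps the study's partial least squares modelling for an ordinary moderated regression with simple slopes; the simulated data only echo the reported sample size of 114 and are not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 114  # matches the field-study sample

# Hypothetical data: gaze condition (0/1), mean-centered comfort, and
# an anthropomorphism score with a crossover interaction built in.
gaze = rng.integers(0, 2, n)
comfort = rng.normal(size=n)
anthro = 4 + 0.4 * gaze - 0.6 * gaze * comfort + rng.normal(size=n)
df = pd.DataFrame({"gaze": gaze, "comfort": comfort, "anthro": anthro})

# Moderated regression: the gaze:comfort term tests the moderation.
fit = smf.ols("anthro ~ gaze * comfort", data=df).fit()
b = fit.params

# Simple slopes of gaze at low (-1 SD) and high (+1 SD) comfort.
for label, c in [("low comfort", -1.0), ("high comfort", 1.0)]:
    slope = b["gaze"] + b["gaze:comfort"] * c
    print(f"effect of gaze cues at {label}: {slope:+.2f}")
```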
Article
Given the increasing presence of humanoid service robots at airports, hotels and restaurants, the current study investigates how consumers’ interdependent self-construal and technology self-efficacy jointly influence their reactions to service machines with humanlike features in a service failure context. The results demonstrate that consumers show varying levels of dissatisfaction with a service failure caused by an anthropomorphic (vs. non-anthropomorphic) self-service machine depending on their levels of interdependent self-construal (high vs. low) and technology self-efficacy (high vs. low). The underlying mechanism is self-blame. The theoretical contributions to the existing service technology research and the emerging anthropomorphism literature are discussed. This research also provides practical guidelines to industry practitioners for more efficient usage of service robots in delivering customer service.