This is the author’s version of a work that has been published in the following outlet:
Pietrantoni, N; Greulich, R.S.; Brendel, A.B.; Hildebrandt, F.; (2022): Follow Me If You Want
to Live - Understanding the Influence of Human-Like Design on Users’ Perception and
Intention to Comply with COVID-19 Education Chatbots. Proceedings of the Forty-Third
International Conference on Information Systems (ICIS 2022), Copenhagen, Denmark.
(forthcoming)
Chair of Business Informatics, Esp.
Intelligent Systems and Services
Prof. Dr. Alfred Benedikt Brendel
Helmholtzstr. 10
01069 Dresden – Germany
https://tu-dresden.de/bu/wirtschaft/winf/isd
NeuroIS Research Group
Dr. R. Stefan Greulich
Helmholtzstr. 10
01069 Dresden - Germany
Please note: The copyright is owned by the author and / or the publisher.
Commercial use is not allowed.
This work is licensed under a Creative Commons Attribution-NonCommercial-
NoDerivatives 4.0 International License.
Follow Me If You Want to Live -
Understanding the Influence of Human-Like
Design on Users’ Perception and Intention to
Comply with COVID-19 Education Chatbots
Completed Research Paper
Nico Pietrantoni
Technische Universität Dresden,
Dresden, Germany,
nico.pietrantoni@mailbox.tu-dresden.de
R. Stefan Greulich
Technische Universität Dresden,
Dresden, Germany,
stefan.greulich@tu-dresden.de
Alfred Benedikt Brendel
Technische Universität Dresden,
Dresden, Germany,
alfred_benedikt.brendel@tu-dresden.de
Fabian Hildebrandt
Technische Universität Dresden,
Dresden, Germany,
fabian.hildebrandt@tu-dresden.de
Abstract
Following recommendations and complying with advised behaviors is one major key to overcoming global pandemics, such as COVID-19. As the World Health Organization (WHO) highlights, there is an increased need to follow hygiene standards to prevent infections and reduce the risk of transmission (World-Health-Organization, 2021). This urgent need offers new use cases for digital services, such as conversational agents (CAs) that educate and inform individuals about relevant countermeasures. Specifically, due to the increasing fatigue in the population in the context of COVID-19 (Franzen and Wöhner, 2021), CAs can play a vital role in supporting and sustaining users’ preventive behavior. We conducted an experiment (n=116) to analyze the effect of a human-like designed CA on the intention to comply. Our results show a significant impact of a human-like design on the perception of humanness, source credibility, and trust, which are all (directly or indirectly) drivers of the intention to comply.
Keywords: Digital Health, COVID-19, Conversational Agent, Human-like-design,
Intention to comply
Introduction
Due to the COVID-19 pandemic, the World Health Organization (WHO) and countries around the world
provided recommendations and implemented measures to contain outbreaks (European-Centre-for-
Disease-Prevention-and-Control 2020). One central component of these recommendations is increased
hygiene, such as very frequent handwashing (Venkatesh and Edirappuli 2020) and social distancing
(Shearston et al. 2021). To comply with social distancing rules, individuals had to be counseled and
educated at home regarding various relevant topics, such as self-testing and hygiene measures (Amato et
al. 2017; Barakat and Kasemy 2020; European-Centre-for-Disease-Prevention-and-Control 2020).
Different means of communication were used to reach and inform all citizens, including traditional
approaches, such as TV spots and flyers (Michigan-Government 2021), and new digital approaches,
including Conversational Agents (CAs) (Miner et al. 2020).
CAs are “software-based systems designed to interact with humans using natural language” (Feine et al.
2019, p.1). The benefits of CAs are the ease of use and the comfort of interacting via natural language instead of potentially complex and confusing graphical interfaces (Ahmad et al. 2018). CAs can be differentiated into voice- and text-based systems, whereby text-based CAs are often referred to as chatbots (Diederich et al. 2022). One prominent example is the chatbot of the WHO, accessible via WhatsApp. It was launched in
March 2020 and provides users with important information on how to prevent a COVID-19 infection
(World-Health-Organization 2021).
CAs have the potential to alter users’ affect, cognition, and behavior (Diederich et al. 2022). Social cues
(e.g., having an avatar, greeting users, and utilizing emoticons) can be implemented to induce a sense of
humanness and social presence in users (Gefen and Straub 2004). This effect causes users to see a CA as a
social actor, similar to a human (Nass et al. 1994). As a result, a human-like designed CA can induce a sense
of trustworthiness (de Visser et al. 2016), enjoyment (Lee and Choi 2017) and persuasiveness (Diederich et
al. 2019). Besides increasing a CA’s technical skills (i.e., improving algorithms for processing natural language), researching the impact of human-like design elements on users remains a key topic of interest for theory and practice (Diederich et al. 2022; Feine et al. 2019).
The increasing importance of building effective CAs for health counseling and prevention (e.g., COVID-19),
such as advising about hygiene measures (Miner et al. 2020), has led to a number of recent studies on this
topic (Abd-Alrazaq et al. 2020; Almalki 2021; El Hefny et al. 2021; Jordan et al. 2021). One prominent topic
is to investigate how CAs should be designed to improve users’ intention to comply. In this context, several
factors have been found to play an important role, such as accuracy (Espinoza et al. 2020), trust, and
situational factors (e.g., the severity of symptoms) (Dennis et al. 2020). However, the effect of a CA’s human-like design on users’ intention to comply with COVID-19-related hygiene measures has, to the best of our knowledge, not been investigated so far. Against this
background, this study aims to answer the following research question:
RQ: How does CA’s human-like design influence a user’s intention to comply with health-related
recommendations?
To address this research question, we conducted an online experiment with 116 users to investigate the effect of a CA’s human-like design (i.e., human-like versus non-human-like) on the perception of humanness, persuasiveness, source credibility, trust, and the intention to comply. Our results support a positive impact of a human-like CA design on the intention to comply. However, we reveal that the effect of trust is fully mediated by persuasiveness, which in turn positively influences the intention to comply.
Research Background
Conversational Agents for Healthcare Services and COVID-19
In healthcare contexts, ELIZA was one of the first CAs and it was built to emulate a therapist (Weizenbaum
1966). Since then, CAs have been applied to numerous healthcare-related areas, including mental health
(Park et al. 2021), medication adherence (Fadhil 2018), psychiatric counseling (Oh et al. 2017) and health
nutrition (Casas et al. 2018). Specifically in healthcare, CAs go beyond existing static forms of information and provide a convenient customer and patient experience (Laranjo et al. 2018). Compared to human service encounters, CAs are not limited by time and place, which is an advantage for providers and users (Gnewuch
et al. 2018; Verhagen et al. 2014). Regarding COVID-19, CAs have been applied to various services, ranging from personal risk assessments and providing general information about preventing an infection to combating fake news and misinformation (Judson et al. 2020). For example, the chatbot Clara was introduced by the
Centers for Disease Control and Prevention as a public self-checking tool, asking various questions about
the individual vaccination status and health symptoms, and subsequently providing recommendations
(e.g., staying at home and taking a test) (CDC 2020).
However, for these CAs to lead to significant effects, users and patients must comply with the recommendations and advice given (Dennis et al. 2020). To the best of our knowledge, a unified definition of compliance is still missing in medicine and psychology, and many synonyms are used, such as adherence, therapeutic alliance, or cooperation (Kyngäs et al. 2000). In this study, we understand intention to comply as a patient's willingness to follow healthcare experts' prescriptions (e.g., treatment programs) (Murphy and Coster 1997). The patient’s willingness to act compliant depends on numerous relational (e.g., trust) and situational (e.g., style of information presentation) factors (Hojat et al. 2010; Segal 1994).
Adapted to the context of hygiene and COVID-19 CAs, users act compliant with the suggestions of the CA
when they act as recommended (e.g., wash hands more frequently). In this context, the user’s intention to
comply can be expected to depend on how the CA and its recommendations are perceived (Dennis et al.
2020; Liu and Sundar 2018). For example, even when a CA provides perfect recommendations, it still has
to be perceived as trustworthy for users to comply (Dennis et al. 2020).
Human-Like Designed Conversational Agents
The tendency to attribute human-like characteristics to objects is anchored in the human subconscious mind and is called anthropomorphism (Howard and Kunda 2000). This bias causes humans to associate objects (e.g., pet rocks), cartoon characters (e.g., Goofy), and animals (e.g., smiling monkeys) with
human characteristics (Epley et al. 2007). Anthropomorphism also applies to the context of users
communicating with CAs (e.g., by using Siri or Alexa). The phenomenon is further explained by the
“Computers are Social Actors” (CASA) paradigm (Nass et al. 1994) and the Social Response Theory (Nass
and Moon 2000).
The CASA paradigm states that users attribute a certain level of humanness to computers, despite knowing they are machines (Nass and Moon 2000). The level of perceived humanness is influenced by the extent of human-like design features, i.e., the quantity and type of social cues. Based on the perceived level
of humanness, users apply social norms to CAs (e.g., gender stereotypes) (Lang et al. 2013; Nass et al. 1994;
Nass and Moon 2000). Furthermore, the Social Response Theory states that users are triggered by social
cues to act similar to a human-to-human encounter (e.g., saying thank you at the end of a conversation;
Feine et al. 2019; Nass and Moon 2000). In this context, recent studies reported various cognitive and
behavioral effects when CAs are equipped with social cues, such as increased enjoyment (Lee and Choi
2017), persuasiveness (Diederich et al. 2019), and perceived trust (Araujo 2018).
To structure social cues for CAs, Seeger et al. (2018) presented three main types: human identity, verbal
cues, and non-verbal cues. Examples for human identity cues are a name (Cowell and Stanney 2005) or
gender (Nunamaker et al. 2011). Verbal cues include turn-taking (Gong 2008), word and syntax variability
(Seeger et al. 2018) and self-reference (Schuetzler et al. 2018). Non-verbal cues include the use of emoticons
(Feine et al. 2019) and dynamic response delays (Gnewuch et al. 2018).
Research Model and Hypotheses
Our study aims to investigate the role of human-like CA design and the resulting perception of humanness
in context of users’ intention to comply with hygiene recommendations. Building upon the social response
theory (Nass and Moon 2000) and CASA (Nass et al. 1994), we develop a set of hypotheses on how perceived
humanness influences source credibility, trust, persuasiveness, and users’ intention to comply, including
the relationships among these constructs (see Figure 1). In the following sections, we will present and
explain our set of hypotheses in more detail.
Figure 1. Research Model
Perceived Humanness
A human-like designed CA means that it is equipped with social cues (Feine et al. 2019; Seeger et al. 2018).
Social cues can be the display of an avatar, a name (Cowell and Stanney 2005; Gong 2008; Nunamaker et
al. 2011), self-reference, self-disclosure, greeting (Cafaro et al. 2016; Schuetzler et al. 2018), and dynamic
response delays (Gnewuch et al. 2018). These social cues trigger anthropomorphism in users (Dacey 2017),
i.e., users perceive the CA as human-like (Epley et al. 2007). Generally, users are aware that CAs are
machines, but this does not prevent the perception of humanness (Nass and Moon 2000).
For instance, a recent study of Westerman et al. (2019) showed that grammar and typing errors influence
perceived humanness. Similarly, de Kleijn et al. (2019) studied how unique language characteristics affect
perceived humanness and found significance for right-branching sentences (i.e., sentences in which the
main topic is stated before further details). Additionally, Go and Sundar (2019) found that a CA with a
human-like avatar was associated with higher levels of perceived humanness. We therefore hypothesize:
H1: A human-like CA design increases the perceived humanness of the CA.
Source Credibility
Perceived source credibility can be understood as the judgment made by the message receiver about the
communicator’s believability (Gilly et al. 1998). In this regard, humans tend to add subjective factors to
their judgement process, i.e., source credibility is not an objective measure but is influenced by situational
and relational factors (Kumkale et al. 2010), such as initial thoughts on overall impressions (Fogg 2002;
Lowry et al. 2008). In CA contexts, Beldad et al. (2016) reported that embodied virtual agents elevate
perceived source credibility, leading to a higher purchase intention. These results are supported by the study
of Tan and Liew (2020), showing that social cues in mobile commerce chatbots can increase perceived
credibility. Thus:
H2: Perceived humanness increases the perceived source credibility.
Trust
To trust means to believe that another entity (either human or artificial) will help in reaching one’s goals, despite vulnerability or uncertainty (Lee and See 2004). In the healthcare context, vulnerability refers to a condition associated with patients or humans potentially suffering from an illness (Gjengedal et al. 2013), and uncertainty is the incapacity to interpret or predict illness-related occurrences (Mishel 1981). In the context of a COVID-19 CA, trusting a CA means that users believe that it will provide accurate and helpful services,
despite the dangers of COVID-19. Because humans are social animals, they are inclined to build trust in
social interactions (Yamagishi and Yamagishi 1994). Hence, the perception of humanness in a CA can be
expected to increase trust.
The findings of Toader et al. (2020) support this assumption by demonstrating that users have a higher level of trust in a human-like designed chatbot. Similarly, Følstad et al. (2018) reported that human-like
features may induce higher levels of trust. Further, the results of Lankton et al. (2015) support a link
between human-like technology design and a user's trust in a system. Thus, we state the following
hypothesis:
H3a: Perceived humanness increases perceived trust.
Furthermore, the perception of the source of information significantly influences trust, based on the
attractiveness of the source (Hovland et al. 1953; Wiener and Mowen 1986). When interacting with digital
recommender systems, users are exposed to a trust transference process, i.e., relying on cues linked to
trusted ‘proof sources’ (Bo and Benbasat 2007; Doney and Cannon 1997). In CA contexts, Yen and Chiang
(2021) have reported that credibility has a positive effect on trust if users perceive the source and
information as believable. Further, when individuals evaluate the reliability and quality of communication,
source credibility has been identified as one of the most important factors impacting trust (Edwards et al.
2016). We therefore derive the following hypothesis:
H3b: Perceived source credibility increases perceived trust.
Persuasiveness
In the context of CAs, persuasion means succeeding in changing a user’s attitude toward a desired stance during the interaction (Lehto et al. 2012) (e.g., taking the dangers of COVID-19 seriously). In this context, research by Cui et al. (2020) has shown that verbal social cues have a strong positive impact on persuasion. Similarly,
Paskojevic (2014) showed that when users perceive the content on websites as socially present, a website's
persuasiveness increases. Regarding CA literature and human-machine-interactions, Diederich et al.
(2019) reported that perceived humanness increases persuasiveness. Against this background, we
hypothesize:
H4a: Perceived humanness increases perceived persuasiveness.
Following Lehto et al. (2012), credibility is one of the main drivers of persuasiveness. In this regard, the
study of Pornpitakpan (2004) reported that highly credible sources result in higher perceived persuasiveness.
Similarly, von Hohenberg and Guess (2022) reported that perceived source credibility drives
persuasiveness of partisan topics in media related contexts. Thus:
H4b: Perceived source credibility increases perceived persuasiveness.
Furthermore, it has been reported that persuasiveness is influenced by trust due to its effect on the decision-making process (Milliman and Fugate 1988). Beyond human-to-human interactions, Dehnert and Mongeau (2022) provide similar findings for human-AI interactions. Hence, trust can be seen as a parameter that significantly influences perceived persuasiveness. In CA research, Hildebrand and Bergner (2019) reported that higher levels of trust impact the persuasion process by fostering a stronger and more intimate consumer-brand relationship in human-machine interactions. Furthermore, current literature states that relational agents are more liked and trusted, which in turn leads to a higher perceived behavioral change in users (Sillice et al. 2018). Therefore, we hypothesize:
H4c: Perceived trust increases perceived persuasiveness.
Intention to Comply
In the context of CAs, users’ intention to comply with recommendations of the CA can be understood as
their willingness and ability to follow these recommendations (Dennis et al. 2020; Murphy and Coster 1997)
and is a necessary condition for actual compliant behavior (Guhr et al. 2019). In human-machine interaction, trust is a key driver of intention to comply because it facilitates cooperative behavior (Kulms and Kopp 2018). For example, a patient's trust in physicians can have a favorable impact on the patient's
willingness to comply (Lowry et al. 2014; Lu and Zhang 2019). Similarly, trust has been shown to drive
intention to comply with CAs’ COVID-19-related recommendations (Bulgurcu et al. 2010; Dennis et al.
2020). Thus, we derive the following hypothesis:
H5a: Perceived trust increases users’ intention to comply.
Persuasion can influence users’ intentions to comply because persuasion is the change of one’s beliefs and
attitudes (Miller 1965; Petty and Briñol 2010), which are the triggers behind intention and subsequent
behavior (Feldman and Lynch 1988). Therefore, when a CA succeeds in persuading users regarding COVID-19-
related hygiene measures (i.e., users take hygiene more seriously), the subsequent intention to behave
accordingly is also increased. In the CA literature, it is also reported that perceived persuasiveness significantly impacts users’ intention to comply (Drozd et al. 2012). Similarly, current literature about COVID-19 CAs shows that users comply more when higher levels of anthropomorphism are applied (Kim and Ryoo 2022). Against
this background, we hypothesize:
H5b: Perceived persuasiveness increases users’ intention to comply.
Method
We conducted a between-subject online experiment in the context of CAs educating about COVID-19-related hygiene measures, including recommendations for future hygiene behavior. Via the experiment, we investigate the influence of a human-like designed CA on perceived humanness, source credibility, persuasiveness,
trust, and intention to comply. The experiment was conducted in April of 2022. In the following sections,
we will present our sample, task and procedure, treatment designs, and measures.
Participants
We recruited participants via the crowd working platform Clickworker. In total, 118 native German-
speakers participated in our experiment. We applied two attention checks and two responses were invalid,
resulting in a sample size of 116. The mean age of all participants was 41.5 years, and 41.4% were female. Overall, the median time for completing the experiment and filling out the survey was under 13 minutes. All participants were reimbursed with €1.30 for their participation.
Task and Procedure
Following the example of previous studies with CAs (e.g. Bührke et al. 2021; Diederich et al. 2020; Gnewuch
et al. 2018), we implemented a structured dialogue with concrete tasks. We specifically selected hygiene as
the topic of the interaction because the experiment was conducted around two years after the outbreak of
the COVID-19 pandemic. By that time, many people had started to fatigue and, thereby, reduce their efforts (e.g., washing hands less frequently) (Franzen and Wöhner 2021; MacIntyre et al. 2021). Consequently, the interaction with the CA is relevant and timely, and can lead to an intention to comply and actual compliant behavior in users.
Treatments
We applied a between-subject design with the comparison of human-like and non-human-like CA design.
Users were randomly assigned to one of the two chatbots, to avoid carryover effects (Boudreau et al. 2001).
The CAs were implemented via Google Dialogflow and trained with identical language phrases and similar
dialogue contexts. Both chatbots were able to understand and process various user inputs (i.e., synonyms
or different phrasings with the same intention). The only difference between the two chatbots is their appearance, with one being equipped with additional social cues (see Figure 2).
The human-like design cues were based on the structural taxonomy introduced by Feine et al. (2019),
following visual, verbal and invisible cues. We decided to implement a drawn human-like avatar, name
(Emma) and an associated gender (female). Furthermore, it uses emoticons, self-reference (“Hi, I am
Emma…”) and direct addressing (“do you think that…”). Further, we applied variability in the syntax and
the chatbot started the dialogue by greeting the users. Additionally, we implemented a delay of the chatbot responses (as known from instant messaging services like WhatsApp).
Note: Dialogues translated to English from German
Figure 2. CA with Human-like Design (left) and without Human-like Design (right)
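To make the treatment manipulation more concrete, the following minimal Python sketch illustrates how identical answer content can be augmented with social cues (name, greeting, self-reference, emoticons, and a response delay) in the human-like condition only. It is an illustrative simplification with hypothetical intent names and answer texts, not the Dialogflow implementation used in the experiment.

```python
import random
import time

# Illustrative sketch only (not the Dialogflow implementation used in the
# experiment): both treatments share identical answer content; the human-like
# condition merely layers social cues on top (name, greeting, self-reference,
# emoticons, and a response delay). Intent names and texts are hypothetical.

ANSWERS = {
    "greeting": "This chatbot provides information on COVID-19 hygiene measures.",
    "handwashing": "Washing your hands for at least 20 seconds reduces the risk of infection.",
}

def humanize(text: str, first_turn: bool) -> str:
    """Add verbal and visual social cues to an otherwise identical answer."""
    opener = "Hi, I am Emma! 😊 " if first_turn else random.choice(["Sure! ", "Good question! "])
    return opener + text + " 👍"

def respond(intent: str, human_like: bool, first_turn: bool = False) -> str:
    text = ANSWERS.get(intent, "Sorry, I did not understand that.")
    if human_like:
        # Dynamic response delay, roughly proportional to answer length,
        # mimicking typing behavior known from instant messaging services.
        time.sleep(min(0.05 * len(text), 3.0))
        text = humanize(text, first_turn)
    return text

if __name__ == "__main__":
    print(respond("handwashing", human_like=True, first_turn=True))
    print(respond("handwashing", human_like=False))
```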
Measures
For our research model, we included constructs and related items from established literature. We measured
perceived humanness (Holtgraves et al. 2007) and source credibility (McComas and Trumbo 2001) on a 9-
point semantic differential scale. Trust (Yoo and Gretzel 2008), persuasiveness (Lehto et al. 2012), and
intention to comply (Bulgurcu et al. 2010) were measured on a 7-point Likert scale, ranging from 1 (“fully
disagree”) to 7 (“strongly agree”).
All constructs show a sufficient CR (> .70), a sufficient Cronbach’s α (> .70), and a sufficient AVE (> .50) (Cortina 1993; Nunally 1970). As suggested by Gefen and Straub (2005), only factor loadings above .60 were considered. Thus, we removed one item of perceived humanness. A comprehensive overview of the respective constructs and items with their corresponding mean, standard deviation (SD), and factor loading, including Cronbach’s α, composite reliability (CR), and average variance extracted (AVE), is presented in Table 1.
Construct / Item | Mean | SD | Loading
Perceived Humanness (Cronbach’s α = .821, CR = .874, AVE = .585) (Holtgraves et al. 2007) — “The chatbot is…”
extremely inhuman-like – extremely human-like | 4.034 | 1.480 | .680
extremely unskilled – extremely skilled | 4.914 | 1.418 | .804
extremely unthoughtful – extremely thoughtful | 4.526 | 1.190 | .819
extremely impolite – extremely polite | 5.017 | 1.364 | .744
extremely unresponsive – extremely responsive | 4.466 | 1.585 | .111
extremely unengaging – extremely engaging | 4.707 | 1.292 | .842
Trust (Cronbach’s α = .846, CR = .898, AVE = .691) (adapted from Yoo and Gretzel 2008)
The chatbot is reliable. | 4.750 | 1.532 | .881
The chatbot is consistent in the recommendations they provide. | 5.293 | 1.358 | .816
The chatbot does not make mistakes. | 3.871 | 1.618 | .688
The chatbot is dependable. | 4.655 | 1.539 | .920
Persuasiveness (Cronbach’s α = .876, CR = .923, AVE = .801) (Lehto et al. 2012)
The chatbot has an influence on my thinking regarding hygiene. | 3.345 | 1.804 | .887
The chatbot is personally relevant for me. | 3.181 | 1.811 | .916
The chatbot makes me reconsider my thinking about hygiene. | 3.267 | 1.805 | .851
Intention to comply (Cronbach’s α = .951, CR = .976, AVE = .953) (adapted from Bulgurcu et al. 2010)
I will follow the chatbots’ hygiene suggestions. | 4.491 | 1.825 | .977
I will comply with the hygiene recommendations of the chatbot. | 4.371 | 1.949 | .975
Perceived Source Credibility (Cronbach’s α = .861, CR = .898, AVE = .691) (adapted from McComas and Trumbo 2001; Kim et al. 2009) — “The chatbot is…”
Inaccurate – Accurate | 4.897 | 1.517 | .876
Unfair – Fair | 4.810 | 1.631 | .893
Biased – Unbiased | 4.509 | 1.863 | .916
CR = Composite Reliability, AVE = Average Variance Extracted, SD = Standard Deviation
Note: all items were translated to German for the survey.
Table 1. Measurement of Constructs and Items
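For illustration, the following short Python sketch (not the analysis code used in this study) applies the standard formulas for composite reliability and average variance extracted to the standardized loadings of the intention-to-comply items from Table 1; it reproduces the reported values (CR ≈ .976, AVE ≈ .953) up to rounding.

```python
# Minimal sketch (not the analysis code of this study) of how composite
# reliability (CR) and average variance extracted (AVE) follow from the
# standardized loadings reported in Table 1, using the two intention-to-comply
# items as an example.

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    sum_l = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)  # error variance per indicator
    return sum_l ** 2 / (sum_l ** 2 + error)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

intention_to_comply = [0.977, 0.975]
print(round(composite_reliability(intention_to_comply), 3))       # ≈ .976, as reported
print(round(average_variance_extracted(intention_to_comply), 3))  # ≈ .953, as reported
```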
Further, our results show sufficient convergent and discriminant validity (see Table 2). With AVEs > .50, convergent validity is given for all constructs (Hair et al. 2010). Ultimately, the square roots of the AVE (see Table 2, diagonal values) are higher than the correlations between the constructs (Fornell and Larcker 1981). To summarize, our research model indicates sufficient validity and reliability.
Constructs | 1 | 2 | 3 | 4 | 5 | 6
1. Human-like Design | n.a. | | | | |
2. Humanness | .238 | .765 | | | |
3. Intention to Comply | .090 | .358 | .976 | | |
4. Persuasiveness | .008 | .447 | .723 | .895 | |
5. Source Credibility | .060 | .575 | .319 | .449 | .885 |
6. Trust | .078 | .631 | .345 | .479 | .510 | .831
n.a. = not applicable; diagonal values are the square roots of the AVE
Table 2. Inter-Construct Correlations and Validities
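The discriminant validity check can also be expressed compactly in code. The sketch below is illustrative only: it takes the values from Table 2 and verifies, for each reflective construct, that the square root of its AVE exceeds its highest correlation with any other construct.

```python
import numpy as np

# Illustrative check of the Fornell-Larcker criterion using the values from
# Table 2: the square root of each construct's AVE (diagonal) must exceed its
# correlations with all other constructs. The binary human-like design
# treatment (row 1 of Table 2) is excluded because it has no AVE.

constructs = ["Humanness", "Intention to Comply", "Persuasiveness", "Source Credibility", "Trust"]
sqrt_ave = np.array([0.765, 0.976, 0.895, 0.885, 0.831])
corr = np.array([
    [1.000, 0.358, 0.447, 0.575, 0.631],
    [0.358, 1.000, 0.723, 0.319, 0.345],
    [0.447, 0.723, 1.000, 0.449, 0.479],
    [0.575, 0.319, 0.449, 1.000, 0.510],
    [0.631, 0.345, 0.479, 0.510, 1.000],
])

for i, name in enumerate(constructs):
    max_corr = max(corr[i, j] for j in range(len(constructs)) if j != i)
    print(f"{name}: sqrt(AVE) = {sqrt_ave[i]:.3f} > max corr = {max_corr:.3f} -> {sqrt_ave[i] > max_corr}")
```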
Results
We applied the PLS method using SmartPLS 3.3.9 to test our derived hypotheses regarding the relations of a human-like designed CA, perceived humanness, perceived source credibility, trust, persuasiveness, and intention to comply. In our analysis, we used the bootstrapping re-sampling method with 5,000 samples to assess the significance of the paths, as suggested by Chin (1998). For this study, we followed the structural equation modeling approach of Bagozzi and Yi (1988) due to its consideration of measurement errors and the multidimensional structure of theoretical constructs. Because of its advantages in terms of limiting assumptions, the partial least squares estimator is commonly utilized in experimental research (Fombelle et al. 2016). Our results with the respective coefficients, R² values, and significance levels are visualized in Figure 3.
***= p < .001, **= p < .01, *= p < .05
Figure 3. PLS Structural Model (N=116)
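The bootstrapping logic behind the t- and p-values reported below can be sketched as follows. This is a conceptual illustration of the resampling procedure (Chin 1998), not the SmartPLS implementation; `estimate_paths` is a placeholder for the PLS path estimation on a given (sub)sample.

```python
import numpy as np

# Conceptual sketch of the bootstrapping logic (Chin 1998) behind the reported
# t-values: resample respondents with replacement, re-estimate the model on
# each resample, and relate the original estimate to the bootstrap standard
# error. `estimate_paths` is a placeholder for the PLS path estimation carried
# out by SmartPLS; it is not reproduced here.

def bootstrap_t_values(data: np.ndarray, estimate_paths, n_samples: int = 5000, seed: int = 42):
    rng = np.random.default_rng(seed)
    original = estimate_paths(data)                       # path coefficients on the full sample
    boot = np.empty((n_samples, original.shape[0]))
    for i in range(n_samples):
        idx = rng.integers(0, len(data), size=len(data))  # resample rows with replacement
        boot[i] = estimate_paths(data[idx])
    return original / boot.std(axis=0, ddof=1)            # t ≈ estimate / bootstrap standard error
```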
The human-like design of our CA (human-like versus non-human-like) shows a significant impact on users' perception of humanness (β = .238, p = .005). As a result, we can support hypothesis H1, meaning that using social cues in CAs leads to higher levels of perceived humanness. Further, we can support H2, stating that perceived humanness positively influences perceived source credibility (β = .575, p < .001). The analysis also reveals support for hypothesis H3a: perceived humanness has a significant positive impact on trust (β = .505, p < .001). Additionally, our results indicate a positive effect of source credibility on trust (β = .219, p = .028), which supports H3b. In contrast, we found no support for hypothesis H4a, which postulates an impact of perceived humanness on persuasiveness (β = .146, p = .194). In the context of COVID-19, we show that source credibility has a significant influence on persuasiveness (β = .227, p = .037), supporting H4b. Our results also support H4c by indicating a positive and significant influence of trust on persuasiveness (β = .271, p = .034). However, our results do not indicate a significant influence of trust on intention to comply (β = .001, p = .990), and thus we found no support for hypothesis H5a. Finally, we found a significant effect of persuasiveness on intention to comply (β = .725, p < .001), and therefore hypothesis H5b is supported. All hypotheses, including their β-values, t-values, and the derived support, are summarized in Table 3.
Hyp. | Relationship | β-value | t-value | p-value | Support
H1 | Human-like Design → Perceived Humanness | .238 | 2.806 | .005** | Supported
H2 | Perceived Humanness → Source Credibility | .575 | 9.711 | .000*** | Supported
H3a | Perceived Humanness → Trust | .505 | 5.606 | .000*** | Supported
H3b | Source Credibility → Trust | .219 | 2.202 | .028* | Supported
H4a | Perceived Humanness → Persuasiveness | .146 | 1.299 | .194 | Not supported
H4b | Source Credibility → Persuasiveness | .227 | 2.082 | .037* | Supported
H4c | Trust → Persuasiveness | .271 | 2.126 | .034* | Supported
H5a | Trust → Intention to Comply | .001 | 0.012 | .990 | Not supported
H5b | Persuasiveness → Intention to Comply | .725 | 11.301 | .000*** | Supported
Note: all β-values are standardized | *** p < .001, ** p < .01, * p < .05
Table 3. Results of Hypothesis Tests
Based on Cohen (2013), our R² values show substantial explanatory power for source credibility (R² = .330), trust (R² = .431), persuasiveness (R² = .297), and intention to comply (R² = .522), and small explanatory power for perceived humanness (R² = .057). Further, trust has a positive impact on persuasiveness but showed no significant effect on intention to comply. Therefore, we analyzed the specific indirect effect of trust via persuasiveness on intention to comply, which is significant (trust → persuasiveness → intention to comply, β = .197, p = .033); thus, the effect of trust is fully mediated by persuasiveness. Further, our results suggest a mediation between perceived humanness and persuasiveness by trust. However, the specific indirect effect (perceived humanness → trust → persuasiveness, β = .164, p = .065) is not significant.
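For transparency, the specific indirect effect is simply the product of the two direct path coefficients along the mediation chain; the following lines (illustrative only, with values taken from Figure 3 / Table 3) reproduce it up to rounding, while its significance was assessed via bootstrapping in SmartPLS.

```python
# Illustrative arithmetic only: the specific indirect effect of trust on the
# intention to comply is the product of the two direct paths along the
# mediation chain (coefficients taken from Figure 3 / Table 3). Its
# significance was assessed via bootstrapping in SmartPLS, which this
# snippet does not reproduce.
trust_to_persuasiveness = 0.271
persuasiveness_to_intention = 0.725
indirect_effect = trust_to_persuasiveness * persuasiveness_to_intention
print(round(indirect_effect, 3))  # ≈ .196; matches the reported β = .197 up to rounding
```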
Discussion
The aim of this study was to investigate the relationship between a human-like-designed CA and the
intention to comply in the context of hygiene and COVID-19. The results contribute to the current discourse
by advancing the understanding of CAs in healthcare contexts and by providing empirical evidence that
human-like designed CAs impact the intention to comply. We show that in human-machine interactions about COVID-19 hygiene information, users tend to be more convinced to follow recommendations when a human-like design is applied. In this context, we will outline several implications for theory, future
research, and practice.
Implications for Theory and Future Research
Our results indicate that perceived humanness does not directly increase persuasiveness. However, the
effect of perceived humanness on persuasiveness is fully mediated by trust and source credibility. This
implies that the mere presence of perceived humanness is not enough to persuade users. Instead, the
perception of humanness is critical to increase other factors related to persuasiveness. Thus, understanding
which social cues are related to factors critical for persuasiveness is of high relevance for future research.
For instance, a chatbot portraying a local physician (e.g., “Hi, I’m Dr. Jones from your local hospital”) might
be perceived differently than a generic human personality (e.g., “Hi, I’m John and I …”) regarding source
credibility.
Perceived humanness has a strong and highly significant effect on source credibility. From a purely logical perspective, source credibility should be an objective judgement and not be influenced by arbitrary situational factors, i.e., perceived humanness is not a direct indicator of a source’s credibility. However, when viewing this effect through the lens of cognitive biases, the observed influence can be explained. In human-to-human interaction, the so-called “Halo Effect” is the tendency of humans to extrapolate one specific trait to the overall impression of an individual or object (Forgas and Laham 2016), e.g., less appealing student surnames influencing the grading of essays (Erwin et al. 1984; Malouff et al. 2014). Future
research could study the influence of small errors on the source credibility; for instance, when the
information provided by a chatbot is correct and truthful, but it also produces typing errors.
Furthermore, current literature reports a strong influence of trust on users’ compliance, and therefore intention to comply, in COVID-19 contexts (Sarracino et al. 2022). In these studies, users are actively seeking counseling by a CA (i.e., users state symptoms and the CA analyzes whether a COVID-19 infection is likely and what steps to take), and their compliance is driven by trust and not persuasiveness. In our study, intention to comply is driven by persuasiveness and only indirectly by trust (i.e., the effect of trust is mediated by persuasiveness). In this context, we would like to offer the following explanation for this contradiction. The service of our implemented CAs was to educate users about hygiene in relation to COVID-19. Hence, the service was not critical or directly related to a life-threatening situation. In contrast, getting counseling in the context of a potential COVID-19 infection is highly critical and potentially life-threatening. Thus, trust drives intention to comply in critical interactions and persuasiveness in less critical ones. Based on this explanation, we would like to direct future research to investigate where the turning point lies (i.e., what situational factors have to change so that trust is no longer important, but persuasiveness is, and vice versa).
Lastly, we would like to address the issues related to using human-like design to improve persuasiveness.
Specifically, reacting to human-like characteristics is an automatic and mindless behavior (Kim and Sundar 2012). It interacts with users’ beliefs and decisions, without their knowledge, compromising free will. Consequently, using human-like design elements (i.e., social cues) can be seen as unethical. A similar discussion is currently ongoing in the area of digital nudging (i.e., the usage of digital design elements to influence decisions; Mirsch et al. 2017; Schneider et al. 2018). Lembcke et al. (2019) pointed out that the application of digital nudges should only be done when considering freedom of choice (i.e., decisions are not forced by omitting options), goal justification (i.e., the digital nudge is implemented to achieve pro-social, pro-environmental, or pro-self goals), and transparency (i.e., users are aware of the nudges). Following these recommendations, we should be careful about when to implement human-like characteristics to achieve high levels of intention to comply. In the case of preventing and managing a COVID-19 pandemic,
we would judge their application as justified. However, for other contexts, future research should engage in
an extensive discussion on when and how human-like CA design is ethically justified.
Implications for Practice
Our results highlight the importance of designing a CA with human-like features when aiming to achieve
intention to comply in healthcare contexts. Hence, CA designers should consider designing their CA to be
human-like to obtain high levels of intention to comply. Nonetheless, applying them unrestrained and freely
to any context can lead to undesired and unethical side effects (e.g., the human-like design of a CA leading patients to select the wrong treatment option), which should be considered.
Limitations
This study is not free of limitations. Our sample consists exclusively of crowd workers, recruited via a commercial crowd working platform. However, crowd working samples can be considered appropriate for studying general technology use (Paolacci and Chandler 2014). Further, our CA was limited by geographical boundaries since it was only available in German and within Germany.
Regarding intention to comply, our study only focused on short-term effects, leaving open whether users still follow the recommendations in the long term. Further, our CA was designed with generic responses that did not take up and evaluate individual answers. This could open up future research opportunities in design science research to show how CAs should be designed to actually act socially.
Lastly, we recommend using NeuroIS methods to analyze direct brain effects that indicate specific stimuli for affecting behavioral attitudes, such as trust. As a possible starting point, Riedl et al. (2010) show how NeuroIS methods can be applied in this context (e.g., by using functional magnetic resonance imaging (fMRI)). By analyzing root causes in the human mind, this interdisciplinary IS approach can enrich future research directions.
Conclusion
In the context of COVID-19 and similar situations (e.g., natural disasters or a pandemic of a different virus), it is important to communicate guidelines to the general public in a timely and convincing manner. To avoid possible infections, virus transmissions, and fatigue behavior, complying with hygiene recommendations is of vital importance. We conducted a between-subject online experiment to better understand the relation between the human-like design of a CA and users’ intention to comply. Our study contributes to the current discussions by reporting evidence for the influence of a human-like designed CA on the intention to comply in healthcare contexts. Specifically, we find support for a significant impact of a human-like design on the perception of humanness, source credibility, and trust, which are all (directly or indirectly) drivers of the intention to comply. We provide practical implications by underlining the importance of human-like designed CAs and their influence on users’ intention to comply with COVID-19-related recommendations.
References
Abd-Alrazaq, A., Safi, Z., Alajlani, M., Warren, J., Househ, M., and Denecke, K. 2020. “Technical Metrics
Used to Evaluate Health Care Chatbots: Scoping Review,” Journal of Medical Internet Research (22:6).
(https://doi.org/10.2196/18301).
Ahmad, N. A., Hafiz, M., Hamid, C., Zainal, A., Fairuz, M., Rauf, A., and Adnan, Z. 2018. “Review of Chatbots Design Techniques,” International Journal of Computer Applications (181:8).
Almalki, M. 2021. “Exploring the Influential Factors of Consumers’ Willingness Toward Using COVID-19
Related Chatbots: An Empirical Study,” Medical Archives (Sarajevo, Bosnia and Herzegovina) (75:1),
pp. 50–55. (https://doi.org/10.5455/medarh.2021.75.50-55).
Amato, F., Marrone, S., Moscato, V., Piantadosi, G., Picariello, A., and Sansone, C. 2017. “Chatbots Meet
Ehealth: Automatizing Healthcare,” CEUR Workshop Proceedings (1982), pp. 40–49.
Araujo, T. 2018. “Living up to the Chatbot Hype: The Influence of Anthropomorphic Design Cues and
Communicative Agency Framing on Conversational Agent and Company Perceptions,” Computers in
Human Behavior (85), Elsevier Ltd, pp. 183–189. (https://doi.org/10.1016/j.chb.2018.03.051).
Bagozzi, R. P., and Yi, Y. 1988. “On the Evaluation of Structural Equation Models,” Journal of the Academy
of Marketing Science (16:1), Springer, pp. 74–94.
Barakat, A. M., and Kasemy, Z. A. 2020. “Preventive Health Behaviours during Coronavirus Disease 2019
Pandemic Based on Health Belief Model among Egyptians,” Middle East Current Psychiatry (27:1),
Middle East Current Psychiatry. (https://doi.org/10.1186/s43045-020-00051-y).
Beldad, A., Hegner, S., and Hoppen, J. 2016. “The Effect of Virtual Sales Agent (VSA) Gender - Product
Gender Congruence on Product Advice Credibility, Trust in VSA and Online Vendor, and Purchase
Intention,” Computers in Human Behavior (60), Elsevier Ltd, pp. 62–72.
(https://doi.org/10.1016/j.chb.2016.02.046).
Bo, X., and Benbasat, I. 2007. “E-Commerce Product Recommendation Agents: Use, Characteristics, and
Impact,” MIS Quarterly: Management Information Systems (31:1), pp. 137–209.
(https://doi.org/10.2307/25148784).
Boudreau, M.-C., Gefen, D., and Straub, D. W. 2001. “Validation in Information Systems Research: A State-
of-the-Art Assessment,” MIS Quarterly (25:1), Management Information Systems Research Center,
University of Minnesota, pp. 1–16. (https://doi.org/10.2307/3250956).
Bührke, J., Brendel, A. B., Lichtenberg, S., Greve, M., and Mirbabaie, M. 2021. “Is Making Mistakes
Human? On the Perception of Typing Errors in Chatbot Communication,” Proceedings of the Annual
Hawaii International Conference on System Sciences (2020-Janua), pp. 4456–4465.
(https://doi.org/10.24251/hicss.2021.541).
Bulgurcu, B., Cavusoglu, H., and Benbasat, I. 2010. “Information Security Policy Compliance: An Empirical
Study of Rationality-Based Beliefs and Information Security Awareness,” MIS Quarterly: Management
Information Systems (34:SPEC. ISSUE 3), pp. 523–548. (https://doi.org/10.2307/25750690).
Cafaro, A., Vilhjalmsson, H. H., and Bickmore, T. 2016. “First Impressions in Human-Agent Virtual
Encounters,” ACM Transactions on Computer-Human Interaction (24:4), pp. 140.
Casas, J., Mugellini, E., and Khaled, O. A. 2018. “Food Diary Coaching Chatbot,” in Proceedings of the 2018
ACM International Joint Conference and 2018 International Symposium on Pervasive and
Ubiquitous Computing and Wearable Computers, pp. 1676–1680.
CDC. 2020. “COVID-19 Testing: What You Need to Know | CDC.”
(https://www.cdc.gov/coronavirus/2019-ncov/symptoms-testing/testing.html, accessed April 26,
2022).
Chin, W. W. 1998. “Commentary: Issues and Opinion on Structural Equation Modeling,” MIS Quarterly,
JSTOR, pp. vii–xvi.
Clemm Von Hohenberg, B., and Guess, A. M. 2022. “When Do Sources Persuade? The Effect of Source
Credibility on Opinion Change,” Journal of Experimental Political Science, Cambridge University
Press, pp. 1–15. (https://doi.org/10.1017/XPS.2022.2).
Cohen, J. 2013. Statistical Power Analysis for the Behavioral Sciences, Routledge.
Cortina, J. M. 1993. “What Is Coefficient Alpha? An Examination of Theory and Applications,” Journal of
Applied Psychology. (https://doi.org/10.1037/0021-9010.78.1.98).
Cowell, A. J., and Stanney, K. M. 2005. “Manipulation of Non-Verbal Interaction Style and Demographic
Embodiment to Increase Anthropomorphic Computer Character Credibility,” International Journal of
Human Computer Studies. (https://doi.org/10.1016/j.ijhcs.2004.11.008).
Cui, T., Peng, X., and Wang, X. 2020. “Understanding the Effect of Anthropomorphic Design: Towards
More Persuasive Conversational Agents,” ICIS 2020 Proceedings.
(https://aisel.aisnet.org/icis2020/user_behaviors/user_behaviors/9).
Dacey, M. 2017. “Anthropomorphism as Cognitive Bias,” Philosophy of Science.
(https://doi.org/10.1086/694039).
Dehnert, M., and Mongeau, P. A. 2022. “Persuasion in the Age of Artificial Intelligence (AI): Theories and Complications of AI-Based Persuasion,” (00), pp. 1–18.
Dennis, A. R., Kim, A., Rahimi, M., and Ayabakan, S. 2020. “User Reactions to COVID-19 Screening
Chatbots from Reputable Providers,” Journal of the American Medical Informatics Association (27:11),
J Am Med Inform Assoc, pp. 1727–1731. (https://doi.org/10.1093/jamia/ocaa167).
Diederich, S., Brendel, A. B., Morana, S., and Kolbe, L. 2022. “On the Design of and Interaction with
Conversational Agents: An Organizing and Assessing Review of Human-Computer Interaction
Research,” Journal of the Association for Information Systems (23:1), pp. 96–138.
(https://aisel.aisnet.org/jais/vol23/iss1/9).
Diederich, S., Lembcke, T.-B., Brendel, A. B., and Kolbe, L. M. 2020. “Not Human After All: Exploring the
Impact of Response Failure on User Perception of Anthropomorphic Conversational Service Agents,”
in Proceedings of the European Conference on Information Systems (ECIS).
Diederich, S., Lichtenberg, S., Brendel, A. B., and Trang, S. 2019. “Promoting Sustainable Mobility Beliefs
with Persuasive and Anthropomorphic Design: Insights from an Experiment with a Conversational
Agent,” in Proceedings of the International Conference on Information Systems (ICIS), Munich,
Germany, pp. 017.
Doney, P. M., and Cannon, J. P. 1997. “An Examination of the Nature of Trust in Buyer-Seller Relationships,” Journal of Marketing (61:2), pp. 35–51.
Drozd, F., Lehto, T., and Oinas-Kukkonen, H. 2012. “Exploring Perceived Persuasiveness of a Behavior Change Support System: A Structural Model,” in Persuasive Technology. Design for Health and Safety, M. Bang and E. L. Ragnemalm (eds.), Berlin, Heidelberg: Springer Berlin Heidelberg, pp. 157–168.
Epley, N., Waytz, A., and Cacioppo, J. T. 2007. “On Seeing Human: A Three-Factor Theory of
Anthropomorphism,” Psychological Review. (https://doi.org/10.1037/0033-295X.114.4.864).
Erwin, P. G., and Calev, A. 1984. “The Influence of Christian Name Stereotypes on the Marking of Children’s Essays,” British Journal of Educational Psychology (54:2), pp. 223–227. (https://doi.org/10.1111/j.2044-8279.1984.tb02583.x).
(https://doi.org/10.1111/j.2044-8279.1984.tb02583.x).
Espinoza, J., Crown, K., and Kulkarni, O. 2020. “A Guide to Chatbots for COVID-19 Screening at Pediatric
Health Care Facilities,” JMIR Public Health and Surveillance (6:2), p. e18808. (https://doi.org/10.2196/18808).
European-Centre-for-Disease-Prevention-and-Control. 2020. “Guidelines for the Use of Non-
Pharmaceutical Measures to Delay and Mitigate the Impact of 2019-NCoV,” ECDC, European Centre
for Disease Prevention and Control, Stockholm, Sweden.
Fadhil, A. 2018. A Conversational Interface to Improve Medication Adherence: Towards AI Support in
Patient’s Treatment. (http://arxiv.org/abs/1803.09844).
Feine, J., Gnewuch, U., Morana, S., and Maedche, A. 2019. “A Taxonomy of Social Cues for Conversational
Agents,” International Journal of Human Computer Studies (132), pp. 138–161.
(https://doi.org/10.1016/j.ijhcs.2019.07.009).
Feldman, J. M., and Lynch, J. G. 1988. “Self-Generated Validity and Other Effects of Measurement on
Belief, Attitude, Intention, and Behavior,” Journal of Applied Psychology (73:3), pp. 421–435.
(https://doi.org/10.1037/0021-9010.73.3.421).
Fogg, B. J. 2002. “Persuasive Technology: Using Computers to Change What We Think and Do,” Persuasive
Technology: Using Computers to Change What We Think and Do, pp. 1–282.
(https://doi.org/10.1016/B978-1-55860-643-2.X5000-8).
Følstad, A., Nordheim, C. B., and Bjørkli, C. A. 2018. “What Makes Users Trust a Chatbot for Customer
Service? An Exploratory Interview Study,” Lecture Notes in Computer Science (Including Subseries
Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (11193 LNCS), Springer,
Cham, pp. 194–208. (https://doi.org/10.1007/978-3-030-01437-7_16).
Fombelle, P. W., Bone, S. A., and Lemon, K. N. 2016. “Responding to the 98%: Face-Enhancing Strategies
for Dealing with Rejected Customer Ideas,” Journal of the Academy of Marketing Science (44:6),
Springer, pp. 685–706.
Forgas, J. P., and Laham, S. M. 2016. “Halo Effects,” in Cognitive Illusions, Psychology Press, pp. 286–300.
Fornell, C., and Larcker, D. F. 1981. “Evaluating Structural Equation Models with Unobservable Variables
and Measurement Error,” Journal of Marketing Research. (https://doi.org/10.2307/3151312).
Franzen, A., and Wöhner, F. 2021. “Fatigue during the COVID-19 Pandemic: Evidence of Social Distancing
Adherence from a Panel Study of Young Adults in Switzerland,” PLoS ONE (16:12 December 2021).
(https://doi.org/10.1371/journal.pone.0261276).
Gefen, D., and Straub, D. 2005. “A Practical Guide To Factorial Validity Using PLS-Graph: Tutorial And
Annotated Example,” Communications of the Association for Information Systems (16), pp. 91–109.
(https://doi.org/10.17705/1cais.01605).
Gefen, D., and Straub, D. W. 2004. “Consumer Trust in B2C E-Commerce and the Importance of Social
Presence: Experiments in e-Products and e-Services,” Omega (32:6), pp. 407–424.
(https://doi.org/10.1016/j.omega.2004.01.006).
Gilly, M. C., Graham, J. L., Wolfinbarger, M. F., and Yale, L. J. 1998. “A Dyadic Study of Interpersonal
Information Search,” Journal of the Academy of Marketing Science (26:2), pp. 83–100.
(https://doi.org/10.1177/0092070398262001).
Gjengedal, E., Ekra, E. M., Hol, H., Kjelsvik, M., Lykkeslet, E., Michaelsen, R., Orøy, A., Skrondal, T.,
Sundal, H., Vatne, S., and Wogn-Henriksen, K. 2013. “Vulnerability in Health Care - Reflections on
Encounters in Every Day Practice,” Nursing Philosophy (14:2), Blackwell Publishing Ltd, pp. 127–138.
(https://doi.org/10.1111/J.1466-769X.2012.00558.X).
Gnewuch, U., Morana, S., Adam, M. T. P., and Maedche, A. 2018. “Faster Is Not Always Better:
Understanding the Effect of Dynamic Response Delays in Human-Chatbot Interaction,” in 26th
European Conference on Information Systems: Beyond Digitization - Facets of Socio-Technical
Change, ECIS 2018.
Go, E., and Sundar, S. S. 2019. “Humanizing Chatbots: The Effects of Visual, Identity and Conversational
Cues on Humanness Perceptions,” Computers in Human Behavior (97), Elsevier, pp. 304–316. (https://doi.org/10.1016/j.chb.2019.01.020).
Gong, L. 2008. “How Social Is Social Responses to Computers? The Function of the Degree of
Anthropomorphism in Computer Representations,” Computers in Human Behavior (24:4), pp. 1494–1509.
Guhr, N., Lebek, B., and Breitner, M. H. 2019. “The Impact of Leadership on Employees’ Intended Information Security Behaviour: An Examination of the Full-Range Leadership Theory,” Information Systems Journal (29:2), pp. 340–362. (https://doi.org/10.1111/isj.12202).
Hair, J., Black, W. C., Babin, B. J., Anderson, R. E., and Tatham, R. L. 2010. Multivariate Data Analysis, 6th ed., Upper Saddle River, NJ: Prentice-Hall.
El Hefny, W., El Bolock, A., Herbert, C., and Abdennadher, S. 2021. “Chase Away the Virus: A Character-
Based Chatbot for COVID-19,” SeGAH 2021 - 2021 IEEE 9th International Conference on Serious
Games and Applications for Health (August). (https://doi.org/10.1109/SEGAH52098.2021.9551895).
Hildebrand, C., and Bergner, A. 2019. “AI-Driven Sales Automation: Using Chatbots to Boost Sales,” NIM
Marketing Intelligence Review (11:2), Walter de Gruyter GmbH, pp. 36–41.
(https://doi.org/10.2478/NIMMIR-2019-0014).
Hojat, M., Louis, D. Z., Maxwell, K., Markham, F., Wender, R., and Gonnella, J. S. 2010. “Patient
Perceptions of Physician Empathy, Satisfaction with Physician, Interpersonal Trust, and Compliance,”
International Journal of Medical Education (1), IJME, pp. 83–87.
(https://doi.org/10.5116/ijme.4d00.b701).
Holtgraves, T. M., Ross, S. J., Weywadt, C. R., and Han, T. L. 2007. “Perceiving Artificial Social Agents,”
Computers in Human Behavior (23:5), pp. 2163–2174. (https://doi.org/10.1016/j.chb.2006.02.017).
Hovland, C., Janis, I., and Kelley, H. 1953. Communication and Persuasion.
(https://psycnet.apa.org/record/1953-15071-000).
Howard, J. A., and Kunda, Z. 2000. “Social Cognition: Making Sense of People,” Contemporary Sociology.
(https://doi.org/10.2307/2654104).
Jordan, J. J., Yoeli, E., and Rand, D. G. 2021. “Don’t Get It or Don’t Spread It: Comparing Self-Interested
versus Prosocial Motivations for COVID-19 Prevention Behaviors,” Scientific Reports (11:1), Nature
Publishing Group UK, pp. 1–17. (https://doi.org/10.1038/s41598-021-97617-5).
Judson, T. J., Odisho, A. Y., Young, J. J., Bigazzi, O., Steuer, D., Gonzales, R., and Neinstein, A. B. 2020.
“Implementation of a Digital Chatbot to Screen Health System Employees during the COVID -19
Pandemic,” Journal of the American Medical Informatics Association (27:9), Oxford University Press,
pp. 1450–1455.
Kim, D. J., Ferrin, D. L., and Rao, H. R. 2009. “Trust and Satisfaction, Two Stepping Stones for Successful
e-Commerce Relationships: A Longitudinal Exploration,” Information Systems Research (20:2),
INFORMS, pp. 237–257.
Kim, W., and Ryoo, Y. 2022. “Hypocrisy Induction: Using Chatbots to Promote COVID-19 Social
Distancing,” Cyberpsychology, Behavior, and Social Networking (25:1), pp. 27–36.
(https://doi.org/10.1089/cyber.2021.0057).
Kim, Y., and Sundar, S. S. 2012. “Anthropomorphism of Computers: Is It Mindful or Mindless?,” Computers
in Human Behavior (28:1), Elsevier Ltd, pp. 241–250. (https://doi.org/10.1016/j.chb.2011.09.006).
de Kleijn, R., Wijnen, M., and Poletiek, F. 2019. “The Effect of Context-Dependent Information and
Sentence Constructions on Perceived Humanness of an Agent in a Turing Test,” Knowledge-Based Systems (163), pp. 794–799. (https://doi.org/10.1016/j.knosys.2018.10.006).
Kulms, P., and Kopp, S. 2018. “A Social Cognition Perspective on HumanComputer Trust: The Effect of
Perceived Warmth and Competence on Trust in Decision-Making With Computers,” Frontiers in
Digital Humanities (0), Frontiers, p. 14. (https://doi.org/10.3389/FDIGH.2018.00014).
Kumkale, G. T., Albarracín, D., and Seignourel, P. J. 2010. “The Effects of Source Credibility in the Presence
or Absence of Prior Attitudes: Implications for the Design of Persuasive Communication Campaigns,”
Journal of Applied Social Psychology (40:6), pp. 1325–1356. (https://doi.org/10.1111/j.1559-
1816.2010.00620.x).
Kyngäs, H., Duffy, M. E., and Kroll, T. 2000. “Conceptual Analysis of Compliance.,” Journal of Clinical
Nursing (9:1), pp. 5–12.
Lang, H., Seufert, T., Klepsch, M., Minker, W., and Nothdurft, F. 2013. “Are Computers Still Social Actors?,”
in Conference on Human Factors in Computing Systems - Proceedings.
(https://doi.org/10.1145/2468356.2468510).
Lankton, N. K., McKnight, D. H., and Tripp, J. 2015. “Technology, Humanness, and Trust: Rethinking Trust
in Technology,” Journal of the Association for Information Systems (16:10), p. 1.
Laranjo, L., Dunn, A. G., Tong, H. L., Kocaballi, A. B., Chen, J., Bashir, R., Surian, D., Gallego, B., Magrabi,
F., and Lau, A. Y. S. 2018. “Conversational Agents in Healthcare: A Systematic Review,” Journal of the
American Medical Informatics Association (25:9), Oxford University Press, pp. 1248–1258.
Lee, J. D., and See, K. A. 2004. “Trust in Automation: Designing for Appropriate Reliance,” Human Factors
(46:1), SAGE Publications Sage UK: London, England, pp. 50–80.
Lee, S. Y., and Choi, J. 2017. “Enhancing User Experience with Conversational Agent for Movie
Recommendation: Effects of Self-Disclosure and Reciprocity,” International Journal of Human-Computer
Studies (103), Elsevier, pp. 95–105.
Lehto, T., Oinas-Kukkonen, H., and Drozd, F. 2012. “Factors Affecting Perceived Persuasiveness of a
Behavior Change Support System,” in International Conference on Information Systems, ICIS 2012.
Lembcke, T. B., Engelbrecht, N., Brendel, A. B., Herrenkind, B., and Kolbe, L. M. 2019. “Towards a Unified
Understanding of Digital Nudging by Addressing Its Analog Roots,” Proceedings of the 23rd Pacific
Asia Conference on Information Systems: Secure ICT Platform for the 4th Industrial Revolution,
PACIS 2019 (May).
Liu, B., and Sundar, S. S. 2018. “Should Machines Express Sympathy and Empathy? Experiments with a
Health Advice Chatbot,” Cyberpsychology, Behavior, and Social Networking (21:10), Mary Ann
Liebert, Inc., pp. 625–636.
Lowry, P. B., Vance, A., Moody, G., Beckman, B., and Read, A. 2008. “Explaining and Predicting the Impact
of Branding Alliances and Web Site Quality on Initial Consumer Trust of E-Commerce Web Sites,”
Journal of Management Information Systems (24:4), pp. 199–224.
(https://doi.org/10.2753/MIS0742-1222240408).
Lowry, P. B., Zhang, D., and Wu, D. 2014. “Understanding Patients’ Compliance Behavior in a Mobile
Healthcare System: The Role of Trust and Planned Behavior,” in International Conference on
Information Systems (ICIS 2014), Auckland, New Zealand, December, pp. 14–17.
Lu, X., and Zhang, R. 2019. “Impact of Physician-Patient Communication in Online Health Communities
on Patient Compliance: Cross-Sectional Questionnaire Study,” Journal of Medical Internet Research
(21:5), JMIR Publications Inc., Toronto, Canada, p. e12891.
MacIntyre, C. R., Nguyen, P. Y., Chughtai, A. A., Trent, M., Gerber, B., Steinhofel, K., and Seale, H. 2021.
“Mask Use, Risk-Mitigation Behaviours and Pandemic Fatigue during the COVID-19 Pandemic in Five
Cities in Australia, the UK and USA: A Cross-Sectional Survey,” International Journal of Infectious
Diseases (106), Elsevier, pp. 199–207. (https://doi.org/10.1016/J.IJID.2021.03.056).
Malouff, J. M., Stein, S. J., Bothma, L. N., Coulter, K., and Emmerton, A. J. 2014. “Preventing Halo Bias in
Grading the Work of University Students,” Cogent Psychology (1), p. 988937.
(https://doi.org/10.1080/23311908.2014.988937).
McComas, K. A., and Trumbo, C. W. 2001. “Source Credibility in Environmental Health-Risk Controversies:
Application of Meyer’s Credibility Index,” Risk Analysis (21:3), pp. 467–480.
(https://doi.org/10.1111/0272-4332.213126).
Michigan-Government. 2021. “Communications Resources Toolkit.”
Miller, N. 1965. “Involvement and Dogmatism as Inhibitors of Attitude Change,” Journal of Experimental
Social Psychology (1:2), Academic Press, pp. 121–132. (https://doi.org/10.1016/0022-1031(65)90040-5).
Milliman, R. E., and Fugate, D. L. 1988. “Using Trust-Transference as a Persuasion Technique: An
Empirical Field Investigation,” Journal of Personal Selling and Sales Management (8:2), pp. 1–7.
(https://doi.org/10.1080/08853134.1988.10754486).
Miner, A. S., Laranjo, L., and Kocaballi, A. B. 2020. “Chatbots in the Fight against the COVID-19 Pandemic,”
NPJ Digital Medicine (3:1), Nature Publishing Group, pp. 1–4.
Mirsch, T., Lehrer, C., and Jung, R. 2017. “Digital Nudging: Altering User Behavior in Digital
Environments,” Proceedings Der 13. Internationalen Tagung Wirtschaftsinformatik (WI 2017), pp.
634–648.
Mishel, M. H. 1981. “The Measurement of Uncertainty in Illness,” Nursing Research (30:5), pp. 258–263.
(https://doi.org/10.1097/00006199-198109000-00002).
Murphy, J., and Coster, G. 1997. “Issues in Patient Compliance,” Drugs, Springer International Publishing,
pp. 797–800. (https://doi.org/10.2165/00003495-199754060-00002).
Nass, C., and Moon, Y. 2000. “Machines and Mindlessness: Social Responses to Computers,” Journal of
Social Issues (56:1), pp. 81–103. (https://doi.org/10.1111/0022-4537.00153).
Nass, C., Steuer, J., and Tauber, E. R. 1994. “Computers Are Social Actors,” in Proceedings of the ACM CHI
Conference on Human Factors in Computing Systems, Boston, USA, p. 204.
Nunally, J. C. 1970. “Introduction to Psychological Measurement,” Acta Psychologica.
Nunamaker, J. F., Derrick, D. C., Elkins, A. C., Burgoon, J. K., and Patton, M. W. 2011. “Embodied
Conversational Agent-Based Kiosk for Automated Interviewing,” Journal of Management Information
Systems (28:1), pp. 17–48.
Oh, K.-J., Lee, D., Ko, B., and Choi, H.-J. 2017. “A Chatbot for Psychiatric Counseling in Mental Healthcare
Service Based on Emotional Dialogue Analysis and Sentence Generation,” in 2017 18th IEEE
International Conference on Mobile Data Management (MDM), pp. 371–375.
(https://doi.org/10.1109/MDM.2017.64).
Paolacci, G., and Chandler, J. 2014. “Inside the Turk: Understanding Mechanical Turk as a Participant
Pool,” Current Directions in Psychological Science. (https://doi.org/10.1177/0963721414531598).
Park, S. H., Thieme, A., Han, J., Lee, S., Rhee, W., and Suh, B. 2021. “I Wrote as If I Were Telling a Story to
Someone I Knew: Designing Chatbot Interactions for Expressive Writing in Mental Health,” DIS 2021
- Proceedings of the 2021 ACM Designing Interactive Systems Conference: Nowhere and Everywhere,
pp. 926–941. (https://doi.org/10.1145/3461778.3462143).
Paskojevic, D. 2014. Applying Social Presence Theory: What Effect Does Lifestyle Imagery Have on
Website Persuasiveness?
Petty, R. E., and Briñol, P. 2010. “Attitude Change,” in Advanced Social Psychology: The State of the
Science, New York, NY, US: Oxford University Press, pp. 217–259.
Pornpitakpan, C. 2004. “The Persuasiveness of Source Credibility: A Critical Review of Five Decades’
Evidence,” Journal of Applied Social Psychology (34:2), pp. 243–281. (https://doi.org/10.1111/j.1559-1816.2004.tb02547.x).
Riedl, R., Hubert, M., and Kenning, P. 2010. “Are There Neural Gender Differences in Online Trust? An
fMRI Study on the Perceived Trustworthiness of eBay Offers,” MIS Quarterly (34), pp. 397–428.
(https://doi.org/10.2307/20721434).
Sarracino, F., Greyling, T., O’Connor, K., Peroni, C., and Rossouw, S. 2022. Trust Predicts Compliance with
COVID-19 Containment Policies: Evidence from Ten Countries Using Big Data. (www.iza.org).
Schneider, C., Weinmann, M., and Brocke, J. Vom. 2018. “Digital Nudging: Guiding Online User Choices
through Interface Design: Designers Can Create Designs That Nudge Users toward the Most Desirable
Option,” Communications of the ACM (61:7), pp. 67–73. (https://doi.org/10.1145/3213765).
Schuetzler, R. M., Grimes, G. M., and Giboney, J. S. 2018. “An Investigation of Conversational Agent
Relevance, Presence, and Engagement,” in Proceedings of the Americas Conference on Information
Systems (AMCIS), New Orleans, USA, pp. 1–10.
Seeger, A.-M., Pfeiffer, J., and Heinzl, A. 2018. “Designing Anthropomorphic Conversational Agents:
Development and Empirical Evaluation of a Design Framework,” in Proceedings of the International
Conference on Information Systems (ICIS), San Francisco, USA, pp. 1–17.
Segal, J. 1994. “Patient Compliance, the Rhetoric of Rhetoric, and the Rhetoric of Persuasion,” Rhetoric
Society Quarterly (23:3–4), Routledge, pp. 90–102. (https://doi.org/10.1080/02773949409390998).
Shearston, J. A., Martinez, M. E., Nunez, Y., and Hilpert, M. 2021. “Social-Distancing Fatigue: Evidence
from Real-Time Crowd-Sourced Traffic Data,” Science of The Total Environment (792), Elsevier, p.
148336. (https://doi.org/10.1016/J.SCITOTENV.2021.148336).
Sillice, M. A., Morokoff, P. J., Ferszt, G., Bickmore, T., Bock, B. C., Lantini, R., and Velicer, W. F. 2018.
“Using Relational Agents to Promote Exercise and Sun Protection: Assessment of Participants’
Experiences With Two Interventions,” Journal of Medical Internet Research (20:2), p. e48.
(https://doi.org/10.2196/JMIR.7640).
Tan, S. M., and Liew, T. W. 2020. “Designing Embodied Virtual Agents as Product Specialists in a Multi-
Product Category E-Commerce: The Roles of Source Credibility and Social Presence,” International
Journal of Human-Computer Interaction (36:12), Taylor & Francis, pp. 1136–1149.
(https://doi.org/10.1080/10447318.2020.1722399).
Toader, D. C., Boca, G., Toader, R., Măcelaru, M., Toader, C., Ighian, D., and Rădulescu, A. T. 2020. “The
Effect of Social Presence and Chatbot Errors on Trust,” Sustainability (Switzerland).
(https://doi.org/10.3390/SU12010256).
Venkatesh, A., and Edirappuli, S. 2020. “Social Distancing in Covid-19: What Are the Mental Health
Implications?,” BMJ (369), British Medical Journal Publishing Group.
Verhagen, T., van Nes, J., Feldberg, F., and van Dolen, W. 2014. “Virtual Customer Service Agents: Using
Social Presence and Personalization to Shape Online Service Encounters,” Journal of Computer-
Mediated Communication (19:3), Oxford Academic, pp. 529–545.
(https://doi.org/10.1111/JCC4.12066).
de Visser, E. J., Monfort, S. S., McKendrick, R., Smith, M. A. B., McKnight, P. E., Krueger, F., and
Parasuraman, R. 2016. “Almost Human: Anthropomorphism Increases Trust Resilience in Cognitive
Agents,” Journal of Experimental Psychology: Applied (22:3), pp. 331–349.
Weizenbaum, J. 1966. “ELIZA—a Computer Program for the Study of Natural Language Communication
between Man and Machine,” Communications of the ACM (9:1), pp. 36–45.
Westerman, D., Cross, A. C., and Lindmark, P. G. 2019. “I Believe in a Thing Called Bot: Perceptions of the
Humanness of ‘Chatbots,’” Communication Studies (70:3), Routledge, pp. 295–312.
(https://doi.org/10.1080/10510974.2018.1557233).
Wiener, J. L., and Mowen, J. C. 1986. “Source Credibility: On the Independent Effects of Trust and
Expertise,” ACR North American Advances.
World-Health-Organization. 2021. “WHO Health Alert Brings COVID-19 Facts to Billions via WhatsApp.”
Yamagishi, T., and Yamagishi, M. 1994. “Trust and Commitment in the United States and Japan,”
Motivation and Emotion (18:2), Kluwer Academic Publishers-Plenum Publishers, pp. 129–166.
(https://doi.org/10.1007/BF02249397).
Yen, C., and Chiang, M. C. 2021. “Trust Me, If You Can: A Study on the Factors That Influence Consumers’
Purchase Intention Triggered by Chatbots Based on Brain Image Evidence and Self-Reported
Assessments,” Behaviour and Information Technology (40:11), Taylor & Francis, pp. 1177–1194.
(https://doi.org/10.1080/0144929X.2020.1743362).
Yoo, K.-H., and Gretzel, U. 2008. “The Influence of Perceived Credibility on Preferences for Recommender
Systems as Sources of Advice,” Information Technology & Tourism (10:2), pp. 133–146.
(https://doi.org/10.3727/109830508784913059).