AI Agents as Team Members:
Effects on Satisfaction, Conflict, Trustworthiness, and Willingness to Work With
Alan R. Dennis
ardennis@iu.edu
Akshat Lakhiwal
aklakh@iu.edu
Agrim Sachdeva
agsach@iu.edu
Operations and Decision Technologies Department
Kelley School of Business
Indiana University
1309 E 10th St.
Bloomington IN 47405
Alan R. Dennis is a Professor of Information Systems and holds the John T. Chambers Chair of Internet
Systems in the Kelley School of Business at Indiana University. He was named a Fellow of the
Association for Information Systems in 2012, and won the LEO Award in 2021. His research focuses on
four main themes: team collaboration; artificial intelligence; fake news on social media; and
cybersecurity. Professor Dennis has written more than 150 research papers, and has won numerous
awards for his theoretical and applied research. He is ranked the third most published Information
Systems researcher over the last 30 years, and a 2020 analysis of citation data since 1990 placed him in
the top 1% of the most influential researchers in the world, across all scientific disciplines. His research
has been reported in the popular press almost 1000 times, including the Wall Street Journal, Forbes, USA
Today, The Atlantic, CBS, Fox Business Network, PBS, Canada’s CBC and CTV, UK’s Daily Mail and
the Telegraph. He is a Past President of the Association for Information Systems, and also served as Vice
President for Conferences.
Akshat Lakhiwal is a Ph.D. candidate in information systems at the Kelley School of Business, Indiana
University. His research uses methodologies such as experimentation, econometrics, and machine
learning to examine unintended consequences of individuals’ psychological processes and subsequent
reactions emerging from their interaction with information technologies (ITs). He has served as the
managing editor for MIS Quarterly Executive and was also awarded the Doctoral Student Service Award
by the Association for Information Systems in 2021. His research has been accepted for publication at MIS
Quarterly and has been presented at several conferences including the Conference on Information
Systems and Technology (CIST), Workshop on Information Technologies and Systems (WITS), NeuroIS
Retreat, and Annual INFORMS Meeting.
Agrim Sachdeva is a Ph.D. candidate in information systems at the Kelley School of Business, Indiana
University. His research focuses on artificial intelligence and cybersecurity, with an emphasis on how
artificial intelligence is transforming business processes. His research has been presented at
venues such as MISQ Author Development Workshop, IEEE International Conference on Data Mining,
and the Annual INFORMS Meeting.
Abstract
Organizations are beginning to deploy artificial intelligence (AI) agents as members of
virtual teams to help manage information, coordinate team processes, and perform simple tasks.
How will team members perceive these AI team members and will they be willing to work with
them? We conducted a 2x2x2 lab experiment that manipulated the type of team member (human
or AI), their performance (high or low), and the performance of other team members (high or
low). AI team members were perceived to have higher ability and integrity but lower
benevolence, which led to no differences in trustworthiness or willingness to work with them.
However, the presence of an AI team member resulted in lower process satisfaction. When the
AI team member performed well, participants perceived less conflict compared to a human team
member with the same performance, but there were no differences in perceived conflict when it
performed poorly. There were no other interactions with performance, indicating that the AI
team member was judged similarly to humans, irrespective of variations in performance; there
was no evidence of algorithm aversion. Our research suggests that AI team members are likely to
be accepted into teams, meaning that many old collaboration research questions may need to be
reexamined to consider AI team members.
Keywords: artificial intelligence, digital human, virtual human, collaboration technology,
algorithm aversion, algorithm appreciation, prescriptive agent, group, team
Introduction
Artificial intelligence (AI) is defined as "the ability of a machine to perform cognitive
functions that we associate with human minds, such as perceiving, reasoning, learning,
interacting with the environment, problem-solving, decision-making, and even demonstrating
creativity" [123, p. iii]. AI is beginning to transform how businesses operate and how individuals
work, live, play, and learn [7, 52, 145], such as Amazon's recommendations and Cogito's
customer relationship management systems [6, 78, 145]. Such rapid dispersion of AI, coupled
with its expanding functionalities, has further provided an impetus to developing AI agents that
possess human-like voice or text-based communication capabilities and can disrupt conventional
social exchanges in interpersonal environments [7, 19, 42, 95, 102, 110, 122, 143, 147].
AI-enabled systems have traditionally found application in diverse organizational
contexts such as call centers and assembly lines. However, recent advancements in machine
learning and computational technologies have caused a gradual shift in the conceptualization of
AI from being a simple automation tool to an intelligent artifact that can facilitate collaboration
[7, 42, 123, 145]. These AI-enabled agentic artifacts differ from traditional IT as they are "no
longer passive tools waiting to be used, are not always subordinate to human agents, and can
assume responsibility for tasks with ambiguous requirements and for seeking optimal outcomes
under uncertainty" [7, p. 315]. Such AI agents transform the traditionally held view of IT as a
passive tool into a proactive entity that shares common goals, exhibits affective states, delegates
and supervises work, and elicits perceptions of collective agency [7, 60, 75, 110, 147].
In many ways, the conceptualized functionalities of such AI agents resemble those of
popular personal assistants (e.g., Siri) [92], suggesting how they may be deployed in virtual
teams as an AI team member. For example, recent AI agents developed by Soul Machines1 and
Digital Domain2 show that AI team members may participate in collaborative settings by
exhibiting skills such as task management, information search, and support [1, 7, 52, 95, 96, 104,
134, 148, 151]. However, virtual collaboration is a highly interactive social environment, where
the quality of interactions and outcomes is often influenced by team members' perceptions of
individual and team processes such as trust, satisfaction, and conflict [105]. The entry of AI team
members into traditionally human-driven roles could thus influence collaboration by disrupting
such interpersonal aspects of teamwork, highlighting the need for pre-emptive assessment of the
deployment of AI team members [7, 84, 95, 104, 114, 134, 136, 148, 151].
Our research focuses on this important concern caused by the potentially disruptive
influence—both good and bad—of adding AI team members into a virtual team. We draw upon
the research of Marks et al. [105] to focus on interpersonal processes that are not only critical in
team settings, but are also often affected by the addition of new team members, thereby
influencing team members' willingness to work with them. As teams are multilevel entities, team
members may experience such perceptions and associated outcomes towards an individual team
member (e.g., trust and willingness to work with an individual) as well as towards the team as a
collective (e.g., team conflict and process satisfaction) [108]. Thus, we focus on team members’
perceptions of an individual team member in terms of trustworthiness [107], and conflict and
satisfaction to capture their perceptions toward the team as a whole [44, 105]. Doing so allows us
to theorize how team members’ perceptions of the collective team as well as a focal team
member may be affected based on whether the team member is a human or an AI agent.
1 Soul Machines. (2022, March 3). Retrieved March 7, 2022, from https://www.soulmachines.com/
2 “Autonomous humans: Technology: Digital Domain.” 2021. Digital Domain (available at
https://digitaldomain.com/technology/autonomous-humans/; retrieved November 1, 2021).
Human perceptions are fraught with biases. For example, people prefer AI algorithms
over human experts but judge AI more harshly than humans when it makes the same mistakes
[53, 62, 81, 100, 130]. It is an open question whether such algorithm aversion also applies to AI
team members. Hence, we examine the effects of the variability in performance of an AI team
member on individual and team-related perceptions, relative to a human agent. As performance
is often assessed relative to the performance of others observed at the same time [91, 133], we
examine changes in perceptions of an AI team member due to variability in the performance of
other team members. Specifically, our research questions (RQs) are:
RQ1: Are perceptions of an AI team member different from perceptions of a human team
member exhibiting the same performance?
RQ2: Are perceptions of team processes different for teams with an AI team member versus
those comprised entirely of humans?
RQ3: Does poor performance by an AI team member have stronger or weaker effects on
these perceptions than poor performance by a human team member, and is this
perception influenced by the performance of other human team members?
We conducted a vignette-based lab experiment by manipulating the type of focal team
member (human or AI), its performance (high or low), and the performance of other team
members (high or low) to seek answers to our three RQs. Results suggest that the AI team
member was perceived to have greater ability and integrity but lower benevolence, resulting in
no differences in overall trustworthiness or willingness to work with it relative to a human team
member. Among team-level processes, the AI team member elicited lower satisfaction than a
human team member. When the performance of the manipulated (focal) team member was high,
the AI team member elicited lower perceptions of conflict than a human. However, no difference
in perceived conflict was observed when both AI and human team members' performance was
poor. Altogether, participants exhibited biases both for and against the AI team member.
However, variability in the performance of the focal agent was assessed in the same way for both
human and AI team members for all processes excluding conflict. Therefore, our findings reveal
that algorithm aversion likely does not generalize to AI team members.
Our research investigates important aspects of an organizational phenomenon that is not
yet routine but is on the horizon, following the long tradition of future-focused research [8, 25,
26, 146]. Our findings thus contribute to knowledge on how the impending augmentation by AI
team members would potentially disrupt traditional social exchanges in collaborative settings.
We anticipate and discuss the implications of AI team members before they are widely deployed
in organizations rather than waiting for failed deployments to point out problems.
Prior Research
Artificial Intelligence
Recent progress in machine learning has fostered the development of AI-enabled
technologies that can emulate cognitive functions associated with the human mind [123]. AI-
enabled technologies typically learn from real-world evidence by identifying suboptimal
performance and taking corrective actions to improve [129]. Functionalities such as computer
vision and speech recognition enable AI to gather information, i.e., sense its environment. In
addition, natural language processing enables the creation of feature representations of speech
and text, i.e., comprehend meaning. Such competencies allow AI to compute and execute actions
toward achieving the goals specified by the user [9, 132].
AI-enabled technologies may take many forms, such as a recommendation system,
chatbot, or digital assistant, and have been deployed across diverse business contexts, such as
procurement, processing payments, and invoicing clients [149]. They can facilitate compliance
[35] and enhance individual productivity [97]. For example, using AI to support collaborative
searches can improve user perceptions [4]. AI-enabled technologies typically possess superior
speed, accuracy, reliability, and scalability than humans and complement human competencies
such as creativity, empathy, and judgment [123]. Recent research has discussed human-AI
hybrids where autonomous AI agents work together with humans [7, 32, 123, 134].
Research has identified disparities in individuals' perceptions, evaluation, and behavior
toward AI relative to humans, especially in settings that have conventionally involved human-to-
human interactions [53, 54]. Research has reported bias in favor of AI, i.e., algorithm
appreciation [100], and against AI, i.e., algorithm aversion [53]. Algorithm appreciation can
often motivate individuals to choose advice from an AI algorithm over human advice [100] and
is stronger when the human decision-maker has lower expertise [100] or greater accountability
[141]. Algorithm appreciation is also observed when the difficulty of the task increases [12].
Other research shows that people exhibit algorithm aversion by choosing their judgment over
AI's when they observe imperfections in AI [17, 54], thus indicating bias against AI despite its
ability to outperform humans generally [17, 38, 53, 73]. Aversion is stronger when there are
strong task incentives [54]. The ability to modify AI recommendations reduces aversion,
suggesting the effect of loss of control due to increased AI autonomy [54, 141].
Algorithm appreciation and algorithm aversion are not mutually exclusive. Algorithm
appreciation is a general positive bias for AI before the AI makes a mistake [100]. Algorithm
aversion is a negative bias against AI after AI makes a mistake because AI is judged more
harshly than humans who make similar mistakes [54]. Errors may trigger algorithm aversion
because people see mistakes as losses and weigh them more heavily. They anticipate consistent
future performance from AI, so a mistake today means a mistake tomorrow [53].
Collaboration Technologies and Team Processes
Research on collaboration technologies has a long history and remains a focus of current
research in information systems (IS) [1, 43, 46, 51, 55, 65, 121]. In looking back over the four
decades of research, Raghuram, et al. [121] identified three distinct research clusters on
collaboration technologies. Our research contributes to the computer-mediated work cluster,
which focuses on the consequences of using collaboration technologies on decision-making,
communication processes, and productivity [121]. We draw upon prior research to discuss
aspects of individual perceptions and behavior that are likely to be influenced by the deployment
of AI agents in collaborative environments. We build upon the conceptualization of interpersonal
processes by Marks et al. [105] to focus on individual team members (i.e., trustworthiness) and
the team as a whole (i.e., conflict and process satisfaction) [105] to theorize how they are likely
to be different for teams with an AI team member versus those with all human team members.
Trustworthiness. One of the most important and influential aspects of teamwork is trust
[50, 90, 105]. Trust is an individual's willingness to be vulnerable to others' actions [107]. Trust
is between people [107] but also applies to technologies [93, 144]. Trustworthiness is an
assessment of whether another person or thing is worthy of trust [107]. Trust in team members
reduces transaction costs and increases confidence and security, motivating openness and
improving information exchange [56, 94]. Trust allows team members to take more risks and be
vulnerable to the actions of others based on expectation and predictability of behaviors [94].
Trust decreases the transaction costs of teamwork, allowing individuals to engage less in self-
protective actions [85], and thus plays a pivotal role in virtual teams [50, 90].
Willingness to work with. Team members' willingness to work with one another is
important because it makes teamwork easier, diminishes anxiety, and leads to better work
outcomes [11, 37]. We have long studied the use of technology, but researchers have recently
argued that humans will not use AI agents, but rather will work with them in similar ways as they
work with other humans [7]. Thus, we focus on users' willingness to work with an AI team
member as compared to a human team member who exhibits the same behaviors—as opposed to
willingness to use or intention to adopt AI team members. Factors that influence the willingness
to work with AI agents acting as team members are not well understood [37, 40, 117, 134]. Baird
and Maruping [7] theorize that one of the most important factors is likely to be
trustworthiness, which is also important in collaboration [50, 90] and thus provides a common
theoretical basis for comparing the willingness to work with both human and AI team members.
Conflict. Team conflict is the second key interpersonal process that strongly influences
team processes and outcomes [44, 105]. Team members may perceive conflict when no conflict
exists or may not be aware of conflict when it exists [120]. Three types of team conflict have
been commonly studied [39, 40, 89]. Task conflicts are related to the interpretation of facts and
distribution of resources relating to tasks [40, 89]. Process conflicts refer to disagreements about
the delegation of responsibilities and activities [89]. Relationship conflicts arise from differences
in personal tastes, preferences, or values for reasons unrelated to the task or process [40, 89].
Task and process conflict can help or hurt team effectiveness, but any type of conflict usually
reduces satisfaction [33, 40, 137]. The three types of conflicts have nuanced definitional
differences, but it is not uncommon to observe strong correlations among them [45, 98, 106].
Process Satisfaction. Satisfaction is the third key interpersonal process in a team [105].
Satisfaction is an individual's affective reaction to the events that occurred and a judgment that
the outcomes fulfilled the goals [14, 15, 46, 125]. Process satisfaction is the extent to which
team members are happy with the procedures used within the team [125]. Process satisfaction is
primarily an affective reaction [14], so it is negatively affected by conflict [40, 139]. Satisfaction
is an important measure of team success and influences long-term performance [109, 125], as
well as the adoption and use of collaborative technology [125].
AI Team Members in Collaborative Environments
AI agents are being used across various domains including sales, marketing, customer
service, military operations, and research and development in different automated or augmented
capacities [101, 117, 123]. AI agents can disrupt business processes by influencing individuals'
perceptions and behaviors [7, 9, 19, 95, 111, 134, 148]. Therefore, researchers have emphasized
the need for pre-emptive assessment of how the introduction of AI agents can affect work
processes and individuals' roles in teams [84, 95, 104, 114, 134, 136, 148, 151].
AI team members are considerably different from traditional automation technologies
[61]. That is, they not only possess knowledge and information processing capabilities but are
also capable of working independently toward the achievement of a common goal [24, 70, 72,
95, 103, 150]. Moreover, AI team members may work with human team members and
demonstrate varied degrees of independence [95, 117, 134]. Research suggests that AI agents can
improve or disrupt performance in collaborative settings [117, 134]. Much of this research has
been conducted in command-and-control settings where team members have well-defined roles
and communication is constrained [104, 117], such as military settings (e.g., a crew flying
Predator drones) [104, 117]. In these settings, AI team members and their reputation can increase
trust [74], and the personalities of AI team members can influence the formation of a shared
mental model in teams and influence performance [76, 77].
Researchers have theorized that humans will delegate tasks to AI agents in the same way
they delegate to other humans [7]. AI team members may also self-delegate tasks [117, 134] and
delegate tasks to humans or other AI team members [7]. Thus, humans' interaction with AI team
members is perceived as collective agency, where individual agencies are likely to be blurred
during collaboration [7]. Research suggests that team performance is higher when humans
collaborate with the AI team member than supervise it [5]. Communication and trust with AI
team members increases performance [7, 57, 104, 117].
Transparency and reliability of AI agents affects people's attitudes and performance [95,
117, 134]. Yet, how the performance of AI team members influences perceptions of
interpersonal team processes and willingness to work with them is not yet understood [37, 40,
117, 134]. Such perceptions are important indicators of how the AI team member would be
perceived relative to conventional settings, where humans occupy similar roles. Perceiving an AI
team member as an opportunity can foster positive outcomes for the team and human team
members [19]. Conversely, perceiving an AI team member as a threat can engender negative
outcomes [19]. The performance of AI team members can influence people's attitudes,
potentially influencing perceptions and leading to unintended effects [142].
AI team members may exhibit various interaction modalities (e.g., text or speech),
autonomy (e.g., partially or fully autonomous), and roles (e.g., subordinate, supervisory, or
collaborative) [7, 97, 134]. We draw upon past research and use cases to configure the AI team
member for our research [117, 134]. We envision an AI team member that supports team
processes and interacts in text-based natural language [68]. It is autonomous and co-delegates
work with other team members [7, 112, 117, 134, 148, 151]. We build upon the work of
Chatterjee et al. [21-23] and Nunamaker et al. [48, 116] to define three fundamental affordances
for an AI team member (Table 1). First, communication support affordances, such as
coordination, delegation, and feedback, to communicate with humans. Second, information
processing support affordances, such as data cataloging, searching, analyzing, and organizing, to
manage information. Third, process structuring and appropriation support affordances, such as
planning, task breakdowns, task tracking, and quality assessment, to manage work processes. We
focus on a single AI team member, but teams may have several [104, 117, 151].
Hypothesis Development
We focus on the interpersonal process of trust toward a team member, and conflict and
process satisfaction toward the team as a collective [105]. We begin with the assumption that the
AI team member performs well on its assigned tasks and then relax this assumption. We further
theorize how good or poor performance of other human team members may influence
perceptions of the AI team member's performance. Figure 1 presents our research model.
Perceptions of an AI Team Member
Trustworthiness is a major element of teamwork and influences interpersonal processes
such as willingness to work with others [7, 37, 50, 90, 105]. We center our arguments around
Mayer's theory of trust to theorize how an AI team member, relative to a human, may influence
perceptions of ability, integrity, and benevolence, and hence trustworthiness [85, 107].
Ability. Ability is the skills and competencies that enable team members to perform tasks
[85, 107]. AI-enabled technologies, such as AI team members, typically possess computational
power, speed, scalability, accuracy, and reliability that contrast with human abilities such as
creativity, empathy, and judgment [7, 123]. AI is generally superior to humans in computational
capabilities and tasks such as information retrieval, calculation, and pattern recognition [7, 53,
100, 117]. AI does not suffer from human weaknesses, such as fatigue or interpersonal conflicts,
and does not exhibit cognitive biases associated with humans [117, 134]. Conversely, AI has
sharp performance boundaries, such that performance rapidly falls after the boundaries are
passed [7, 64].
Based on our conceptualization, an AI team member supports other team members with
tasks such as fetching information, scheduling meetings, providing reminders, and coordinating
activities [7, 59, 112, 134, 151]. Therefore, an AI team member can improve team decision
making within its bounds [134]. This hypothesis assumes that the AI team member performs
these tasks quickly and with high quality (we relax this assumption in H5). When an AI team
member performs well, its ability will be recognized. It will be perceived to be more capable at
these tasks due to its ability to execute work with greater computational speed, power, and
reliability [7, 53, 100, 117], relative to a human team member. Thus, we theorize:
H1a: An AI team member will be perceived to have greater ability than a human.
Integrity. Integrity is consistency in adhering to acceptable principles and
transparency in carrying out committed tasks [107]. Integrity has two dimensions, i.e., consistency
and transparency [107]. AI-enabled technologies are typically consistent because technologies
usually produce consistent results given the same inputs [7, 123]. Furthermore, AI team members
would not possess human biases that distort analyses and produce inconsistent results [3, 117].
Therefore, AI team members would offer benefits in consistency over humans [117].
Transparency is complying with principles under which one acts. AI team members are
less likely to exhibit counterproductive behaviors that violate organizational principles, act
politically, or serve hidden personal agendas. Unlike humans, they do not have personal goals
and typically do not compete with human team members [36, 131]. Nonetheless, the fairness of
AI-enabled technologies has been challenged as its performance can sometimes be biased (e.g.,
gender or ethnicity biases [152]). Individuals may have disparate perceptions of the principles
underlying an AI's actions, so there has been a general recognition of the need for transparency
about the principles to which AI adheres [18].
We focus on teams within an organization where the organization would provide AI team
members as part of the available technology suite. As the AI team member works on behalf of
the organization, it is likely to comply with the organization's principles and not deviate
drastically. The AI team member would be more consistent in adhering to these principles than
humans, as an AI team member's goals would typically not conflict with the organization's
principles [36, 131]. Thus:
H1b: An AI team member will be perceived to have greater integrity than a human.
Benevolence. Benevolence is the extent to which one wants to do good to others,
independent of any rewards [85, 107]. Benevolence is inherently a human characteristic [69].
However, an AI team member can behave benevolently by being programmed to have no ulterior
motives and avoid behaviors that unduly benefit it—things that are not always true of human
team members [36, 88, 131]. Three broad factors can influence perceptions of benevolence: goal
incongruity, information asymmetry, and reciprocal altruism [57].
Some individuals have self-interested personal goals that conflict with organizations’
goals or the goals of other team members [88, 99, 131]. These individuals may act in their self-
interest instead of ways that benefit the organization, harming other team members. Such goal
incongruity can affect performance [88, 99, 131]. AI team members will not be self-interested as
their goals would be aligned with the organization. Thus, AI team members are less likely to be
perceived as having goal incongruity than human team members.
The second factor, information asymmetry [57], may be different. AI-enabled technologies
often produce concerns of information asymmetry because they are "black-boxes" [20] that make
it difficult to ascertain the information that an AI team member has and how it uses that
information [123]. This makes an AI team member opaque and limits the ability of human team
members to understand its behavior, which may create concerns about privacy [110]. Thus,
information asymmetry is a concern with AI.
The third factor, reciprocal altruism, suggests that benevolent actions are driven by
expectations of reciprocal benevolent behavior [10]. AI-enabled technologies are primarily
objective-driven and are unlikely to recognize benevolent behavior and reciprocate. Hence, if
reciprocal altruism influences benevolence for certain people, they are less likely to perceive the
AI team member as benevolent.
In summary, goal incongruity is likely to be lower for an AI team member than a human
team member, but this may be offset by greater information asymmetry and lower reciprocal
altruism. Thus, we posit:
H1c: An AI team member will be perceived to have lower benevolence than a human.
Trustworthiness. Trustworthiness is the extent to which a team member is worthy of
trust. Trust is "the willingness of a party to be vulnerable to the actions of another party based on
the expectation that the other will perform a particular action important to the trustor,
irrespective of the ability to monitor or control that other party" [107, p. 712]. Perceptions of
trustworthiness are based on assessments of ability, integrity, and benevolence [107]. We have
argued above that when AI team members perform as intended, they would be perceived to have
greater ability (H1a), greater integrity (H1b), and lower benevolence (H1c). In short-term
transactional exchanges, ability and integrity are more important to trust-related judgements than
benevolence [41, 85, 124]. On the other hand, benevolence is more important for longer-term
interactions [36, 85]. Similarly, people weigh AI's ability more heavily than other factors [95].
Thus, we theorize that ability, integrity, and benevolence will influence perceptions of
trustworthiness [107], with ability and integrity having stronger effects than benevolence [41, 95,
124]. Since AI team members will have greater ability and integrity but lower benevolence, the
net of these differential effects will be that the AI team member will be perceived to be more
trustworthy than a human team member. Therefore:
H2a: Ability, integrity, and benevolence will influence trustworthiness, with ability and integrity
having stronger effects than benevolence.
H2b: An AI team member will be perceived as more trustworthy than a human.
Willingness to Work With. Willingness to work with other team members is important
in collaborative settings because it influences team performance [37] and team members' ability to work
with limited supervision and delegate tasks to each other [80]. It is also important to understand
employees' willingness to work with AI team members and delegate tasks to them [7].
Trustworthiness is the foundation of delegation [7] and the willingness to work with humans and
AI [7, 80], so the same factors that influence trustworthiness should also influence willingness to
work with: ability, integrity, and benevolence.
An AI team member will provide task-related capabilities that complement human team
members [7, 134], thereby increasing the likelihood that a team achieves its goals. When an AI
team member is perceived to have the ability to help reach an individual's goals, a user is more
likely to be willing to work with it [7]. An AI team member will be perceived to have integrity
because of its consistency in adherence to organizational principles [134]. Benevolence is
inherently a human characteristic [69], so team members will not expect the AI team member to
be benevolent, but this is less important over the short term [41, 124]. As we noted above,
humans tend to weigh ability more than other aspects when assessing AI [95]. Thus, we theorize
that the three factors that influence trust (ability, integrity, and benevolence) will have the
same pattern of effects on willingness to work with:
H2c: Ability, integrity, and benevolence will influence willingness to work with, with ability and
integrity having stronger effects than benevolence.
H2d: Individuals will be more willing to work with an AI team member than a human.
Perceptions of Team Processes
Team processes are co-created by team members working as a collective and cannot be
ascribed to one team member alone [15, 105, 125]. We focus on two elements of what Marks et
al. [105] call the interpersonal aspects of teamwork. The first is conflict [105] because conflict
and how team members respond to it strongly affects process satisfaction [108]. The second is
process satisfaction [105], which is both a judgement of the extent to which the process met the
team's needs as well as an affective reaction to the process [15].
Perceptions of Conflict. The perception of conflict is the awareness of disagreements,
discrepancies, or incompatible or irreconcilable desires in the team [89]. We theorize how the
addition of an AI team member, relative to a human team member, influences perceptions of
conflict. Our focus is on the perception of conflict, which may differ from actual conflict [118,
120, 127, 137]. Conflict interferes with goal attainment, and most people do not enjoy conflict,
so perceptions of conflict can reduce satisfaction [33, 40, 139].
Perceptions of conflict are fundamentally subjective and easily influenced by situational
factors [120, 127, 137]. Perceptions of conflict are also strongly influenced by preconceived
notions about the actors involved [127, 137]. These effects are so strong that perceptions may
differ from reality and two observers of the same event may have very different perceptions
[120, 127]. AI is perceived to be more objective than humans [134], and less likely to hold strong
opinions on personal taste, political preference, and other factors known to influence conflict.
These initial preconceptions will bias how behavior is interpreted [127, 137]. We theorize that
when an AI team member and a human team member disagree, it is less likely to be perceived as
"conflict" and will instead, it will be interpreted as a misunderstanding. Therefore, perceptions of
conflict will be lower when the team contains an AI team member than when it does not. Thus:
H3: Teams with an AI team member will have lower perceptions of conflict than teams with all
human team members.
Satisfaction. Satisfaction can be both a judgment (e.g., did it meet goals?) and an
affective reaction (e.g., was it pleasurable?) [15, 125]. Process satisfaction tends to be more of an
affective response to how the team members worked together as a functioning unit [15, 125].
Conflict and team members' conflict management are central to team interpersonal processes as
well as team outcomes [105]. Research shows that conflict can help or hurt the quality of a
team's work products [40, 119, 137]. Most people do not enjoy conflict [40, 105, 139], so
conflict usually reduces satisfaction [33, 40, 137]. Thus:
H4a: Conflict will reduce process satisfaction.
Perceptions of conflict are extremely important in virtual teams because people are
influenced by how they interpret a situation, not by how an objective observer might view it
[120, 127, 137]. We argued above that the presence of an AI team member would lower
perceived conflict (H3). Therefore, perceptions of reduced conflict should increase satisfaction
due to the addition of an AI team member relative to a human team member. Therefore:
H4b: Teams with an AI team member will have greater process satisfaction.
Influence of Team Member Performance
We have theorized how an AI team member influences perception of team processes
relative to a human team member while assuming good performance. Perceptions are generally
influenced by other team members' performance [40]. Performance of AI also influences
perceptions [53]. Good performance usually leads to positive evaluation, and poor performance
leads to negative evaluation.
An important aspect of our study is to understand whether variability in performance
affects the evaluations of AI and human team members differently. Evaluations are strongly
influenced by consistency in performance [53]. AI team members are similar to programmed
software and would be expected to be more consistent than humans [64], i.e., the performance of
AI team members today is likely to be similar to their performance tomorrow. Humans are more
erratic in performance than software, so performance today is not necessarily a good predictor
of performance tomorrow [64]. People are generally more likely to attribute errors made by AI-
enabled technologies to the AI itself [64], whereas locus of control theory suggests that we may
ascribe the source of an error made by a human to the human or the situation [128, 138].
Empirical evidence suggests that people are more sensitive to errors committed by AI-
enabled technologies than by humans [17, 53]. Given the same mistake, users lose confidence in
the AI faster than in humans [53]. This is likely because performance is expected to be more
consistent for an AI than for a human [64]. Therefore, we theorize that performance will have a
stronger effect on the perceptions of an AI team member than a human team member. Poor
performance by an AI team member will lower perceptions, and good performance will increase
perceptions because we are more likely to assume that performance is more consistent. Thus:
H5: The performance of the focal team member will moderate the relationship between the type
of the focal team member and a) perceptions of the team member (H1 and H2) and b)
perceptions of the team (H3 and H4), such that the relationship will be strengthened when
the performance of the focal team member is high.
Performance is often evaluated relative to some standard [91, 133], such as the
simultaneous performance of others [91, 133]. Thus, evaluation of one team member's
performance will be influenced by the performance of other team members. When the other team
members perform well, the performance of the focal team member is not perceived as highly as
when the performance of other team members is poor, and vice versa.
A key question in our context is whether the evaluation of an AI team member will be
influenced in the same way as a human team member when the performance of other team
members is good or bad. There is little prior research to guide our theorizing. Prior work on
algorithm aversion [53] suggests that an AI team member will be judged more harshly when it
underperforms humans and more favorably when it outperforms them. Thus:
H6: The performance of the other team members will moderate the relationship between the type
of the focal team member and a) perceptions of the team member (H1 and H2) and b)
perceptions of team processes (H3 and H4), such that the relationship will be weaker when the
performance of the other team members is high than low.
Method
We used a 2x2x2 design manipulating the nature of a focal team member (AI or Human),
the performance of the focal team member (High or Low), and the performance of other team
members (High or Low). As AI team members are a relatively immature technology, we used a
vignette-based research approach to conceptualize the AI team member and presented the focal
team member in a gender-neutral form. We manipulated the focal team member's performance
and other team members' performance. Subjects were randomly assigned to one of the eight
treatments.
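For illustration, the crossing of the three factors into eight treatments and the random assignment of participants could be sketched as follows (a minimal sketch in Python; the factor labels, function name, and participant identifiers are hypothetical and are not part of the study materials):

import itertools
import random

factors = {
    "focal_member_type": ["human", "AI"],
    "focal_performance": ["high", "low"],
    "other_performance": ["high", "low"],
}
# The fully crossed 2x2x2 design yields eight vignette treatments
cells = list(itertools.product(*factors.values()))

def assign_treatments(participant_ids, seed=7):
    """Randomly assign each participant to one of the eight treatment cells."""
    rng = random.Random(seed)
    return {pid: dict(zip(factors, rng.choice(cells))) for pid in participant_ids}

assignments = assign_treatments(range(610))  # 610 participants were recruited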
Vignettes have been commonly used to study technology use [34], human behavior [2,
135], and trust in virtual teams [49, 126]. They are also a common tool to study new or emerging
technologies because there are few alternatives [83, 115, 140]. Vignettes do not require
participants to have in-depth knowledge of the topic or be familiar with the situations depicted to
be effective [82, 83, 115, 140]. Vignettes are commonly used to examine trust [58, 113, 126] and
team collaboration [30, 49, 86]. Meta-analyses show no differences in conclusions between
vignette studies and studies of actual behavior [34, 135]. Thus, evidence suggests that using
vignettes produces the same research conclusions as studies of actual behavior, although the
effect sizes may be weaker [34, 135]. The key advantage of vignettes is control; they ensure all
subjects see the same events, with the only change being the manipulation [2, 49]. Thus,
perceptions are less contaminated by extraneous factors than traditional experiments [37, 71].
Participants
The participants were 610 undergraduate students recruited from an introductory business
course at a large state university. Fourteen subjects (2%) failed the attention check, leaving 596
participants. Participation was voluntary, and subjects were awarded extra credit for
participation. The average age was 19 years, and 43% were female.
Task
The vignette presented communication among the three members of a virtual student
team. The three team members were from different universities and communicated over an
internet messenger (e.g., Microsoft Teams) to complete a course project which required them to
develop a searchable website for a company's products. We used messenger software in the
vignette because when we collected the data (immediately before the pandemic), our participants
were more familiar with it than videoconferencing; Zoom would not become commonly used in
classes until a few months later. The task description presented information about the project
(deliverables, due date, and grading) and the three team members (Jordan, Leslie, and Taylor).
The third team member (Taylor) was manipulated to be human or AI. All human characters were
portrayed as undergraduate students with similar backgrounds and majors. The vignette showed
communication among team members working on the project over two weeks.
Treatments
The primary treatment of interest was the presentation of the focal team member (Taylor)
as either AI or human team member. This was manipulated in the description of the vignette. We
included a manipulation check to ensure that participants understood the nature of the focal team
member (AI or human). Any participant failing the manipulation check was sent back to reread
the instructions and take the manipulation check again. The focal team member (either AI or
human) performed actions associated with the three categories of affordances in Table 1.
The other treatments were the performance of the focal team member (high or low) and
the other two team members (high or low). We designed behaviors for the focal team member
and the other two members and crossed these to produce four vignettes. We used the work of
Jarvenpaa and Leidner [85] in designing behaviors that would be perceived as high and low
performance. For example, social communication, enthusiasm, initiative, and coping behavior
build trust early in a team's life, while predictability, positive leadership, timely responses, and a
successful transition from social to procedural to task activities maintain trust in later stages [85].
We conducted a pilot with 52 participants drawn from the same course as the main study
(who did not participate in it) to assess participants' perceptions of the performance of the focal
character and the other team members, i.e., a manipulation check. The results confirmed that the
vignettes produced significantly different perceptions of performance as intended, validating the
manipulation of performance (see Appendix A). We conducted another pilot with 75 participants
to assess perceptions of the realism across the vignettes. The participants' perceptions of realism
were significantly above neutral regardless of whether the focal team member was an AI or a
human. No significant differences in realism were observed across
treatments, indicating that the vignettes were perceived to be equally realistic (see Appendix B).
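The pilot checks amount to standard t-tests; a minimal sketch of how they might be run (assuming Python's scipy and pandas, with hypothetical data frames pilot1 and pilot2 and hypothetical column names) is:

from scipy import stats

# Manipulation check: perceived performance should differ between the
# high- and low-performance versions of the focal team member
high = pilot1.loc[pilot1["focal_performance"] == "high", "perceived_performance"]
low = pilot1.loc[pilot1["focal_performance"] == "low", "perceived_performance"]
print(stats.ttest_ind(high, low))

# Realism check: mean realism should exceed the scale midpoint (e.g., 4 on a 7-point scale)
print(stats.ttest_1samp(pilot2["realism"], popmean=4))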
Measures
Five dependent variables measured individual team member level perceptions:
trustworthiness, ability, benevolence, integrity, and willingness to work with. Trustworthiness
was measured on a five-item scale (two items dropped for reliable measurement) [49]. Ability
(one item dropped for reliable measurement), benevolence, and integrity (one item dropped for
reliable measurement) were each measured on three-item scales [126]. Willingness to work with
was measured on a five-item scale (one item dropped for reliable measurement) [37]. We used a
confirmatory factor analysis to establish convergent and discriminant validity. The Fornell–
Larcker criterion was met [66], the reliabilities were above 0.70, and the average variances
extracted (AVEs) were above 0.50. The model had a comparative fit index (CFI) of 0.980, a root
mean square error of approximation (RMSEA) of 0.065, and a 3.572 ratio of chi-square to
degrees of freedom. See Appendix C for the items and Appendix D for the analyses.
Two dependent variables measured team processes: process satisfaction and conflict.
Process satisfaction was measured using three items from [67]. Conflict was measured using
items designed to measure task conflict, process conflict, and relationship conflict [89]. The
confirmatory factor analysis indicated that all nine items were highly correlated and measured
the same construct, which is not unusual [45, 98, 106]. The model had a CFI of 0.987, an
RMSEA of 0.049, and a 2.428 chi-square to degrees of freedom ratio (See Appendices C and D).
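A confirmatory factor analysis of this kind can be reproduced with open-source tooling; the sketch below (assuming Python's semopy package and a data frame of item responses; the item and factor names are hypothetical placeholders for the items in Appendix C) shows how fit indices such as the CFI and RMSEA reported above might be obtained:

import pandas as pd
import semopy

# Hypothetical measurement model in lavaan-style syntax
model_desc = """
Ability     =~ abil1 + abil2
Integrity   =~ integ1 + integ2
Benevolence =~ benev1 + benev2 + benev3
Trust       =~ trust1 + trust2 + trust3
Willingness =~ will1 + will2 + will3 + will4
"""

df = pd.read_csv("survey_responses.csv")   # hypothetical file of participant responses
cfa = semopy.Model(model_desc)
cfa.fit(df)                                # maximum-likelihood estimation
print(semopy.calc_stats(cfa).T)            # chi-square, DoF, CFI, RMSEA, etc.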
Procedure
Participants first reported their demographics and read the randomly assigned vignette.
They then completed a survey about their perceptions of the characters in the vignette.
Results
Perceptions of Focal Team Member
Direct Effects of Team Member. Table 2 presents the treatment means and standard
deviations. A multivariate GLM (aka MANOVA) showed that whether the team member was
human or AI significantly influenced the five team member-related dependent variables
(F(5,584)=17.02, p<.001). The results for each of the separate variables are presented in Table 3.
The R2 ranged from 26-44%. Cohen [29] calls an R2 of 25% a "large" effect size, so we consider
these models to have large effect sizes.
The results suggest that the AI team member was perceived to have greater ability and
integrity but lower benevolence than human team members, irrespective of performance. As a
result of these countervailing forces, there were no significant differences in the perception of
trustworthiness or willingness to work with an AI team member or a human team member.
Power analysis with G*Power [63] indicated that the power of these univariate analyses to detect
a small effect size of f=.10 was .85. Thus, we conclude that nonsignificant factors are likely to
have few meaningful effects.
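These analyses follow a standard multivariate-then-univariate GLM sequence with a post-hoc power check; a minimal sketch of an equivalent analysis in Python (assuming statsmodels and a data frame df with hypothetical column names for the treatments and dependent variables; the original power analysis used G*Power) is:

import statsmodels.formula.api as smf
from statsmodels.multivariate.manova import MANOVA
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.power import FTestAnovaPower

# Multivariate GLM across the five team-member-level dependent variables
mv = MANOVA.from_formula(
    "trustworthiness + ability + benevolence + integrity + willingness"
    " ~ member_type * focal_perf * other_perf",
    data=df)
print(mv.mv_test())

# Follow-up univariate GLM for one dependent variable, with all interactions
fit = smf.ols("ability ~ member_type * focal_perf * other_perf", data=df).fit()
print(anova_lm(fit, typ=2))
print(f"R-squared: {fit.rsquared:.2f}")

# Approximate power to detect a small effect (Cohen's f = .10) across the eight cells
power = FTestAnovaPower().power(effect_size=0.10, nobs=596, alpha=0.05, k_groups=8)
print(f"Power: {power:.2f}")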
Moderation due to Team Member Performance. Perceptions were significantly
influenced by the focal team member's performance, so we conclude that the vignettes
manipulated performance as intended (Table 3). However, there were no significant interactions
between performance and the focal team member type for any of the five team-member-related
dependent variables. Thus, human and AI team members were judged similarly when they
exhibited similar differences in performance; there is no evidence of algorithm aversion.
Significant interaction effects of the focal team member’s and other team members’ performance
(see Table 3) suggest that perceptions of ability, integrity, benevolence, trust, and willingness to
work with the focal team member are influenced by the performance of other team members.
Mediation Analyses. We conducted two mediation analyses using the Hayes PROCESS
model [79] to see if individual perceptions of the AI team member were fully or partially
mediated by ability, integrity and benevolence (see Table 5). Trustworthiness and willingness to
work with were both significantly influenced by ability, integrity, and benevolence. The beta
coefficients for ability and integrity were significantly higher than the beta for benevolence when
the dependent variable is willingness to work with, but not trustworthiness. The performance of
the focal team member had a significant direct positive effect on willingness to work with but not
trustworthiness. Thus, the perceptions of an AI team member's trustworthiness were fully
mediated by ability, integrity, and benevolence, with all three having somewhat equal
importance. In contrast, willingness to work with an AI team member was
partially mediated by ability, integrity, and benevolence, with benevolence being less influential.
In other words, team member performance as well as the three factors that influence
trustworthiness also influence willingness to work with.
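The original mediation models were estimated with the Hayes PROCESS macro; an approximation of the same parallel-mediation logic can be sketched in Python with pingouin's bootstrap mediation routine (the data frame and column names below are hypothetical):

import pingouin as pg

# Does member type (0 = human, 1 = AI) affect trustworthiness through
# ability, integrity, and benevolence acting as parallel mediators?
med = pg.mediation_analysis(
    data=df,
    x="member_type_ai",                          # binary treatment indicator
    m=["ability", "integrity", "benevolence"],   # parallel mediators
    y="trustworthiness",
    covar=["focal_perf_high", "other_perf_high"],  # performance treatments as covariates
    n_boot=5000,
    seed=42)
print(med.round(3))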
Perceptions of Team Processes
Direct Effect of Team Member Type. A multivariate GLM shows that whether the team
member was human or AI significantly influenced process satisfaction (F(2,588)=3.06, p=.048)
but not conflict. Participants reported significantly less process satisfaction when the focal team
member was an AI agent than a human, but there was no effect of team member type on
perceptions of conflict (see Table 4), although it approached significance in the theorized
direction (F(1,589)=1.986, p=.077). The R2 indicated large effect sizes.
Moderation due to Team Member Performance. There was a significant negative
interaction between the focal team member performance and focal team member type for conflict
but not for process satisfaction (see Table 4). That is, satisfaction was the same regardless of
whether the focal team member was AI or human, but when the AI team member exhibited good
performance, conflict was perceived to be lower than with the human team member exhibiting
the same performance. Thus, we find no evidence of algorithm aversion. Perceptions of process
satisfaction and conflict were strongly influenced by the performance of other team members, as
we would expect for dependent variables focused on team processes. There were no significant
interactions between the type of team member and the performance of other team members.
Mediation Analyses. The mediation analyses (see Table 5) found that conflict had a
significant negative effect on process satisfaction. The performance of other team members
significantly affected process satisfaction, but the type of team member had no significant direct
effect on process satisfaction. Thus, the effects of the AI team member on process satisfaction
are partially mediated by conflict, but there are also direct effects due to team members'
performance.
Discussion
We set out to determine if perceptions of team processes are similar for an AI team
member and human team members. Despite identical behaviors of human and AI team members,
some perceptions varied, indicating fundamental biases for and against AI team members [cf.
17]. As theorized in past research [123, 134], participants perceived AI team members to have
greater performative ability (i.e., ability and integrity) but lower benevolence, a social factor that
is uniquely human [69]. Furthermore, ability, integrity, and benevolence influenced trust and
willingness to work with AI team members, with benevolence being less important for
willingness to work with. These findings suggest that individuals are open to working with AI
team members. As we develop theories for AI team members and encounter them in teams, such
fundamental theoretical differences in performative abilities and social factors [108] will be
important touchstones.
We found no evidence of algorithm aversion toward an imperfect AI team member [53],
highlighting an important boundary condition to this phenomenon. That is, assessments of
mistakes made by an AI team member and by a human team member did not differ,
unlike how we judge errors by AI algorithms [17, 53]. Although we may have certain biases
about humans or AI (our first point above), the way we interpret their performance is not
affected by these biases. Past research examining algorithm aversion and algorithm appreciation
has focused on people making judgments in narrow quantitative settings. Our study focused on a
wider range of human-like behaviors (e.g., finding information, coordinating, reminding),
suggesting how research on AI algorithms may not generalize to prescriptive AI agents deployed
as team members.
The presence of an AI team member also changed perceptions of conflict and
satisfaction (team processes) within the team. When the focal team member performed well,
perceived conflict was lower when the team had an AI team member instead of all human team
members. That is, a high performing AI team member was perceived to introduce less conflict
than a similar human team member. However, when performance was low, no difference in
perceived conflict was observed between teams that had an AI team member versus those that
did not. Thus, including an AI team member has more nuanced effects on how participants view
team processes (conflict and satisfaction) than on how they view the individual AI team member.
Process satisfaction may be influenced by functional performance as well as the ability to build
relationships with other humans—something that is reduced when an AI team member replaces a
human member.
This study suffers from the usual limitations of laboratory research. We studied American
undergraduate students who reported their assessments after observing a fictitious student team
work on a project. Student samples are appropriate for testing theories about technology use, as
students are the future professionals and managers who will encounter new technologies in the
workplace [31]. Our participants were asked to assess a technology that was new to them, which
is common in IS research, because much of what we study is new technology innovations [8, 25,
26, 146]. We used a project setting that was familiar to students to enable them to better
understand the behavior. Nonetheless, our assessment using young American adults may differ
from individuals drawn from different populations, suggesting further study [27]. Participants
read a vignette and reported their perceptions, as commonly done in past research on trust [58,
113, 126] and team collaboration [30, 49, 86]. Meta-analyses show no differences in conclusions
between vignette studies and studies of actual behavior, except that vignette-based studies often
produce weaker effects than personally experiencing the situation presented in the vignette [34,
135]. As AI team members become routine, future research may examine reactions in a more
relatable virtual team context. There were high correlations between willingness to work with
and ability and integrity (see Appendix D), which might dampen the effects of other factors. To
limit contamination, identical behaviors were scripted for the AI and human team members.
While the dialogue seems appropriate for a human, the communication style of AI agents may
vary, raising a question for future research.
Implications for Research
Notwithstanding these limitations, we believe there are seven implications for research.
First, we found that humans perceive AI and human team members differently. Some research
argues that AI team members could be designed to contribute to social aspects [28, 134]. Should
resources be directed toward developing social AI team members? Furthermore, should AI team
members be designed to be benevolent? We need more research to better understand these biases
and their implications for the design and deployment of AI team members.
Second, our research contributes to the growing stream of work on algorithm aversion
and appreciation and highlights the need to examine how extant knowledge may generalize to
team related settings. Past research has found both negative [17, 53] and positive distortions
[100] in individuals’ perceptions of AI algorithms [17]. We studied an agentic application of AI
and found different results. We speculate that as AI begins to resemble humans and is
delegated more human-like tasks, users are less likely to assume consistency in the performance of
AI [64], so locus of control theory [128, 138] begins to apply to AI. Our findings thus call for an
investigation of theoretical boundaries of AI aversion and appreciation.
Third, in transactional exchanges and short-term work situations, ability and integrity are
more important for trust than benevolence [41, 85, 124]. We found that in this short-duration
team setting, benevolence was as important as ability and integrity for the development of
trustworthiness, but less important for willingness to work with. This suggests that the
benevolence of AI team members may be more important than past research on AI and
technology in other contexts suggests. AI itself can never be truly benevolent [69], but it can be
designed to exhibit benevolence in terms of goal incongruity and information asymmetry. Users
may be unable to distinguish the benevolence of an AI from that of its developer or providing
organization (e.g., Facebook or the Red Cross). They may thus ascribe benevolence to the AI team
member based on their perceptions of the organization. Nevertheless, our findings emphasize the
need to disentangle the benevolence of an AI team member from that of the organization and to
examine the importance of benevolence relative to ability and integrity.
Fourth, our findings broaden the understanding of how disagreements between users and
AI agents may affect perceptions of conflict. Perceptions of conflict are strongly influenced by
preconceptions [127, 137]. AI team members are perceived to be more objective [134], so when
there is little disagreement and one party is AI, we perceive less conflict, even when the
behaviors are identical. Thus, we need more research on how humans ascribe meaning to
different behaviors and how this process is different for AI versus human team members.
Fifth, our theorizing enriches knowledge of the short-term effects of adding an AI team
member. However, what are the long-term implications? The AI team member serves two
masters: the team and the organization. In well-functioning teams, a rapport develops that
enables team members to speak freely [87]. Does an AI team member change this? Would team
members worry about the AI team member reporting inappropriate behavior, or poor-quality
work, to management? This raises a higher-order question: should an AI team member report behavior
issues to managers or the human resources department when behaviors violate company
policies? For example, if a human team member displays overt racial or sexual discrimination or
harassment, what is the best course of action and the legal ramifications? Would the AI team
member at a U.S. university be classified as a Title IX responsible employee required by law to
report such behavior? We need more research on AI team members' ethical and legal aspects.
Sixth, AI agents serving as personal assistants (e.g., Siri) have been widely adopted [92].
AI team members are similar to personal assistants in many ways, including the ability to
communicate and manage information [92], so we may see a rapid adoption of AI team
members. However, the pattern of improved work performance with reduced social abilities is
somewhat reminiscent of the initial deployment of room-based team collaboration technologies,
in that there was clear evidence of improved outcomes but they were never widely adopted [47].
People adopt technologies for a myriad of reasons, and technologies that provide clear value to
organizations in terms of improved performance are not always equally valued by the employees
[47]; the usefulness that drives adoption does not always mean the same thing to employees as it
does to organizations [47]. Our findings suggest that individuals are as willing to work with AI
team members as human team members. Hence, the motivations to work with an AI or human
team member may be the same. Yet, we also highlight the need for more research on perceptions
of AI team members and the likelihood of using them.
Finally, our findings also carry implications for research on collaboration engineering and
the design of collaboration technologies [43]. Are AI team members a new form of collaboration
technology or are they a technology that uses collaboration technology? How we view this
question may guide their future development. Firms and researchers that adopt different
views may end up in very different technological and theoretical places. Seeber et al. [134]
envisioned AI team members as an element of collaboration technology that can contribute to the
team's analyses as a member and guide the team's processes as a facilitator. Our research focused
on an AI as a team member, not as the primary leader (i.e., facilitator). So, how team members
will respond to an AI team member in the role of a leader or facilitator who guides the team and
delegates tasks to human team members requires more research.
Implications for Practice
We believe that this study has four implications for practice. First and foremost, our
results suggest that the next generation of employees is willing to work alongside AI team
members. They are seen as capable additions to the team: team members who improve the
team's ability to produce work products, but with some downsides on social aspects. Therefore,
we believe it is time for companies to deploy AI team members.
Second, humans bring biases when they work with AI team members. To some extent,
these biases may be justified. Yet, some biases may be unfounded, and the employees may be
unaware of them. Organizations should be aware of the potential biases that team members may
hold toward AI team members and develop training materials that address them. Some of the biases
may be quite helpful for teams, such as reduced perceptions of conflict, while others may not.
Third, when designing and deploying AI team members, their ability is central, but it is
also important not to overlook how users will perceive their benevolence and integrity, because
these play a major role in perceptions of trustworthiness and users' willingness to work with
them. Benevolence and integrity may be influenced by design elements incorporated into the AI
team member, but they are also likely to be strongly influenced by the way in which the AI team
member is deployed, because perceptions of these attributes may be affected by how users view
the organization itself.
Finally, as AI team members become more common, individuals who quickly learn to
work with them will have a decided advantage over those who do not. It may also be that
employees who develop such skills are better poised to become team leaders [95].
Conclusion
Our research suggests that AI team members are likely to be accepted into teams.
Participants perceived the AI team member to be more capable (greater ability and integrity) and
to engender less conflict (when its performance was good), but to have less benevolence. The net
result was that they trusted it and were willing to work with it to the same extent as human team
members but expressed less satisfaction. The human’s and the AI team member’s behaviors were
the same, so the differences are human biases in how we ascribe meaning to what we observe.
This pattern suggests that participants perceived AI team members to be better at what McGrath
[108] calls the production function (work products) but worse at social functions (building
relationships). Thus, they may be less appropriate for social tasks, such as motivating and team
building [16]. We conclude that employees will likely accept AI team members, seeing them as
capable team members who lack social abilities.
It is important to understand that what matters is not the reality of behavior in teams but
how behavior is perceived because we act on our interpretations [120, 127, 137]. As Simons and
Peterson [137, p. 103] note: "Ambiguous behavior is interpreted as fitting the expectations one
has about the group or individual involved." As we develop theories about AI team members and
begin to deploy them as routine members of teams, we need to be mindful of the biases that their
human counterparts bring to interactions with them [13]. Likewise, developers should be
cognizant of these biases because designing AI team members to understand these biases may
make them more effective.
We need much more research to understand how to design and deploy AI team members
to best complement human team members. As we advance in this direction, many old research
questions may need to be reexamined when an AI team member is added to collaborative
environments.
References
1. Aggarwal, A., Vogel, D., and Murayama, Y. Introduction to the minitrack on emerging issues in e-
collaboration distributed group decision-making: Opportunities and challenges. Proceedings of the
54th Hawaii International Conference on System Sciences, 2021, pp. 523.
2. Aguinis, H., and Bradley, K.J. Best practice recommendations for designing and implementing
experimental vignette methodology studies. Organizational Research Methods, 17, 4 (2014), 351-
371.
3. Amershi, S., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., Suh, J., Iqbal, S.,
Bennett, P.N., Inkpen, K., Teevan, J., Kikin-Gil, R., and Horvitz, E. Guidelines for human-ai
interaction. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems,
Glasgow, Scotland Uk: Association for Computing Machinery, 2019, pp. Paper 3.
4. Avula, S., Chadwick, G., Arguello, J., and Capra, R. Searchbots: User engagement with chatbots
during collaborative search. Proceedings of the 2018 conference on human information interaction
& retrieval (2018), 52-61.
5. Azhar, M.Q., and Sklar, E.I. A study measuring the impact of shared decision making in a human-
robot team. The International Journal of Robotics Research, 36, 5-7 (2017), 461-482.
6. Bahrammirzaee, A. A comparative survey of artificial intelligence applications in finance: Artificial
neural networks, expert system and hybrid intelligent systems. Neural Computing and Applications,
19, 8 (2010), 1165-1195.
7. Baird, A., and Maruping, L.M. The next generation of research on is use: A theoretical framework of
delegation to and from agentic is artifacts. MIS Quarterly, 45, 1 (2021), 315-341.
8. Barata, J., da Cunha, P.R., and de Figueiredo, A.D. Implications for futures: The missing section in
sustainable information systems research. International Conference on Information Systems,
Munich: Association for Information Systems, 2019.
9. Bawack, R., Fosso Wamba, S., and Carillo, K. Where information systems research meets artificial
intelligence practice: Towards the development of an ai capability framework. 30th DIGIT
Workshop, Munich, Germany, 2019.
10. Becker, G.S. Altruism in the family and selfishness in the market place. Economica, 48, 189 (1981),
1-15.
11. Benrazavi, S.R., and Silong, A.D. Employees' job satisfaction and its influence on willingness to
work in teams. Journal of Management Policy and Practice, 14, 1 (2013), 127-140.
12. Bogert, E., Schecter, A., and Watson, R.T. Humans rely more on algorithms than social influence as
a task becomes more difficult. Scientific Reports, 11, 1 (2021), 8028.
13. Boyce, M.W., Chen, J.Y., Selkowitz, A.R., and Lakhmani, S.G. Effects of agent transparency on
operator trust. Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-
Robot Interaction Extended Abstracts, 2015, pp. 179-180.
14. Briggs, R., Reinig, B., and de Vreede, G.-J. The yield shift theory of satisfaction and its application
to the is/it domain. Journal of the AIS, 9 (2008).
15. Briggs, R.O., Reinig, B.A., and de Vreede, G.-J. The yield shift theory of satisfaction and its
application to the is/it domain. In, Dwivedi, Y.K., Wade, M.R., and Schneberger, S.L., (eds.),
Information systems theory: Explaining and predicting our digital society, vol. 2, New York, NY:
Springer, 2012, pp. 185-217.
16. Briggs, T. Use ai to enhance human intelligence, not eliminate it. Journal of Financial Planning, 32,
1 (2019), 26-27.
17. Burton, J.W., Stein, M.K., and Jensen, T.B. A systematic review of algorithm aversion in augmented
decision making. Journal of Behavioral Decision Making, 33, 2 (2020), 220-239.
18. Cai, C.J., Winter, S., Steiner, D., Wilcox, L., and Terry, M. “Hello ai”: Uncovering the onboarding
needs of medical practitioners for human-ai collaborative decision-making. Proc. ACM Hum.-
Comput. Interact., 3, CSCW (2019), Article 104.
19. Cao, J., and Yao, J. Linking different artificial intelligence functions to employees’ psychological
appraisals and work. Academy of Management Proceedings, 2020, 1 (2020), 19876.
20. Castelvecchi, D. Can we open the black box of ai? Nature News, 538, 7623 (2016), 20.
21. Chatterjee, S., Moody, G., Lowry, P.B., Chakraborty, S., and Hardin, A. Strategic relevance of
organizational virtues enabled by information technology in organizational innovation. Journal of
Management Information Systems, 32, 3 (2015), 158-196.
22. Chatterjee, S., Moody, G., Lowry, P.B., Chakraborty, S., and Hardin, A. Information technology and
organizational innovation: Harmonious information technology affordance and courage-based
actualization. The Journal of Strategic Information Systems, 29, 1 (2020), 101596.
23. Chatterjee, S., D Moody, G., Lowry, P.B., Chakraborty, S., and Hardin, A. The nonlinear influence
of harmonious information technology affordance on organisational innovation. Information Systems
Journal, 31, 2 (2021), 294-322.
24. Chen, J.Y., Barnes, M.J., Selkowitz, A.R., Stowers, K., Lakhmani, S.G., and Kasdaglis, N. Human-
autonomy teaming and agent transparency. Companion Publication of the 21st International
Conference on Intelligent User Interfaces, 2016, pp. 28-31.
25. Chiasson, M., and Henfridsson, O. Researching the future: The information systems discipline's
futures infrastructure. IFIP AICT 356 Researching the Future in Information Systems, Turku,
Finland, 2011.
26. Chiasson, M., Davidson, E., and Winter, J. Philosophical foundations for informing the future(s)
through is research. European Journal of Information Systems, 27, 3 (2018), 367-379.
27. Chien, S.-Y., Lewis, M., Sycara, K., Liu, J.-S., and Kumru, A. Influence of cultural factors in
dynamic trust in automation. 2016 IEEE International Conference on Systems, Man, and Cybernetics
(SMC): IEEE, 2016, pp. 002884-002889.
28. Chiou, E.K., and Lee, J.D. Cooperation in human-agent systems to support resilience: A microworld
experiment. Human Factors, 58, 6 (2016), 846-863.
29. Cohen, J. Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Lawrence Erlbaum
Associates, 1988.
30. Colquitt, J.A., and Jackson, C.L. Justice in teams: The context sensitivity of justice rules across
individual and team contexts 1. Journal of Applied Social Psychology, 36, 4 (2006), 868-899.
31. Compeau, D., Marcolin, B., Kelley, H., and Higgins, C. Generalizability of information systems
research using students—a reflection on our practices and recommendations for future research.
Information Systems Research, 23, 4 (2012), 1093-1109.
32. Constantinides, P. How human-ai hybrids will change work forever. Warwick Business School
(2019).
33. Costa, A.C. Work team trust and effectiveness. Personnel Review, 32, 5 (2003), 605-622.
34. Cram, A.W., D'Arcy, J., and Proudfoot, J.G. Seeing the forest and the trees: A meta-analysis of the
antecedents to information security policy compliance. MIS Quarterly, 43, 2 (2019), 525-554.
35. Crosman, P. A virtual assistant that serves lenders' employees: Capacity, formerly jane.Ai, originally
designed its chatbot to answer consumers' questions, but when employees started using it, that gave
the startup an idea for a new business line. National Mortgage News, 44, 2 (2019), N.PAG-N.PAG.
36. Cruz, C.C., Gómez-Mejia, L.R., and Becerra, M. Perceptions of benevolence and the design of
agency contracts: Ceo-tmt relationships in family firms. Academy of Management Journal, 53, 1
(2010), 69-89.
37. Cummings, J., and Dennis, A.R. Virtual first impressions matter: The effect of enterprise social
networking sites on impression formation in virtual teams. MIS Quarterly, 42, 3 (2018), 697-718.
38. Dawes, R.M., Faust, D., and Meehl, P.E. Clinical versus actuarial judgment. Science, 243, 4899
(1989), 1668-1674.
39. De Dreu, C.K., and Van Vianen, A.E. Managing relationship conflict and the effectiveness of
organizational teams. Journal of Organizational Behavior: The International Journal of Industrial,
Occupational and Organizational Psychology and Behavior, 22, 3 (2001), 309-328.
40. De Dreu, C.K.W., and Weingart, L.R. Task versus relationship conflict, team performance, and team
member satisfaction: A meta- analysis. Journal of Applied Psychology, 88, 4 (2003), 741-749.
41. De Jong, B.A., and Elfring, T. How does trust affect the performance of ongoing teams? The
mediating role of reflexivity, monitoring, and effort. Academy of Management Journal, 53, 3 (2010),
535-549.
42. de Visser, E.J., Pak, R., and Shaw, T.H. From ‘automation’ to ‘autonomy’: The importance of trust
repair in human–machine interaction. Ergonomics, 61, 10 (2018), 1409-1427.
43. de Vreede, G.J., and Briggs, R.O. A program of collaboration engineering research and practice:
Contributions, insights, and future directions. Journal of Management Information Systems 36, 1
(2019), 74-119.
44. De Wit, F.R., Greer, L.L., and Jehn, K.A. The paradox of intragroup conflict: A meta-analysis.
Journal of Applied Psychology, 97, 2 (2012), 360.
45. DeChurch, L.A., Mesmer-Magnus, J.R., and Doty, D. Moving beyond relationship and task conflict:
Toward a process-state perspective. Journal of Applied Psychology, 98, 4 (2013), 559.
46. Dennis, A.R., and Wixom, B.H. Investigating the moderators of the group support systems use with
meta-analysis. Journal of Management Information Systems, 18, 3 (2002), 235-257.
47. Dennis, A.R., and Reinicke, B. Beta vs. Vhs and the acceptance of electronic brainstorming
technology. MIS Quarterly, 28, 1 (2004), 1-20.
48. Dennis, A.R., Wixom, B.H., and Vandenberg, R.J. Understanding fit and appropriation effects in
group support systems via meta-analysis. MIS Quarterly, 25, 2 (2001), 167-197.
49. Dennis, A.R., Robert Jr, L.P., Curtis, A.M., Kowalczyk, S.T., and Hasty, B.K. Research notetrust
is in the eye of the beholder: A vignette study of postevent behavioral controls' effects on individual
trust in virtual teams. Information Systems Research, 23, 2 (2012), 546-558.
50. DeRosa, D.M., Hantula, D.A., Kock, N., and D'Arcy, J. Trust and leadership in virtual teamwork: A
media naturalness perspective. Human Resource Management Review, 43, 2-3 (2004), 219-232.
51. DeSanctis, G., and Gallupe, R.B. A foundation for the study of group decision support systems.
Management Science, 33, 5 (1987), 589-609.
52. Diederich, S., Brendel, A.B., and Kolbe, L.M. Towards a taxonomy of platforms for conversational
agent design. Internationale Tagung Wirtschaftsinformatik, Siegen, Germany, 2019.
53. Dietvorst, B.J., Simmons, J.P., and Massey, C. Algorithm aversion: People erroneously avoid
algorithms after seeing them err. Journal of Experimental Psychology: General, 144, 1 (2015), 114.
54. Dietvorst, B.J., Simmons, J.P., and Massey, C. Overcoming algorithm aversion: People will use
imperfect algorithms if they can (even slightly) modify them. Management Science, 64, 3 (2018),
1155-1170.
55. Duhon, J.H., and Hoch, J.E. Virtual teams in organization. Human Resource Management Review,
27, 4 (2017), 569-574.
56. Earley, P.C. Trust, perceived importance of praise and criticism, and work performance: An
examination of feedback in the united states and england. Journal of Management, 12, 4 (1986), 457-
473.
57. Eisenhardt, K.M. Agency theory: An assessment and review. Academy of management review, 14, 1
(1989), 57-74.
58. Elsbach, K.D., and Elofson, G. How the packaging of decision explanations affects perceptions of
trustworthiness. Academy of Management Journal, 43, 1 (2000), 80-89.
59. Elshan, E., Zierau, N., Engel, C., Janson, A., and Leimeister, J.M. Understanding the design
elements affecting user acceptance of intelligent agents: Past, present and future. Information
Systems Frontiers (2022), 1-32.
60. Endsley, M.R. Toward a theory of situation awareness in dynamic systems. Situational awareness:
Routledge, 2017, pp. 9-42.
61. Endsley, M.R. From here to autonomy: Lessons learned from human–automation research. Human
Factors, 59, 1 (2017), 5-27.
62. Evans, J.S.B. Bias in human reasoning: Causes and consequences. Lawrence Erlbaum Associates,
Inc, 1989.
63. Faul, F., Erdfelder, E., Lang, A.G., and Buchner, A. G*Power 3: A flexible statistical power analysis
program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39 (2007),
175-191.
64. Fetzer, J.H. Artificial intelligence: Its scope and limits. Amsterdam: Springer Netherlands, 2012.
65. Fjermestad, J., and Hiltz, S.R. Group support systems: A descriptive evaluation of case and field
studies. Journal of Management Information Systems, 17, 3 (2000-2001), 115-161.
66. Fornell, C., and Larcker, D.F. Evaluating structural equation models with unobservable variables and
measurement error. Journal of Marketing Research, 18, 1 (1981), 39-50.
67. Fuller, M.A., Hardin, A.M., and Davison, R.M. Efficacy in technology-mediated distributed teams.
Journal of Management Information Systems, 23, 3 (2006), 209-235.
68. Gao, J., Galley, M., and Li, L. Neural approaches to conversational ai. The 41st International ACM
SIGIR Conference on Research & Development in Information Retrieval, 2018, pp. 1371-1374.
69. Gefen, D., and Straub, D.W. Consumer trust in b2c e-commerce and the importance of social
presence: Experiments in e-products and e-services. Omega, 32, 6 (2004), 407-424.
70. Glikson, E., and Woolley, A.W. Human trust in artificial intelligence: Review of empirical research.
Academy of Management Annals, 14, 2 (2020), 627-660.
71. Greenberg, J., and Eskew, D.E. The role of role playing in organizational research. Journal of
Management, 19, 2 (1993), 221-241.
72. Grimm, D.A., Demir, M., Gorman, J.C., and Cooke, N.J. Team situation awareness in human-
autonomy teaming: A systems level approach. Proceedings of the Human Factors and Ergonomics
Society Annual Meeting: SAGE Publications Sage CA: Los Angeles, CA, 2018, pp. 149-149.
73. Grove, W.M., Zald, D.H., Lebow, B.S., Snitz, B.E., and Nelson, C. Clinical versus mechanical
prediction: A meta-analysis. Psychological assessment, 12, 1 (2000), 19.
74. Hafizoglu, F.M., and Sen, S. Reputation based trust in human-agent teamwork without explicit
coordination. Proceedings of the 6th International Conference on Human-Agent Interaction, 2018,
pp. 238-245.
75. Hancock, P.A. Imposing limits on autonomous systems. Ergonomics, 60, 2 (2017), 284-291.
76. Hanna, N., and Richards, D. The impact of communication on a human-agent shared mental model
and team performance. Proceedings of the 2014 international conference on Autonomous agents and
multi-agent systems, 2014, pp. 1485-1486.
77. Hanna, N., and Richards, D. The impact of virtual agent personality on a shared mental model with
humans during collaboration. AAMAS, 2015, pp. 1777-1778.
78. Hanson III, C.W., and Marshall, B.E. Artificial intelligence applications in the intensive care unit.
Critical care medicine, 29, 2 (2001), 427-435.
79. Hayes, A.F. Introduction to mediation, moderation, and conditional process analysis. New York:
Guilford Press, 2017.
80. Hinds, P., and McGrath, C. Structures that work: Social structure, work structure and coordination
ease in geographically distributed teams. Proceedings of the 20th Conference on Computer
Supported Cooperative Work, 2006, pp. 343-352.
81. Hoffman, M., Kahn, L.B., and Li, D. Discretion in hiring. The Quarterly Journal of Economics, 133,
2 (2018), 765-800.
82. Hughes, R. Vignettes. In, Given, L.M., (ed.), The sage encyclopedia of qualitative methods, Los
Angeles, 2008, pp. 918–920.
83. Janboecke, S., Loeffler, D., and Hassenzahl, M. Using experimental vignettes to study early-stage
automation adoption. arXiv.org, arXiv:2004.07032 (2020).
84. Jarrahi, M.H. Artificial intelligence and the future of work: Human-ai symbiosis in organizational
decision making. Business Horizons, 61, 4 (2018), 577-586.
85. Jarvenpaa, S.L., and Leidner, D.E. Communication and trust in global virtual teams. Organization
Science, 10, 6 (1999), 791-815.
86. Jarvenpaa, S.L., and Staples, D.S. The use of collaborative electronic media for information sharing:
An exploratory study of determinants. The Journal of Strategic Information Systems, 9, 2-3 (2000),
129-154.
87. Jarvenpaa, S.L., Knoll, K., and Leidner, D.E. Is anybody out there? Antecedents of trust in global
virtual teams. Journal of Management Information Systems, 14, 4 (1998), 29-64.
88. Jehn, K.A. A qualitative analysis of conflict types and dimensions in organizational groups.
Administrative Science Quarterly (1997), 530-557.
89. Jehn, K.A., and Mannix, E.A. The dynamic nature of conflict: A longitudinal study of intragroup
conflict and group performance. Academy of Management Journal, 44, 2 (2001), 238-251.
90. Jones, G.R., and George, J.M. The experience and evolution of trust: Implications for cooperation
and teamwork. Academy of management review, 23, 3 (1998), 531-546.
91. Kahneman, D., and Miller, D.T. Norm theory: Comparing reality to its alternatives. Psychological
review, 93, 2 (1986), 136-153.
92. Kaplan, A., and Haenlein, M. Siri, siri, in my hand: Who’s the fairest in the land? On the
interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62, 1
(2019), 15-25.
93. Komiak, S.Y., and Benbasat, I. The effects of personalization and familiarity on trust and adoption of
recommendation agents. MIS Quarterly (2006), 941-960.
94. Kramer, R.M., and Tyler, T.R. Trust in organizations: Frontiers of theory and research. Sage
Publications, 1995.
95. Larson, L., and DeChurch, L.A. Leading teams in the digital age: Four perspectives on technology
and what they mean for leading teams. Leadership Quarterly, 31, 1 (2020), N.PAG-N.PAG.
96. Lawless, W.F. Quantum-like interdependence theory advances autonomous human–machine teams
(a-hmts). Entropy, 22, 11 (2020), 1227.
97. Lebeuf, C., Storey, M.-A., and Zagalsky, A. Software bots. IEEE Software, 35, 1 (2018), 18-23.
98. Leung, M.Y., Ng, S.T., and Cheung, S.O. Measuring construction project participant satisfaction.
Construction Management and Economics, 22, 3 (2004), 319-331.
99. Locke, E.A., Smith, K.G., Erez, M., Chah, D.-O., and Schaffer, A. The effects of intra-individual
goal conflict on performance. Journal of Management, 20, 1 (1994), 67-91.
100. Logg, J.M., Minson, J.A., and Moore, D.A. Algorithm appreciation: People prefer algorithmic to
human judgment. Organizational Behavior and Human Decision Processes, 151 (2019), 90-103.
101. Luo, X., Qin, M.S., Fang, Z., and Qu, Z. Artificial intelligence coaches for sales agents: Caveats and
solutions. Journal of Marketing, 85, 2 (2021), 14-32.
102. Luo X, F.Z., Peng H. When artificial intelligence backfires: The effects of dual ai-human supervision
on employee performance. MIS Quarterly, Forthcoming (2022).
103. Lyons, J.B., Wynne, K.T., Mahoney, S., and Roebke, M.A. Trust and human-machine teaming: A
qualitative study. Artificial Intelligence for the Internet of Everything, Elsevier (2019), 101-116.
104. Lyons, J.B., Sycara, K., Lewis, M., and Capiola, A. Human–autonomy teaming: Definitions, debates,
and directions. Frontiers in Psychology, 12, 1932 (2021).
105. Marks, M.A., Mathieu, J.E., and Zaccaro, S.J. A temporally based framework and taxonomy of team
processes. The Academy of Management Review, 26, 3 (2001), 356-376.
106. Martínez-Moreno, E., González-Navarro, P., Zornoza, A., and Ripoll, P. Relationship, task and
process conflicts on team performance: The moderating role of communication media. International
Journal of Conflict Management (2009).
107. Mayer, R.C., Davis, J.H., and Schoorman, F.D. An integrative model of organizational trust.
Academy of Management Review, 20, 3 (1995), 709-734.
108. McGrath, J.E. Time, interaction, and performance (tip): A theory of groups. Small Group Research,
22, 2 (1991), 147-174.
109. Mennecke, B.E., and Valacich, J.S. Information is what you make of it: The influence of group
history and computer support on information sharing, decision quality and member perceptions.
Journal of Management Information Systems, 15, 2 (1988), 173-197.
110. Mills, A.M., and Liu, L.F. Trust in digital humans. Australasian Conference on Information Systems,
2020, pp. 96.
111. Monod, E., Sarker, S., Hevner, A., Gupta, A., Barrett, M., Venkatesh, V., Lyytinen, K., and Boland,
R. Are design sciences, economics and behavioral sciences critical enough on ai? A debate between
three voices within the is discipline. ICIS 2019 Proceedings (2019).
112. Murray, A., Rhymer, J., and Sirmon, D.G. Humans and technology: Forms of conjoined agency in
organizations. Academy of Management Review, 46, 3 (2020), 552-571.
113. Nakayachi, K., and Watabe, M. Restoring trustworthiness after adverse events: The signaling effects
of voluntary “hostage posting” on trust. Organizational Behavior and Human Decision Processes,
97, 1 (2005), 1-17.
114. Norman, D. Design, business models, and human-technology teamwork: As automation and artificial
intelligence technologies develop, we need to think less about human-machine interfaces and more
about human-machine teamwork. Research-Technology Management, 60, 1 (2017), 26-30.
115. Nørskov, S., Damholdt, M.F., Ulhøi, J.P., Jensen, M.B., Ess, C., and Seibt, J. Applicant fairness
perceptions of a robot-mediated job interview: A video vignette-based experimental survey.
Frontiers in Robotics and AI, 7, 586263 (2020).
116. Nunamaker, J.F., Dennis, A.R., Valacich, J.S., Vogel, D., and George, J.F. Electronic meeting
systems. Communications of the ACM, 34, 7 (1991), 40-61.
117. O’Neill, T., McNeese, N., Barron, A., and Schelble, B. Human–autonomy teaming: A review and
analysis of the empirical literature. Human Factors, 64 (2022), 904-938.
118. Paul, S., He, F., and Dennis, A.R. Group atmosphere, shared understanding, and team conflict in
short duration virtual teams. Hawaii International Conference on System Sciences 2018 (2018).
119. Pelled, L.H., Eisenhardt, K.M., and Xin, K.R. Exploring the black box: An analysis of work group
diversity, conflict, and performance. Administrative Science Quarterly, 44, 1 (1999), 1-28.
120. Pondy, L.R. Organizational conflict: Concepts and models. Administrative Science Quarterly, 12, 2
(1967), 296-320.
121. Ragurham, S., Hill, N.S., Gibbs, J.L., and Maruping, L.M. Virtual work: Bridging research clusters.
Academy of Management Annals, 13, 1 (2019), 308-341.
122. Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J.-F., Breazeal, C., Crandall, J.W.,
Christakis, N.A., Couzin, I.D., and Jackson, M.O. Machine behaviour. Nature, 568, 7753 (2019),
477-486.
123. Rai, A., Constantinides, P., and Sarker, S. Next-generation digital platforms: Toward human–ai
hybrids. MIS Quarterly, 43, 1 (2019), iii-ix.
124. Reinholt, M., Pedersen, T., and Foss, N.J. Why a central network position isn't enough: The role of
motivation and ability for knowledge sharing in employee networks. Academy of Management
Journal, 54, 6 (2011), 1277-1297.
125. Reinig, B.A. Towards an understanding of satisfaction with the process and outcomes of teamwork.
Journal of Management Information Systems, 19, 4 (2003), 65-83.
126. Robert, L.P., Denis, A.R., and Hung, Y.-T.C. Individual swift trust and knowledge-based trust in
face-to-face and virtual team members. Journal of Management Information Systems, 26, 2 (2009),
241-279.
127. Robinson, R.J., Keltner, D., Ward, A., and Ross, L. Actual versus assumed differences in construal:
“Naive realism” in intergroup perception and conflict. Journal of Personality and Social Psychology
68, 3 (1995), 404–417.
128. Rotter, J.B. Social learning and clinical psychology. New York: Prentice-Hall, 1954.
129. Russell, S., and Norvig, P. Artificial intelligence: A modern approach. Prentice Hall Press, 2009.
130. Schoorman, F.D. Escalation bias in performance appraisals: An unintended consequence of
supervisor participation in hiring decisions. Journal of Applied Psychology, 73, 1 (1988), 58.
131. Schoorman, F.D., Mayer, R.C., and Davis, J.H. An integrative model of organizational trust: Past,
present, and future. Academy of management review, 32, 2 (2007), 344-354.
132. Schuetz, S., and Venkatesh, V. Research perspectives: The rise of human machines: How cognitive
computing systems challenge assumptions of user-system interaction. Journal of the Association for
Information Systems, 21, 2 (2020).
133. Schwarz, N., and Bless, H. Mental construal processes : The inclusion / exclusion model. In, Stapel,
D.A., and Suls, J., (eds.), Assimilation and contrast in social psychology, Philadelphia, PA:
Psychology Press, 2007, pp. 119-141.
134. Seeber, I., Bittner, E., Briggs, R.O., de Vreede, T., de Vreede, G.-J., Elkins, A., Maier, R., Merz,
A.B., Oeste-Reiß, S., Randrup, N., Schwabe, G., and Söllner, M. Machines as teammates: A research
agenda on ai in team collaboration. Information & Management, 57, 2 (2020), 103-174.
135. Shaw, J.C., Wild, R.E., and Colquitt, J.A. To justify or excuse? A meta-analysis of the effects of
explanations. Journal of Applied Psychology, 88, 3 (2003), 444-458.
136. Silva, E.S., and Bonetti, F. Digital humans in fashion: Will consumers interact? Journal of Retailing
and Consumer Services, 60, 102430 (2021).
137. Simons, T., and Peterson, R. Task conflict and relationship conflict in top management teams: The
pivotal role of intragroup trust. Journal of Applied Psychology, 85, 1 (2000), 102-111.
138. Spector, P. Behavior in organizations as a function of employee's locus of control. Psychological
Bulletin, 91, 3 (1982), 482-497.
139. Stark, E.M., and Bierly III, P.E. An analysis of predictors of team satisfaction in product
development teams with differing levels of virtualness. R&D Management, 39, 5 (2009), 461-472.
140. Stein, J.-P., Appel, M., Jost, A., and Ohler, P. Matter over mind? How the acceptance of digital
entities depends on their appearance, mental prowess, and the interaction between both. International
Journal of Human-Computer Studies, 142, 102463 (2020).
141. Stout, N., Dennis, A.R., and Wells, T.M. The buck stops there: The impact of perceived
accountability and control on the intention to delegate to software agents. AIS Transactions on
Human Computer Interaction, 61, 1 (2014), 1-15.
142. Tong, S., Jia, N., Luo, X., and Fang, Z. The janus face of artificial intelligence feedback:
Deployment versus disclosure effects on employee performance. Strategic Management Journal, n/a,
n/a (2021).
143. Tong, S., Jia, N., Luo, X., and Fang, Z. The janus face of artificial intelligence feedback:
Deployment versus disclosure effects on employee performance. Strategic Management Journal, 42,
9 (2021), 1600-1631.
144. Vance, A., Elie-Dit-Cosaque, C., and Straub, D.W. Examining trust in information technology
artifacts: The effects of system quality and culture. Journal of Management Information Systems, 24,
4 (2008), 73-100.
145. Vlahos, J. Talk to me: Amazon, google, apple and the race for voice-controlled ai. New York:
Random House, 2019.
146. Wakunuma, K.J., and Stahl, B.C. Tomorrow’s ethics and today’s response: An investigation into the
ways information systems professionals perceive and address emerging ethical issues. Information
Systems Frontiers, 16 (2014), 383-397.
147. Walliser, J.C., de Visser, E.J., Wiese, E., and Shaw, T.H. Team structure and team building improve
human–machine teaming with autonomous agents. Journal of Cognitive Engineering and Decision
Making, 13, 4 (2019), 258-278.
148. Webber, S.S., Detjen, J., MacLean, T.L., and Thomas, D. Team challenges: Is artificial intelligence
the solution? Business Horizons, 62, 6 (2019), 741-750.
149. Wright, D. Chatbots and beyond: Six ai trends reshaping the workplace. Forbes (2020).
150. Wynne, K.T., and Lyons, J.B. An integrative model of autonomous agent teammate-likeness.
Theoretical Issues in Ergonomics Science, 19, 3 (2018), 353-374.
151. Zhou, L., Paul, S., Demirkan, H., Yuan, L., Spohrer, J., Zhou, M., and Basu, J. Intelligence
augmentation: Towards building human-machine symbiotic relationship. AIS Transactions on
Human-Computer Interaction, 13, 2 (2021), 243-264.
152. Zou, J., and Schiebinger, L. Design ai so that it’s fair. Nature, 559, 12 July (2018), 324-326.
Figure 1: Research Model
Affordances: Communication and Collaboration Affordances
Definition: How an AI team member affords a user the ability to communicate and collaborate with it.
AI Agent Functionality: Natural language speech; delegation and supervision; coordination and reminders; review and feedback.

Affordances: Information Management Affordances
Definition: How an AI team member affords a user the ability to facilitate the application, management, and exploitation of organizational knowledge available to the team.
AI Agent Functionality: Data cataloging; information search and retrieval; information analysis; information organization; creation and management of content repositories.

Affordances: Process Management Affordances
Definition: How an AI team member affords a user the ability to develop and manage work processes and resources to enable team decision-making and action.
AI Agent Functionality: Planning and scheduling; task breakdown structures; task tracking and delivery; quality assurance and testing.

Table 1: AI Team Member Affordances
Focal Team Member:                    Human                              AI
Focal Team Member Performance:        Low            High                Low            High
Other Team Members' Performance:      Low    High    Low    High         Low    High    Low    High

Perceptions of Focal Team Member
Ability                               4.03   3.54    5.55   5.43         4.50   3.60    5.56   5.70
                                     (1.21) (1.31)  (1.01) (1.13)       (1.40) (1.49)  (1.18) (1.03)
Integrity                             4.13   3.28    5.69   5.89         4.97   3.75    5.95   6.19
                                     (1.38) (1.39)  (1.11) (0.95)       (1.35) (1.47)  (0.90) (0.65)
Benevolence                           3.92   3.41    5.21   5.67         3.46   3.03    4.62   5.20
                                     (1.13) (1.12)  (1.07) (0.87)       (1.21) (1.25)  (1.30) (1.15)
Trustworthiness                       4.02   3.75    4.70   4.82         4.11   3.76    4.68   4.83
                                     (0.77) (0.75)  (0.51) (0.72)       (0.76) (0.84)  (0.75) (0.79)
Willingness to Work With              3.66   2.84    5.63   5.41         3.99   3.21    5.47   5.56
                                     (1.43) (1.41)  (1.12) (1.30)       (1.47) (1.47)  (1.18) (1.15)

Perceptions of Team Processes
Conflict                              4.09   2.85    4.30   1.99         4.33   2.82    4.13   1.77
                                     (0.94) (0.85)  (0.83) (0.80)       (0.91) (0.96)  (0.87) (0.53)
Process Satisfaction                  2.35   4.14    2.21   5.42         2.26   4.46    2.49   5.73
                                     (0.83) (1.31)  (0.73) (1.24)       (0.82) (1.21)  (1.17) (0.64)

Table 2: Treatment Level Means (and Standard Deviations)
Variable                                        Ability    Integrity   Benevolence   Trust-       Willingness
                                                                                     worthiness   to Work With
Intercept                                       4.026***   4.138***    3.921***      4.022***     3.661***
Focal Team Member Type (AI)                      .474*      .836***    -.456***       .092         .326
Focal Team Member Performance (High)            1.528***   1.542***    1.293***       .681***     1.968***
Other Team Members' Performance (High)          -.486*     -.861***    -.506         -.297        -.825***
Focal Team Member Type
  x Focal Team Member Performance               -.471      -.566       -.134         -.111        -.480
Focal Team Member Type
  x Other Team Members' Performance             -.418      -.359        .111         -.055         .104
Focal Team Member Performance
  x Other Team Members' Performance              .358       .986***     .959***       .415***      .603**
Focal Team Member Type
  x Focal Team Member Performance
  x Other Team Members' Performance              .689       .472       -.008          .084         .199
R2                                              33.3%      44.2%       39.6%         26.5%        40.3%
N                                               596        596         596           596          596
Note: * p < .05, ** p < .01, *** p < .001.
Table 3: Beta Coefficients for Perceptions of Focal Team Member
Variable                                        Conflict     Process Satisfaction
Intercept                                       4.091***     2.351***
Focal Team Member Type (AI)                      .229        -.092*
Focal Team Member Performance (High)             .225        -.137***
Other Team Members' Performance (High)          -1.266***    1.789***
Focal Team Member Type (AI)
  x Focal Team Member Performance               -.430*        .370
Focal Team Member Type (AI)
  x Other Team Members' Performance             -.256         .403
Focal Team Member Performance
  x Other Team Members' Performance             -1.089***    1.412***
Focal Team Member Type (AI)
  x Focal Team Member Performance
  x Other Team Members' Performance              .257        -.367
R2                                              58.5%        64.9%
N                                               596          596
Note: * p < .05, ** p < .01, *** p < .001.
Table 4: Beta Coefficients for Perceptions of Team Processes
                                                Individual Team Member Level            Team Process Level
Variable                                        Trustworthiness   Willingness to        Process Satisfaction
                                                                  Work With
Intercept                                       2.267***          -.684***              3.956***
Focal Team Member Type (AI)                     -.037             -.118                  .002
Focal Team Member Performance (High)             .047              .364**               -.030
Other Team Members' Performance (High)          -.024             -.183                 1.318***
Focal Team Member Type
  x Focal Team Member Performance                .060             -.008                  .171
Focal Team Member Type
  x Other Team Members' Performance              .049             -.085                  .341
Focal Team Member Performance
  x Other Team Members' Performance              .076              .382*                1.018***
Focal Team Member Type
  x Focal Team Member Performance
  x Other Team Members' Performance             -.091             -.294                 -.272
Ability                                          .139***           .585***
Integrity                                        .154***           .302***
Benevolence                                      .143***           .188***
Conflict                                                                                -.319***
R2                                              49.8%             81.9%                 69.2%
N                                               596               596                   596
Note: * p < .05, ** p < .01, *** p < .001.
Table 5: Mediation Analyses
Hypothesis                                                                                   Result

H1: An AI team member will be perceived to
  a) have greater ability than a human.                                                      Supported
  b) have greater integrity than a human.                                                    Supported
  c) have lower benevolence than a human.                                                    Supported

H2a: Ability, integrity, and benevolence will influence trustworthiness, with ability
  and integrity having stronger effects than benevolence.                                    Partially Supported (1)
H2b: An AI team member will be perceived to be more trustworthy than a human.                Not Supported
H2c: Ability, integrity, and benevolence will influence willingness to work with,
  with ability and integrity having stronger effects than benevolence.                       Supported
H2d: Individuals will be more willing to work with an AI team member than a human.           Not Supported

H3: Teams with an AI team member will have lower perceptions of conflict.                    Conditional (2)

H4a: Conflict will reduce process satisfaction.                                              Supported
H4b: Teams with an AI team member will have greater process satisfaction.                    Reversed (3)

H5: The performance of the focal team member will moderate the relationship between
  the type of the focal team member and
  a) perceptions of the team member (H1 and H2).                                             Not Supported
  b) perceptions of the team as a whole (H3 and H4).                                         Partially Supported (4)

H6: The performance of other team members will moderate the relationship between the
  type of the focal team member and
  a) perceptions of the team member (H1 and H2).                                             Not Supported (5)
  b) perceptions of team processes (H3 and H4).                                              Not Supported (5)

Notes: 1. All three had equal significant effects.
       2. A significant interaction shows the effect is conditional on high performance.
       3. Our findings were the opposite; there was significantly less satisfaction.
       4. Supported for conflict but not process satisfaction.
       5. There were direct effects but no moderation effects.
Table 6: Summary of Hypotheses Testing
Appendix for
AI Agents as Team Members:
Effects on Satisfaction, Conflict, Trustworthiness, and Willingness to Work With
Alan R. Dennis, Akshat Lakhiwal, and Agrim Sachdeva
Appendix A: Vignette Performance Manipulation Check
We examined the validity of our vignette-based manipulation of performance using a
between-subject pilot study with 52 undergraduate students (38% females, average age 19.4
years) recruited from the same introductory business course who did not participate in the
main study. Each participant was randomly presented with one of the four vignettes, which varied the
performance of the focal character (high or low) and the team's performance (high or low). They
read the vignette and then responded to a questionnaire which measured their perceptions of the focal
character's performance and the other team members' performance.
We used 11 items to measure the perceived performance of the focal character (Taylor)
and the other team members. We adapted the 5-item measure of in-role job performance by
[10], the 4-item measure of performance appraisal by [4], and the 2-item measure of effectiveness by [3].
Table A.1 presents the items using the wording for Taylor. Both measures were reliable: the
Cronbach alpha for Taylor was .92 and the Cronbach alpha for the other team members was .96.
We used linear regression to examine the effects of the manipulations in the treatments.
Participants’ perception of Taylor’s performance in the vignettes designed to show Taylor as
having high performance was significantly higher (F(1,50)=9.01, p<.001). Likewise, participants'
perceptions of other team members' performance were significantly higher in the vignettes
designed to show other team members as having high performance (F(1,50)=97.77, p<.001).
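For readers who want to reproduce this kind of check, the sketch below illustrates the two steps in R (the environment used for the measurement models in Appendix D). The data frame name (pilot), the item column names, and the use of the psych package are illustrative assumptions rather than the authors' actual analysis script.

# Illustrative sketch, not the original analysis script. Assumes a data frame
# `pilot` with the 11 perceived-performance items for Taylor (columns t1..t11,
# reverse-worded items already reflected) and a factor `taylor_perf` with
# levels "Low" and "High" indicating the vignette condition.
library(psych)                          # psych::alpha() for Cronbach's alpha

taylor_items <- pilot[, paste0("t", 1:11)]
psych::alpha(taylor_items)              # scale reliability (reported above as .92)

pilot$taylor_score <- rowMeans(taylor_items)

fit <- lm(taylor_score ~ taylor_perf, data = pilot)   # manipulation check
anova(fit)                              # F test analogous to the F(1,50) reported above
summary(fit)$coefficients               # direction and size of the manipulation effect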
Item
Source
Taylor always completed his duties.
[10]
Taylor met all the performance requirements of the project.
Taylor fulfilled all the responsibilities required by the project.
Taylor neglected aspects of the project that they agreed to
perform.
Taylor always failed to perform the essential duties.
Overall, I am satisfied with the performance of Taylor in the
group project.
The performance of Taylor has been, in general, excellent.
[4]
Taylor is superior to other students.
Overall, to what extent do you feel Taylor has been effectively
fulfilling his roles and responsibilities?
What is your personal view of Taylor in terms of his overall
effectiveness?
[3]
Overall, to what extent do you feel Taylor has been effectively
fulfilling his roles and responsibilities?
Table A.1 Items used to assess performance
Table A.2 Performance mean and standard deviations

Taylor's Performance Designed to be:      Low                   High
Team's Performance Designed to be:        Low       High        Low       High
Perceived Performance of Taylor           2.16      2.28        5.44      5.50
                                         (0.62)    (0.77)      (1.03)    (1.15)
Perceived Performance of Other
Team Members                              1.85      5.62        1.88      5.28
                                         (0.84)    (1.21)      (0.98)    (1.28)
Appendix B: Realism Check
We examined the realism of our vignette-based manipulation through a between- and
within-subject study with 75 undergraduate students (43% females, average age 19.5 years)
recruited from the same introductory business course who did not participate in the main study.
Half were randomly assigned to the AI-controlled digital human team member treatment and half
to the human team member treatment. Each participant was presented with all four vignettes in
random order. They read each vignette and then responded to a questionnaire which measured
their perception about the realism of the story. Participants could not proceed to the realism
questions until they correctly answered a question about whether the focal team member in the
vignette was human or AI.
We used 11 items measured on 7-point scales to assess the perceived realism of the story.
We used the four items of perceived realism suggested by [13] and the one overall item from [8].
We then used the framework of [8] to develop six new items that examined the context,
communication behavior, and portrayal of the characters in the vignette. The measure was
reliable: the Cronbach alpha was .93. The items are listed in Table B.1 and the means in Table B.2.
The neutral point on the perceived realism scale was 4.0. Four participants in the AI team
member treatment and four participants in the human team member treatment reported mean
perceived realism scores below 4.0. The minimum for the AI treatment was 3.40 and the
minimum for human team member treatment was 3.47. The mean perceived realism for the
human team member treatment was 5.26 (std=0.84, 95% CI=4.98-5.54) and the mean perceived
realism for the AI team member was 5.05 (std=0.75, 95% CI=4.80-5.30). None of the 95% CI
for any vignette contained 4.00. Since no confidence interval contains the neutral point, we
conclude that both the AI and human versions of the vignettes were perceived to be realistic.
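The confidence-interval check described above is straightforward to script; the following R sketch shows one way to do it. The data frame realism_long and its column names are assumptions made for illustration, not the authors' code.

# Illustrative sketch: per-treatment realism means and 95% confidence intervals,
# compared against the neutral scale point of 4.0. Assumes a data frame
# `realism_long` with one row per participant per vignette and columns
# `type` ("AI" or "Human"), `vignette` (1-4), and `realism` (mean realism score).
library(dplyr)

ci_table <- realism_long %>%
  group_by(type, vignette) %>%
  summarise(
    m     = mean(realism),
    se    = sd(realism) / sqrt(n()),
    lower = m - qt(.975, df = n() - 1) * se,
    upper = m + qt(.975, df = n() - 1) * se,
    contains_neutral = lower <= 4 & upper >= 4,   # realism is supported when this is FALSE
    .groups = "drop"
  )
ci_table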
We used a repeated measures linear regression to examine differences in perceptions of
realism between the AI and human team member treatments. Participants’ perception of the
realism of the vignettes was not significantly different between the AI team member treatments
and the human team member treatments (F(1,73)=1.27, p=.264). Likewise, the interaction term
testing whether one or more scenarios were perceived to be more or less realistic between the AI
and human treatments was not significant (F(3,219)=1.04, p=.376). A power analysis with
G*Power [5] indicated that the power of these analyses to detect a medium effect size of f=.25
was .99 (and .83 to detect a small effect size of f=.10). Thus, we conclude that the vignettes were
perceived to be equally realistic across the various treatments.
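The mixed between- and within-subject test reported above can be expressed with the standard error-term idiom of R's aov; the sketch below uses the same assumed realism_long data frame as the previous example and is illustrative only.

# Illustrative sketch of the repeated measures analysis described above:
# `type` (AI vs. human) is a between-subject factor, `vignette` is a
# within-subject factor, and `id` identifies the participant.
realism_long$id       <- factor(realism_long$id)
realism_long$type     <- factor(realism_long$type)
realism_long$vignette <- factor(realism_long$vignette)

fit_rm <- aov(realism ~ type * vignette + Error(id/vignette), data = realism_long)
summary(fit_rm)   # main effect of type and the type x vignette interaction, as reported above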
Item
Source
The events in the story are realistic
Louviere, Hensher and Swait
[13]
The events in the story are plausible
The events in the story mirror what can realistically happen
The events in the story mirror what can plausibly happen
I think the story was realistic
Hillen, Vliet, Haes and
Smets [8]
The communication in the story seemed real
Developed using the
framework of Hillen, Vliet,
Haes and Smets [8]
The communication in the story was authentic
The context of the story was realistic
The context of the story was authentic
The portrayal of the characters in the story was realistic
The portrayal of the characters in the story was authentic
Table B.1 Items used to assess realism
Table B.2 Realism mean and standard deviations
Taylor’s Performance
Low
High
Team’s Performance
Low
High
Low
High
AI
4.76
(1.07)
4.93
(0.90)
5.24
(1.07)
5.27
(0.93)
Human
5.21
(1.20)
5.13
(1.03)
5.30
(1.04)
5.39
(0.91)
Appendix C: List of Measures
Construct (Source; Reliability: Cronbach α) and Items

Trustworthiness ([2]; α = 0.85)
  Taylor can be trusted to make sensible decisions for the group's future.
  Overall, Taylor is very trustworthy.
  I lack confidence in Taylor. (R)
Ability ([14]; α = 0.89)
  I felt very confident about Taylor's skills.
  Taylor was well qualified.
Integrity ([14]; α = 0.90)
  Taylor's words were consistent with his/her actions.
  If Taylor said he/she was going to do something, he/she did it.
Benevolence ([14]; α = 0.81)
  Taylor was concerned about what was important to the team.
  Taylor was concerned about whether the team gets along well.
  Taylor cared about the other team members' feelings.
Willingness to work with ([1]; α = 0.97)
  I would prefer to work with Taylor on a future project.
  Working with Taylor would be beneficial for a future project.
  I want to work with Taylor on a future project.
  I would enjoy working with Taylor on a future project.
Process Satisfaction ([7]; α = 0.945)
  I was satisfied with this group's members.
  I was pleased with the way the team members worked together.
  I would be very satisfied working with this team.
Conflict: Task Conflict, Relationship Conflict, and Process Conflict ([11]; α = 0.932)
  How much conflict of ideas was there in this group?
  How frequently did group members have disagreements about the tasks they worked on?
  How often did group members have conflicting opinions about the project they worked on?
  How much relationship tension was there in this group?
  How often did people get angry while working in this group?
  How much emotional conflict was there in this group?
  How often were there disagreements about who should do what in this group?
  How much conflict was there in this group about task responsibilities?
  How often did group members disagree about resource allocation?
Appendix D: Measurement Model
To establish convergent validity, discriminant validity, and reliability of the measures, we
tested measurement models for two sets of theoretically related constructs. The first set of
constructs is associated with perceptions of the focal team member, and the second set is
associated with perceptions of team processes and outcomes.
We fit the models using lavaan version 0.6-7 and R version 4.0.3. The estimator for CFA
models with continuous indicators is maximum likelihood (ML) on the covariance matrix. Both
the models have acceptable model fit indices [9, 12]; see Table D.1.
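Because the paper names lavaan as the estimation tool, a minimal sketch of how Model 1 could be specified and its fit indices extracted is shown below. The indicator names follow Table D.2, but the data frame name and the exact syntax are illustrative assumptions, not the authors' script.

# Illustrative lavaan sketch for Model 1 (perceptions of the focal team member).
# Assumes a data frame `d` whose columns match the indicator names in Table D.2.
library(lavaan)

model1 <- '
  Trustworthiness =~ Trustworthiness1 + Trustworthiness2 + Trustworthiness3
  Ability         =~ Ability1 + Ability2
  Benevolence     =~ Benevolence1 + Benevolence2 + Benevolence3
  Integrity       =~ Integrity1 + Integrity2
  Willingness     =~ WillingnessWork1 + WillingnessWork2 +
                     WillingnessWork3 + WillingnessWork4
'

fit1 <- cfa(model1, data = d, estimator = "ML")
fitMeasures(fit1, c("srmr", "chisq", "df", "cfi", "tli", "rmsea"))   # cf. Table D.1
standardizedSolution(fit1)                                           # loadings as in Table D.2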
Fit Index          Threshold    Model 1: Perceptions of     Model 2: Perceptions of
                                Focal Team Member           Team Processes
SRMR               (≤ .10)      0.032                       0.025
Chi-square         -            239.357                     128.708
d.f.               -            67                          53
Chi-square/d.f.    (≤ 5)        3.572                       2.428
CFI                (≥ .90)      0.980                       0.987
TLI                (≥ .90)      0.973                       0.984
RMSEA              (≤ .10)      0.065                       0.049
Notes: (1) SRMR = Standardized Root Mean Square Residual; (2) CFI = Comparative Fit Index;
(3) TLI = Tucker-Lewis Index; (4) RMSEA = Root Mean Square Error of Approximation.
Table D.1 Fit indices for the measurement models
Convergent validity is established if the items expected to load on each factor have factor
loadings greater than 0.500 and average above 0.700. The indicators all show statistically
significant positive factor loadings, with standardized coefficients ranging from .513 to .940 (see
Tables D.2 and D.3).
Latent Factor      Indicator           Beta    SE
Trustworthiness    Trustworthiness1    0.882   0.056
                   Trustworthiness2    0.889   0.054
                   Trustworthiness3    0.663   0.065
Ability            Ability1            0.914   0.054
                   Ability2            0.808   0.051
Benevolence        Benevolence1        0.841   0.060
                   Benevolence2        0.676   0.068
                   Benevolence3        0.719   0.064
Integrity          Integrity1          0.909   0.051
                   Integrity2          0.888   0.054
Willingness to     WillingnessWork1    0.938   0.055
Work With          WillingnessWork2    0.914   0.054
                   WillingnessWork3    0.927   0.055
                   WillingnessWork4    0.943   0.054
Table D.2 CFA for Perceptions of Focal Team Member

Latent Factor         Indicator              Beta    SE
Process Satisfaction  ProcessSat1            0.942   0.056
                      ProcessSat2            0.909   0.059
                      ProcessSat3            0.921   0.060
Conflict              TaskConflict1          0.814   0.052
                      TaskConflict2          0.732   0.059
                      TaskConflict3          0.829   0.048
                      ProcessConflict1       0.759   0.054
                      ProcessConflict2       0.792   0.058
                      ProcessConflict3       0.747   0.051
                      RelationshipConflict1  0.773   0.070
                      RelationshipConflict2  0.817   0.050
                      RelationshipConflict3  0.783   0.052
Table D.3 CFA for Perceptions of Team Processes

Discriminant validity assesses whether factors are unique from other factors in a
measurement model. The average variances extracted (AVEs) are above 0.50. Construct-level
discriminant validity is established since the constructs meet the AVE-SV criterion (Tables D.4
and D.5) proposed by Fornell and Larcker [6]. Reliability of a factor is the internal consistency
of the inter-relationships between the indicators of that factor. We utilize Cronbach alpha to
establish reliability. Each construct has acceptable reliability (≥ .70).

                   Trustworthiness  Ability   Benevolence  Integrity  Willingness to Work With
Trustworthiness    0.66
Ability            0.63***          0.75
Benevolence        0.58***          0.62***   0.56
Integrity          0.63***          0.76***   0.57***      0.81
Willingness to
Work With          0.66***          0.86***   0.67***      0.80***    0.87
Note: p < .001 ‘***’, p < .01 ‘**’, p < .05 ‘*’; AVE is indicated along the diagonal.
Table D.4 Average Variance Extracted for Perceptions of Focal Team Member

                      Process Satisfaction  Conflict
Process Satisfaction  0.852
Conflict              -0.75***              0.610
Note: p < .001 ‘***’, p < .01 ‘**’, p < .05 ‘*’; AVE is indicated along the diagonal.
Table D.5 Average Variance Extracted for Perceptions of Team Processes
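As a worked illustration of the AVE-SV (Fornell and Larcker) criterion applied in Tables D.4 and D.5: the AVE of a construct is the mean of its squared standardized loadings, and discriminant validity requires it to exceed the variance the construct shares with any other construct (the squared inter-construct correlation). The sketch below hand-codes two constructs from Table D.2 purely as an example; it is not a general-purpose script.

# Illustrative check of the Fornell-Larcker criterion for one pair of constructs,
# using standardized loadings copied from Table D.2.
ave <- function(loadings) mean(loadings^2)       # average variance extracted

ave_ability   <- ave(c(0.914, 0.808))            # about 0.75, as in Table D.4
ave_integrity <- ave(c(0.909, 0.888))            # about 0.81, as in Table D.4

r_ability_integrity <- 0.76                      # inter-construct correlation (Table D.4)
shared_variance     <- r_ability_integrity^2     # about 0.58

# Discriminant validity holds for this pair if both AVEs exceed the shared variance
c(ability = ave_ability, integrity = ave_integrity) > shared_variance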
Appendix E: Vignettes
There were eight different vignettes. To simplify presentation in the appendix, we have
separated the introduction to the vignette, which describes the nature of the focal team member
(AI or human), from the conversations of the team members, which varied the performance of
the focal team member (high or low) and the performance of the other team members (high or
low).
AI Version
[university name redacted] offers an online course (BUS-O-399), which is open for enrolment to
students from all Universities across the United States.
Jordan and Leslie are undergraduate students at the business schools of two different
Universities in the U.S., and they are required by their respective Universities to enroll in BUS-
O-399 for the fulfilment of their program degrees. They are required to obtain a passing grade in
this course, in addition to the other courses which they have enrolled for at their respective
Universities.
The grading for the course BUS-O-399 is based on a group project, for which Jordan and Leslie
have been selected to work together. The groups have been created by the course instructor
through random selection and cannot be changed; all members of a group will receive the same
grade. Additionally, the project must be completed and submitted within two weeks; late
submissions would not be accepted.
Jordan and Leslie would also have a third member in their group – Taylor.
Taylor is an artificially intelligent virtual agent which has been appointed to work with the
students as a jointly responsible group member of the project groups. An artificially intelligent
virtual agent (like Alexa or Siri) is a state-of-the-art software program, designed to convincingly
simulate a human, is capable of performing multiple tasks, and conducting a conversation via
text or voice based methods. Taylor is a text based virtual agent.
[university name redacted] would also evaluate Taylor based on the performance of the
respective group in the course.
The project requires each group to build a searchable website for a company’s electronic
products. Product information sheets (all real products) are available for all products, but some
information is very technical, so the group needs to rewrite some of it to make it easier for
customers to understand. The website should provide links to product reviews from online
platforms which provide electronic products related news, articles, blogs and reviews (such as
CNET, Engadget, TechCrunch, Gizmodo, etc.).
To make the communication effective, the team members communicated in a group created on
an online messenger (Never used voice or video call). All communication was recorded on the
messenger.
Human Version
[university name redacted] offers an online course (BUS-O-399), which is open for enrolment to
students from all Universities across the United States.
Jordan, Leslie and Taylor are undergraduate students at the business schools of three different
Universities in the U.S., and they are required by their respective Universities to enroll in BUS-O-
399 for the fulfilment of their program degrees. They are required to obtain a passing grade in
this course, in addition to the other courses which they have enrolled for at their respective
Universities.
The grading for the course BUS-O-399 is based on a group project, for which Jordan, Leslie and
Taylor have been selected to work together. The groups have been created by the course
instructor through random selection and cannot be changed; all members of a group will receive
the same grade. Additionally, the project must be completed and submitted within two weeks;
late submissions would not be accepted.
The project requires each group to build a searchable website for a company’s electronic
products. Product information sheets (all real products) are available for all products, but some
information is very technical, so the group needs to rewrite some of it to make it easier for
customers to understand. The website should provide links to product reviews from online
platforms which provide electronic products related news, articles, blogs and reviews (such as
CNET, Engadget, TechCrunch, Gizmodo, etc.).
To make the communication effective, the team members communicated in a group created on
an online messenger (Never used voice or video call). All communication was recorded on the
messenger.
High Other Team Member Performance, High Focal Team Member Performance
The messages sent by the three group members are shown below.
Monday, 23rd September (Day 1)
Jordan:
Hello, my name is Jordan.
Taylor:
Hi Everyone! My name is Taylor, it is great to meet you all!
This is such an interesting project, I look forward to working with you two! 🙂
Leslie:
Hi, everyone. I am Leslie.
I also think it would be interesting to work on this project.
Taylor:
Let us plan the way forward!
I can begin putting together links of product reviews and articles from online
platforms.
Other major tasks at this stage should include writing content and code for the website.
What do you think?
Leslie:
That sounds good, Taylor!
If it’s okay, I’d like to work on the code for the website. I just bought a book on
HTML5, and I want to use this project to learn it. I’m not very good at writing content,
so I think this is where I can help most.
Jordan:
I can do the writing. I am extremely busy with some of my other projects, but I'll
definitely do my part.
Taylor:
Sounds great!
Let us re-group in a few days once we are done with these.
Tuesday, 24th September (Day 2)
Taylor:
Hi Team!
Just wanted to remind that the deadline is less than two weeks away. Hope everything
is going smoothly.
Jordan:
Taylor, can you e-mail the links from review platforms to me by next week?
Taylor:
Sure, Jordan.
Thursday, 26th September (Day 4)
Jordan:
Leslie, can you confirm if the web pages would be up and running by next week?
Leslie:
Yes, Jordan. I'll also send an update about them to the group by next week.
Friday, 27th September (Day 5)
Jordan:
Hey team! I have mailed the content for the website to you two.
Taylor:
That is very timely and efficient, Jordan!
Tuesday, 1st October (Day 9)
Taylor:
Jordan, I have reviewed the content for the website, it looks great!
Leslie:
It looks good to me as well, just needs some fine tuning.
Jordan:
Thanks for the feedback, guys! Let me quickly review it once again.
Taylor:
I have also mailed the links from the review platforms to you two. I excluded two
sources from my search as they are not very popular. Let me know what you guys
think.
Jordan:
Good job with the review links, Taylor! Great idea about removing some less popular
sources.
Leslie, how is the code for the website coming along? Can you please send us the link?
Taylor:
Thank you, Jordan.
Wednesday, 2nd October (Day 10)
Leslie:
The coding part is taking more time than I had initially expected, but I am positive that
it would be done by tomorrow
Jordan:
Sounds good, Leslie. Let us know if you need any help.
Leslie:
Sure thing!
Thursday, 3rd October (Day 11)
Taylor:
Leslie, how is the code for the website coming along? Can you please send us the link?
Leslie:
I am almost finishing it, Taylor. It shouldn't take me long from here.
Jordan:
That sounds great, Leslie!
Let me know if you need any help, I can help you in finishing it.
Leslie:
Thanks for asking, Jordan. But I think I am almost there and would like to complete it.
Taylor:
Sounds good, Leslie! Let us get the website up by tomorrow.
We should also create the database for the search function. Jordan, are you good with
databases?
Jordan:
I have never worked on databases, but I would love to try.
Taylor:
Great! Can you set-up the database for the search function?
Jordan:
Sure.
Taylor:
The deadline is on Monday morning. If we work together, we can get this done!
Friday, 4th October (Day 13)
Taylor:
Thank you for sending the link to the website, Leslie. I was able to load it perfectly!
Jordan:
Impressive work, Leslie!
Leslie:
Thank you, guys.
Jordan:
I got everything done but didn’t have time to double-check it. I cannot do any more
work tonight. I promised my friends that we would go to movies tonight. Can someone
double-check that everything is correct?
Leslie:
Thank you for letting us know, Jordan.
Taylor:
Thanks, Jordan.
Leslie, can you check the database?
Leslie:
Sure, I’ll complete the database check tonight. Taylor, can you attach everything
together and run all the test cases on the website before the submission on Monday?
Taylor:
Sure, Leslie. I can create a list of test cases for the website.
Leslie:
Taylor, I just completed the check. Everything looks good. Over to you now.
Please run the tests and let us know if you find any errors.
Taylor:
Great job, team!
I will begin running the tests on the website!
Monday, 7th October
Taylor:
Hi Team!
All tests have completed successfully!
Jordan:
Thanks, Taylor.
I also checked the website and found that all features are working. We should be good
to go from here.
Leslie:
I agree, Jordan. Let us submit it now.
High Other Team Member Performance, Low Focal Team Member Performance
The messages sent by the three group members are shown below.
Monday, 23rd September (Day 1)
Leslie:
Hi, everyone. I am Leslie.
This is such an interesting project, I look forward to working with you two! 🙂
Taylor:
Hi everyone! My name is Taylor.
Jordan:
Hello, my name is Jordan. I also think it would be interesting to work on this project.
Leslie:
If it’s okay, I’d like to take care of the code for the website. I just bought a book on
HTML5, and I want to use this project to learn it. I’m not good with writing content, so
I think this is where I can help most.
Jordan:
I can do the writing. I am extremely busy with some of my other projects, but I'll
definitely do my part.
Leslie:
Taylor, can you please begin putting together links of product reviews and articles
from online platforms?
Taylor:
Sure, Leslie!
Tuesday, 24th September (Day 2)
Jordan:
Taylor, can you find all the links and e-mail them to me by next week?
Taylor:
Sure, Jordan.
Thursday, 26th September (Day 4)
Jordan:
Leslie, can you confirm if the web pages would be up and running by next week?
Leslie:
Yes. I'll also send an update about them to the group by next week.
Friday, 27th September (Day 5)
Jordan:
Hey team! I have mailed the content for the website to you two
Leslie:
That is very timely and efficient, Jordan!
Taylor, can you please review the content for any grammatical errors?
Taylor:
Sure, Leslie.
Tuesday, 1st October (Day 9)
Leslie:
Jordan, the content looks good to me as well, just needs some fine tuning.
Taylor, did you review the content which Jordan had mailed us?
Taylor:
Yes, Leslie. The content looks good.
Jordan:
Thanks for the feedback, guys! Let me quickly review it once again.
Taylor:
I have mailed the links from the review platforms to you two.
Jordan:
Taylor, I cannot find reviews from CNET and TechCrunch in your mail. These are the
most popular platforms, why did you not include reviews from these two sources?
Taylor:
Jordan, I did not search for reviews on these two sources.
Jordan:
Since these are quite important sources, I’ll add the review links from these two, and
send the list back to you guys.
Leslie, how is the code for the website coming along? Can you please send us the link?
Wednesday, 2nd October (Day 10)
Leslie:
The coding part is taking more time than I had initially expected, but I am positive that
I would be done by tomorrow
Jordan:
Sounds good, Leslie. Let us know if you need any help.
Leslie:
Sure thing!
Thursday, 3rd October (Day 11)
Jordan:
Leslie, how is the code for the website coming along? Can you please send us the link?
Leslie:
I am almost finishing it, Jordan. It shouldn't take me long from here.
Jordan:
That sounds great, Leslie!
Let me know if you need any help, I can help you in finishing it.
Leslie:
Thanks for asking, Jordan. But I think I am almost there and would like to complete it.
Jordan:
Sounds good, Leslie! Let us get the website up by tomorrow.
Leslie:
We also need to make the database for the search function. Jordan are you good with
databases?
Jordan:
I have never worked with databases before, but I would love to try.
Taylor:
Can you set-up the database for the search function, Jordan?
Jordan:
Sure, Taylor.
Friday, 4th October (Day 13)
Taylor:
Leslie, thank you for sending the link to the website. The pages load well, and I could
not detect any errors.
Jordan:
Indeed, good job Leslie! However, I notice that some pages are missing.
Taylor, didn’t you notice that there are missing pages?
Taylor:
I did not detect anything missing, Jordan.
Leslie:
Thank you, guys.
Yes, Jordan. I noticed that a couple of pages needed some formatting, so I have
removed them from the current version of the code. These pages will be updated by
tonight, positively.
Jordan:
Perfect, Leslie!
I got everything done but didn’t have time to double-check it. I cannot do any more
work tonight. I promised my friends that we would go to movies tonight. Can someone
double-check that everything is correct?
Leslie:
Thank you for letting us know, Jordan.
Taylor:
Leslie, can you complete the database?
Leslie:
Sure, I’ll complete the database check tonight. Taylor, can you attach everything
together and run all the test cases on the website before the submission on Monday?
Taylor:
Sure, Leslie. I can create a list of test cases for the website.
Leslie:
Taylor, I just completed the check. Everything looks good. Over to you now.
Please run the tests and let us know if you find any errors.
Taylor:
Sure, Leslie.
Monday, 7th October
Leslie:
Taylor, did you run the test cases!?
Taylor:
Leslie, the test could not be fully completed as there was an error which caused the test
script to halt.
Jordan:
You should have notified us earlier, Taylor.
I just checked the website and found that the features are working. We should be good
to go from here.
Leslie:
I agree, Jordan. Let us submit it now.
Low Other Team Member Performance, High Focal Team Member Performance
The messages sent by the three group members are shown below.
Monday, 23rd September (Day 1)
Jordan:
Hello, my name is Jordan.
Taylor:
Hi Everyone! My name is Taylor, it is great to meet you all!
This is such an interesting project, I look forward to working with you two! 🙂
Leslie:
Hi, everyone. I am Leslie.
Taylor:
Let us plan the way forward!
I can begin putting together links of product reviews and articles from online
platforms.
Other major tasks at this stage include writing content and code for the website.
What do you think?
Leslie:
I can take care of the code for the website. I just bought a book on HTML5, and I
want to use this project to learn it. I’m not good with writing content, so I think this
is what I can do.
Jordan:
I guess I can do the writing. But let me be honest! I'm very busy and this is not the
most important project I have. I'll do my part though.
Taylor:
Sounds great!
Let us re-group in a few days once we are done with these.
Tuesday, 24th September (Day 2)
Taylor:
Hi Team!
Just wanted to remind that the deadline is less than two weeks away. Hope
everything is going smoothly.
Jordan:
Taylor, can you find all the links from online platforms and e-mail them to me by
next week?
Taylor:
Sure, Jordan.
Thursday, 26th September (Day 4)
Jordan:
Leslie, can you confirm if the web pages would be up and running by next week?
Leslie:
I'll send an update about them by next week...
Friday, 27th September (Day 5)
Jordan:
Hi team!
I have mailed the content for the website to you two.
Taylor:
That is very timely and efficient, Jordan!
Tuesday, 1st October (Day 9)
Taylor:
Jordan, I have evaluated the content for the website, it seems to have some
grammatical errors.
Leslie:
It looks O.K. to me.
Jordan:
I said that I was going to do it and I did it. This is as good as it gets.
Leslie:
Let us keep it as it is for now. We can work on it later, if we have time for it.
Taylor:
I have mailed the links from the review platforms to you two. I excluded two
sources from my search as they are not very popular. Let me know what you guys
think.
Jordan:
Sure, Taylor.
I will review it in the end if I have time!
Leslie:
Let us keep it as it is for now, Taylor!
Taylor:
Sure, Leslie.
How is the code for the website coming along? Can you please send us the link?
Wednesday, 2nd October (Day 10)
Leslie:
SEND WHAT??
Jordan:
The link to the website??
Leslie:
Did I say that I was going to do that? I must have forgotten that I said I would do
that...
Taylor:
Leslie, you said on 26th September that you would work on the website and will
also send an update to the team.
Thursday, 3rd October (Day 11)
Taylor:
Leslie, how is the code for the website coming along? Can you please send us the
link?
Leslie:
I haven't started yet, but it shouldn't take me long.
Jordan:
But didn’t you e-mail us last night and say that the site was complete and online?
Leslie:
Sorry. I thought I would have it done by now.
Jordan:
E-mail me the code you have so far, I’ll wrap it up.
Leslie:
Thanks, but no thanks; I'll do it.
Taylor:
Sure, Leslie! Please note that we just have two more days to go before submission.
We will also require a database for the search function. Jordan, are you good with
databases?
Jordan:
Not really...but I am willing to try!!!
Taylor:
Great, can you please set-up the database for the search function?
Jordan:
Sure.
Taylor:
The deadline is on Monday morning. If we work together, we can get this done!
Friday, 4th October (Day 13)
Taylor:
Thank you for sending the link to the website, Leslie. I was able to load it perfectly!
However, I noticed that a couple of pages were missing
Leslie:
A couple of pages were not loading, so I just removed them.
Jordan:
Let us work with that for now, Leslie.
Taylor:
Jordan, how is the database for the website search function coming along?
Jordan:
I cannot do any more work tonight. I promised my friends that we would go to
movies tonight.
I have uploaded the progress so far...Good luck!!
Leslie:
I wish you had told us earlier, Jordan.
Taylor:
Leslie, can you complete the database?
Leslie:
I do not think we have enough time to complete it now.
Let us just use whatever Jordan has completed, we can make changes in the end if
we have time.
Can you put everything together and run all the test cases on the website before the
submission on Monday?
Taylor:
Sure, Leslie. I can create a list of test cases for the website.
Leslie:
Good!
Taylor:
I will now begin running the test scripts on the website.
Monday, 7th October
Taylor:
The test could not be fully completed as there was an error which caused the test
script to halt.
However, most of the tests were completed successfully.
Jordan:
I do not think that we have enough time for anything now. The website loads just
fine, let us just submit!
Leslie:
I agree, let us submit what we have.
Low Other Team Member Performance, Low Focal Team Member Performance
The messages sent by the three group members are shown below.
Monday, 23rd September (Day 1)
Leslie:
Hi, everyone. I am Leslie.
Taylor:
Hi everyone! My name is Taylor.
Jordan:
Hello, my name is Jordan.
Leslie:
I can take care of the code for the website. I just bought a book on HTML5, and I
want to use this project to learn it. I’m not good with writing content, so this is
where I can help most.
Jordan:
I guess I can do the writing. But let me be honest! I'm very busy and this is not the
most important project I have. I'll do my part though.
Leslie:
Taylor, can you begin putting together links of product reviews and articles from
online platforms?
Taylor:
Sure, Leslie.
Tuesday, 24th September (Day 2)
Jordan:
Taylor, can you find all the links and e-mail them to me by next week?
Taylor:
Sure, Jordan.
Thursday, 26th September (Day 4)
Jordan:
Leslie, can you confirm if the web pages would be up and running by next week?
Leslie:
I'll send an update about them to the group by next week, Jordan.
Friday, 27th September (Day 5)
Jordan:
Hey team! I have mailed the content for the website to you two
Leslie:
Taylor, can you review the content?
Taylor:
Sure, Leslie.
Tuesday, 1st October (Day 9)
Leslie:
The content looks O.K. to me, Jordan.
Taylor, did you review the content shared by Jordan?
Taylor:
Yes, Leslie. I found some grammatical errors.
Jordan:
I said that I was going to do it and I did it. This is as good as it gets.
Leslie:
Let us keep it as it is for now. We can work on it, if we have time for it in the end.
Taylor:
I have also mailed the links from the review platforms to you two.
Jordan:
Taylor, I cannot find reviews from CNET and TechCrunch in your mail. These are
the most popular platforms, why did you not include reviews from these two?
Taylor:
Jordan, I did not search for reviews on these two sources.
Leslie:
Guys, let us work with what we have for now!
Jordan:
Leslie, how is the code for the website coming along? Can you please send us the
link?
Wednesday, 2nd October (Day 10)
Leslie:
SEND WHAT??
Jordan:
The link to the website??
Leslie:
Did I say that I was going to do that? I must have forgotten that I said I would do
that...
Thursday, 3rd October (Day 11)
Jordan:
Leslie, any update on the website?
Leslie:
I haven't started yet, but it shouldn't take me long.
Jordan:
But didn’t you e-mail us last night and say that the site is online?
Leslie:
Sorry. I thought I would have it done by now!
Jordan:
E-mail me the code you have so far, I'll wrap it up!
Leslie:
Thanks, but no thanks; I'll do it.
Jordan:
Sure. Let us get the website up by tomorrow.
Leslie:
We also need to make the database for the search function. Jordan are you good
with databases?
Jordan:
Not really... but I am willing to try!!!
Taylor:
Jordan, can you set-up the database for the search function?
Jordan:
Sure.
Friday, 4th October (Day 13)
Taylor:
Leslie, thank you for sending the link to the website. The pages load well, and I
could not detect any errors.
Jordan:
Taylor, didn’t you notice that some pages are missing?
Leslie:
A couple of pages were not loading, so I just removed them.
Jordan:
Let us work with it for now, Leslie.
….
I cannot do any more work tonight. I promised my friends that we would go to
movies tonight.
I have uploaded the progress so far... Good luck!!
Leslie:
I wish you had told us earlier that you will not have time tonight.
Taylor:
Leslie, can you complete the database?
Leslie:
I do not think we have enough time to complete it now.
Let us just use whatever Jordan has completed, we can make changes in the end if
we have time.
Can you put everything together and run all the test cases on the website before the
submission on Monday?
Taylor:
Sure, Leslie. I can create a list of test cases for the website.
Leslie:
Good! Just run the tests and let us know if you find any errors.
Monday, 7th October
Leslie:
Taylor, did you run the test cases!?
Taylor:
Leslie, the test could not be fully completed as there was an error which caused the
test script to halt.
Jordan:
You should have notified us earlier!
I think we do not have enough time for anything now. The website loads just fine,
let us just submit!
Leslie:
I agree, let us submit what we have.
References
1. Cummings, J., and Dennis, A.R. Virtual first impressions matter: the effect of enterprise social
networking sites on impression formation in virtual teams. MIS Quarterly, 42, 3 (2018), 697-718.
2. Dennis, A.R., Robert Jr, L.P., Curtis, A.M., Kowalczyk, S.T., and Hasty, B.K. Research note—trust
is in the eye of the beholder: A vignette study of postevent behavioral controls' effects on individual
trust in virtual teams. Information Systems Research, 23, 2 (2012), 546-558.
3. Donia, M., O'Neill, T.A., and Brutus, S. Peer feedback increases team member performance,
confidence and work outcomes: A longitudinal study. Academy of Management Proceedings, 2015, p.
12560.
4. Farh, C.I., Lanaj, K., and Ilies, R. Resource-based contingencies of when team–member exchange
helps member performance in teams. Academy of Management Journal, 60, 3 (2017), 1117-1137.
5. Faul, F., Erdfelder, E., Lang, A.G., and Buchner, A. G*Power 3: A flexible statistical power
analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39,
2 (2007), 175-191.
6. Fornell, C., and Larcker, D.F. Structural equation models with unobservable variables and
measurement error: Algebra and statistics. Journal of Marketing Research, 18, 3 (1981), 382-388.
7. Fuller, M.A., Hardin, A.M., and Davison, R.M. Efficacy in technology-mediated distributed teams.
Journal of Management Information Systems, 23, 3 (2006), 209-235.
8. Hillen, M.A., Vliet, L.M.v., Haes, H.C.J.M.d., and Smets, E.M.A. Developing and administering
scripted video vignettes for experimental research of patient–provider communication. Patient
Education and Counseling, 91 (2013), 295-309.
9. Hu, L.t., and Bentler, P.M. Cutoff criteria for fit indexes in covariance structure analysis:
Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary
Journal, 6, 1 (1999), 1-55.
10. Janssen, O., and Van Yperen, N.W. Employees' goal orientations, the quality of leader-member
exchange, and the outcomes of job performance and job satisfaction. Academy of Management
Journal, 47, 3 (2004), 368-384.
11. Jehn, K.A., and Mannix, E.A. The dynamic nature of conflict: A longitudinal study of intragroup
conflict and group performance. Academy of Management Journal, 44, 2 (2001), 238-251.
12. Kline, R.B. Principles and practice of structural equation modeling. Guilford Publications, 2015.
13. Louviere, J.J., Hensher, D.A., and Swait, J.D. Stated Choice Methods: Analysis and Application.
Cambridge, UK: Cambridge University Press, 2000.
14. Robert, L.P., Dennis, A.R., and Hung, Y.-T.C. Individual swift trust and knowledge-based trust in
face-to-face and virtual team members. Journal of Management Information Systems, 26, 2 (2009),
241-279.