Taxonomy of Pedagogical Conversational Agents
Forty-Second International Conference on Information Systems, Austin 2021
Pedagogical Agents for Interactive Learning:
A Taxonomy of Conversational Agents in
Education
Completed Research Paper
Florian Weber
University of Kassel
Henschelstraße 4,
34127 Kassel
weber@uni-kassel.de
Thiemo Wambsganss
University of St. Gallen
Müller-Friedberg-Str. 8,
CH-9000 St. Gallen
thiemo.wambsganss@unisg.ch
Dominic Rüttimann
University of St. Gallen
Müller-Friedberg-Str. 8,
CH-9000 St. Gallen
dominic.ruettimann@student.unisg.ch
Matthias Söllner
University of Kassel
Henschelstraße 4,
34127 Kassel
soellner@uni-kassel.de
Abstract
As distance learning and large-scale learning environments continue to grow, interactive
knowledge distribution is becoming a more challenging task. Although studies show that
active and emotional student engagement is the best way to achieve promising
educational outcomes, educational institutions still face challenges in providing students
with interactive learning scenarios. Pedagogical conversational agents (PCAs) offer one
way for educational settings to create such scenarios. Despite the increasing research
interest in PCAs, there is a lack of shared knowledge about the different design
elements of PCAs. Hence, our goal is to develop a taxonomy to classify PCAs into three
main categories (structure, technology, task/people). In addition, we aim to provide
preliminary results on possible outcome variables that could result from the presented
design elements of the taxonomy. Our findings are intended to provide researchers and
practitioners with deeper insight into the field of PCAs to possibly guide design decisions.
Keywords: Pedagogical Conversational Agents (PCA), Taxonomy, Design Research
Introduction
According to modern learning theories such as the ICAP (Interactive, Constructive, Active, and Passive)
framework, interactive learning environments lead to higher levels of student engagement (compared to
passive, active, or constructive environments) and thus to increased learning outcomes (Chi and Wylie
2014). Interactive learning deepens a learner’s interaction, e.g., by comparing the learning materials with
prior knowledge (constructive engagement), discussing with others, or asking and answering questions
(interactive engagement). However, providing interactive learning scenarios for students is still a challenge
and has not been solved in most pedagogical scenarios (e.g., Kulik and Fletcher, 2016). Educational
institutions are limited in providing interactive environments due to financial and organizational
constraints. Especially due to the rise of Massive Open Online Courses (MOOCs), where distance-learning scenarios
hinder interactive elements, and mass lectures at public universities, students mainly study in passive or
non-interactive learning scenarios. The current Covid-19 pandemic and the accompanying restrictions on
face-to-face instruction further reduce interactive learning and ultimately lead to more isolated learning.
The low supervision ratio and increased mass-taught courses intuitively lead to limited interaction and less
individual supervision. Studies show that this lack of interaction leads to low learning outcomes, high
dropout rates, and increased dissatisfaction with the overall learning experience (Eom and Ashill 2016;
Hone and El Said 2016).
A solution to provide interactive learning environments to students at scale might be using technology-
mediated learning scenarios. Driven by technological advances in Machine Learning and Natural Language
Processing (NLP), pedagogical conversational agents (PCA) have recently evolved as a novel class of
interactive learning tools, which provide students with interactive learning support in their natural
language at scale. In general, PCAs are a sub-class of Conversational Agents (CAs). CAs communicate with
their users via a dialog-based interface. The use of CAs is increasing in many fields such as medicine
(Kowatsch et al. 2017), the service industry (Nuruzzaman and Hussain 2018), or finance (Quah and Chua
2019). In teaching and education, PCAs are used to interact with learners as a peer (Kim, 2018), as a tutor
(Ruan et al. 2019; Wambsganss, Söllner, and Leimeister 2020), an instructor, or as a motivator (Fryer et
al. 2017; Wambsganss, Winkler, Söllner, et al. 2020). The successful application of PCAs to meet the
individual needs of learners and to increase their learning outcomes has been demonstrated for
various outcomes such as problem-solving skills (Winkler et al. 2020), the learning of
factual knowledge (Ruan et al. 2019), and the training of argumentation (Wambsganß et al. 2021; Wambsganss,
Kueng, et al. 2021).
Although PCAs have attracted considerable research interest recently (Zierau et al. 2020), there is still no
unified classification of conversational agents (pedagogical agents) in educational environments. A
comprehensive knowledge base that presents dimensions and subordinate characteristics could help
researchers and practitioners better understand, evaluate, and develop PCAs. Although initial literature
reviews on PCAs in education have emerged in recent years (e.g., Hobert and Wolff, 2019; Wellnhammer
et al., 2020), research is scattered across various sociotechnical perspectives, resulting in an acute lack of
an integrative perspective. Hobert and Wolff (2019) have already classified PCAs in their research. Their
study focuses on the content aspects rather than on how the agent should be used (Hobert and Wolff 2019).
However, there is a lack of a precise classification into dimensions representing design features and thus
can address design decisions (Wellnhammer et al. 2020). In this regard, information systems (IS) research
can provide a promising point of view to look at a given IS from the perspective of interactive learning
support and classify it into relevant characteristics (e.g., the domain of use, interaction design, and technical
design). The classification into different dimensions and characteristics can ultimately yield different
configurations of technological embedding and outcomes for different stakeholders (Bostrom and Heinen
1977).
Consequently, a systematic classification of empirical studies of PCAs from this perspective would allow
researchers to design and evaluate PCAs more effectively. It can also advance the theorization of how
different technological embeddings may lead to certain outcome variables (such as better grades or
increased motivation). Because there is also a lack of work that names and classifies the different design
elements of PCAs in a holistic view (Hobert and Wolff 2019; Wellnhammer et al. 2020; Winkler and
Soellner 2018), this paper focuses on the following research questions (RQ):
RQ1: What are the dimensions and characteristics of pedagogical conversational agents from a
sociotechnical perspective?
RQ2: What specific learning outcomes and perception measures result from different characteristics and
design elements of pedagogical conversational agents?
To answer the research questions, we develop a taxonomy of design elements for PCAs that characterize
different PCAs in education. We developed the taxonomy in an iterative process following an established
framework (Nickerson et al. 2013). In several steps, we classify and rank the PCA dimensions and
characteristics included in 92 publications. The taxonomy was continuously evaluated and revised through
the iterative process. The revision was based on the recommendations of seven research experts who are
either familiar with the design of PCAs or have a background in education. In a second step, we derived a
taxonomy based on the resulting design elements. We then conduct a descriptive analysis of the results
from the taxonomy and the systematic literature review. Finally, building on this, we analyze the
relationships between the different design elements of PCAs and the possible outcome variables. We hope
to classify possible relationships between the dimensions and design elements of the taxonomy and the
outcomes based on empirical data from the relevant papers. In the analysis, we follow the approach of
(Jeyaraj et al. 2006).
Theoretical background
Conversational Agents
CAs are information systems that communicate with a user through natural language. The interaction can
happen either via text, voice (Dahiya 2017), or buttons (Segedy et al. 2013). Thus, different
dialog systems such as chatbots, artificial conversation units, and virtual assistants like Amazon's Alexa or
Siri can be united under the term CA. CAs are typically used in conversational systems for various reasons,
including information retrieval or all types of services (Serban et al. 2017). They are successfully embedded
in various areas, including marketing, customer service, technical support, and education (Smutny and
Schreiberova 2020). CAs such as Apple's Siri, Amazon's Alexa, or Google's Assistant are at the forefront of
voice recognition and artificial intelligence technology (Hoy 2018). Text-based CAs typically follow a set of
established rules or a predefined flow to respond to questions posed by a user. CA applications have a long history, with
notable examples being ELIZA, ALICE, Claude, and HeX (Wallace 2009). ELIZA, widely regarded as the first CA,
was developed by Joseph Weizenbaum in the mid-1960s and was meant to mimic a psychotherapist
(Weizenbaum 1966). The basic idea of interacting with technological artifacts through natural language
thus emerged as early as the 1960s (Wellnhammer et al. 2020). Today, CAs are already capable of capturing
a wide range of use cases and steering users in the desired direction (Winkler and Soellner 2018).
Additionally, there is a broad field of research on CAs that deals with the interaction of CAs in terms of user
responses (Diederich et al., 2020), such as user trust (Elson et al. 2018) or empathy (McQuiggan and Lester
2007). Other strands of research tend to invoke the context in which CAs are used, e.g., in financial
consulting (Morana et al. 2020) or data analysis (Matsushita et al. 2004).
Pedagogical Conversational Agents to Foster Interactive Learning
CAs have also long been used as pedagogical agents in educational environments. Since the 1970s, PCAs
have been developed in digital learning environments, commonly known as Intelligent Tutoring Systems
(Laurillard 2013). PCAs are essential for education, as they not only leverage technological advances but
also address emotional, cognitive, and social educational concerns (Gulz et al. 2011). The emergence of PCAs
in education also increases IS research interest in how PCAs can be used for teaching and learning (Smutny
and Schreiberova 2020). Advantages of PCAs and CAs systems include around-the-clock availability,
immediate response times (De Keyser et al. 2019; Xu et al. 2017), and the ability to respond naturally
through a conversational interface (Cassell 2000; Wambsganss, Kueng, et al. 2021). Besides, PCAs provide
direct interactions with users (Kim et al., 2019), support for engagement (Lundqvist et al. 2013;
Wambsganß et al. 2021; Wambsganss, Kueng, et al. 2021), and help students to establish goals (Pérez et al.
2016). All these advantages make PCAs interesting for learning and underline their growing importance for
the learning environment.
Compared to traditional technology-enhanced learning systems, students become increasingly engaged
when using interactive dialogue-based systems, like PCAs. Chi and Wylie's (2014) ICAP framework
demonstrates that learners' engagement should move "from passive to active to constructive to interactive".
The change from passive to interactive learning ultimately leads to better learning outcomes (Chi and Wylie
2014). When learners passively engage with learning materials, they merely consume or receive them, such
as when reading a text. If they learn actively, however, they manipulate the learning material themselves, for example by
marking essential sections of a document. According to Chi and Wylie (2014), the two forms of learning in which
learners engage most deeply with the learning materials are constructive and interactive learning. In
constructive learning, learners compare the content of the learning materials to their prior knowledge. In
contrast, during an interactive engagement, learners discuss or engage with others through questions and
answers. All four components of the ICAP Framework reflect learners' different behaviors and learning
processes. This allows for inferences about different learning outcomes (Chi and Wylie 2014). If this
hypothesis is accepted, then both adaptive and interactive dialogue-based learning systems can be expected
to promote learner engagement. The increased engagement, and the associated increase in learning output,
can be explained by the dialog-based interaction. PCAs allow direct communication with students and,
for example, discussions about the learning content or individual assistance, just as human instructors
would in a face-to-face context. Studies have already applied the ICAP framework and show how students
are interactively engaged in learning problem solving (Winkler et al. 2019) as well as programming skills
(Hobert 2019). Following existing research on PCAs and the ICAP framework, we aim to provide
researchers with a novel classification of PCAs and their design elements to better design the interaction of
PCAs in the future and achieve better learning results through their use.
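The ICAP modes described above form an ordered scale of engagement. As a minimal illustrative sketch (not an artifact of the paper), they can be encoded as a simple lookup that preserves this ordering:

```python
# The four ICAP engagement modes, ordered from least to most engaged, with the
# example behaviors named in the text (illustrative only):
ICAP_MODES = [
    ("passive", "consuming material, e.g. reading a text"),
    ("active", "manipulating material, e.g. marking essential sections"),
    ("constructive", "comparing material with prior knowledge"),
    ("interactive", "discussing, asking and answering questions"),
]

def engagement_rank(mode):
    """Higher rank corresponds to deeper engagement under the ICAP framework."""
    return [name for name, _ in ICAP_MODES].index(mode)

# Dialogue-based systems such as PCAs target the highest-ranked mode:
assert engagement_rank("interactive") > engagement_rank("constructive") > \
       engagement_rank("active") > engagement_rank("passive")
```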
Existing Classifications on Pedagogical Conversational Agents
PCAs are becoming increasingly important in the field of education and could become an integral part of
education in the future (Kowatsch et al. 2017). However, much work and fine-tuning is still
needed to fulfill this potential (Wellnhammer et al. 2020). In order to make this process as simple as possible
and to support both practitioners and researchers, some research work has already been done on PCAs and
their classification to support the design and implementation of PCAs. As mentioned earlier, Hobert and
Wolff (2019) have already developed a classification of conversational agents in education. This work
provides an excellent initial grounding by looking at the literature base of PCAs in education from temporal,
technical, didactic, and methodological perspectives. With this approach, it was possible to show the state
of the art and outline certain trends present in research projects on PCAs. This already provides a good
overview of the state of the literature on PCAs and the potential of PCAs. Nevertheless, comprehensive
consideration of the detailed aspects of such agents is lacking. Building on this consideration, Wellnhammer
et al. (2020) developed a morphological box, based on the parameters Technical, Didactical, Purpose,
Speech, and Physical, for PCAs to evaluate the application of a PCA and identify outcome-related aspects
of it.
Indeed, current reviews lack a comprehensive and robust structuring for design elements of PCAs. Research
is scattered across various technical and sociotechnical perspectives, resulting in a pressing lack of an
integrative perspective. For example, Hobert and Wolff (2019) did not derive clear characteristics and
dimensions of PCAs in education in their systematic literature review but focused primarily on highlighting
different trends. The morphological approach of Wellnhammer et al. (2020) provides a good overview of
design elements and outcome variables, but the work is subject to some limitations. The paper did not
conduct a systematic literature search (Vom Brocke et al. 2015; Webster and Watson 2002) and identified
relevant papers only through a search engine, which is why some interesting papers are missing. Also, the
focus is only on learning scenarios that complement a classroom interaction, limiting the work's
applicability. Other work, such as that by Diederich et al. (2019) and Janssen et al. (2020), does not focus on any
particular application domain and therefore does not lend itself to the creation of pedagogical
agents in education. In this regard, we want to follow
an interactive learning perspective based on the sociotechnical system view, as it allows classifying a given
IS into relevant elements (People, Task, Structure, and Technology) that can eventually lead to different
configurations and outcomes (Bostrom and Heinen 1977; Gupta and Bostrom 2009). PCAs can be
intuitively classified as IS. However, they represent a novel form of information system
characterized by a high degree of interaction and intelligence from a sociotechnical perspective (Maedche
et al. 2019). To obtain a systematic classification of the objects to be analyzed, we propose developing a
taxonomy (Nickerson et al. 2013).
Consequently, a systematic classification of PCAs and their characteristics would allow researchers and
practitioners to design, evaluate, and compare PCAs more effectively. Building on this taxonomy, it is
possible to theorize how different technological embeddings of the young field of PCAs in education affect
student learning outcomes in each pedagogical scenario and task. In our opinion, the outcomes of PCA
use are also based primarily on the interactive learning that PCAs enable. Therefore, we propose developing
the taxonomy based on the ICAP framework and supporting researchers in advancing the theorization of
design elements in the field of PCAs, in view of interactive learning (Chi and Wylie 2014). Therefore, we
aim to fill the presented literature gap by developing a novel taxonomy that provides deeper insights into
PCAs and presents to both practitioners and researchers the outcome variables of different design elements
to specify the outcomes of an educational scenario.
Methodology
As has been shown, a fundamental problem in PCA research related to design elements and outcome
variables of PCA use is the lack of a precise classification of PCAs into meaningful categories. To
systematically classify the different PCAs, we are guided by an established taxonomy development method (Nickerson et al. 2013). Classifications are
helpful for researchers and practitioners because they allow structuring novel and complex domains, which
is especially important for young and emerging research fields such as research with PCAs in education.
Through systematic classification, the relationships between different elements of a phenomenon are
revealed transparently and coherently, and clues to the respective theoretical basis can be derived.
Taxonomies can, therefore, also serve as input for advancing theoretical knowledge (Bailey 1994), as
illustrated by our conceptualization, which examines the influence of PCA design elements on outcome
variables of PCA use. Consequently, we develop the taxonomy according to a process that is divided into
four phases (Table 1):
| Research Phase | Method | Activities | Sources |
|---|---|---|---|
| 1. Taxonomy database creation | Systematic literature review (Vom Brocke et al. 2015; Webster and Watson 2002) | (1) Literature analysis and search in the fields of HCI, IS, and education; (2) analysis of the literature on interactive learning, educational environment, and primary learning outcomes | PCA literature |
| 2. Taxonomy development | Taxonomy development (Nickerson et al. 2013) | (1) Definition of characteristics; (2) iterative taxonomy development until requirements are met | Existing classifications, database on PCA primary outcomes |
| 3. Taxonomy evaluation | Evaluation (Szopinski et al. 2019) | (1) Evaluation of dimensions and characteristics with experts based on different criteria | Semi-structured interviews with experts (Galetta 2013) |
| 4. Taxonomy application | Analysis of the database (Jeyaraj et al. 2006) | (1) Identification of relationships between pedagogical conversational agent characteristics / design elements and outcome variables | PCA literature and taxonomy |

Table 1. Overview of the four research phases
Phase 1: Database Creation Through a Systematic Literature Review
To analyze the PCA literature, we conducted a systematic literature review according to the principles of
Webster and Watson (2002). To specify the search process, the dimensions process, source, coverage, and
technique according to Cooper (1998) are deployed (Vom Brocke et al. 2015). We used a comprehensive set
of techniques to provide a basis for developing and conceptualizing the taxonomy (i.e., keyword search,
backward search, and forward search). To achieve a high degree of reproducibility and transparency, we
describe in this section the methodological steps we took:
Selection of search string: We chose a broad search string to identify a complete base of literature on
PCAs in the education setting. Based on recent literature reviews (Winkler and Soellner 2018), we identified
different keywords that researchers use to describe PCAs. This resulted in the following search string:
(("conversational agent" OR chatbot OR "smart personal assistant") AND (education OR learning OR
pedagogical)). We used all variations of the keywords such as singular, plural, with, or without a hyphen to
generate further input. We identified three areas for deriving studies on PCAs: IS, HCI, and
Educational Technology, as these strands of literature contain the significant work on PCAs in education.
Table 2 summarizes the database hits and the relevant papers.
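The keyword-variation step described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' actual tooling; it naively pluralizes and hyphenates every keyword, which is sufficient to show how the search string was expanded:

```python
# Base keywords from the reported search string:
AGENT_TERMS = ["conversational agent", "chatbot", "smart personal assistant"]
CONTEXT_TERMS = ["education", "learning", "pedagogical"]

def variants(term):
    """Naive singular/plural and hyphenation variants of a keyword."""
    forms = {term, term + "s"}
    if " " in term:
        forms.add(term.replace(" ", "-"))
    return sorted(forms)

def build_query():
    """Assemble an expanded boolean query of the reported form."""
    agent_part = " OR ".join(f'"{v}"' for t in AGENT_TERMS for v in variants(t))
    context_part = " OR ".join(v for t in CONTEXT_TERMS for v in variants(t))
    return f"(({agent_part}) AND ({context_part}))"

print(build_query())
```

In practice, each database front end accepts such a boolean string with minor syntactic differences, which is why the keyword variants are generated rather than listed by hand.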
Search string: ("conversational agent" OR chatbot OR "smart personal assistant") AND (education OR learning OR pedagogical)

| Database | Total hits (EN) | Relevant after examining abstract | Duplicates | Total hits without duplicates | Relevant papers after deeper analysis |
|---|---|---|---|---|---|
| IEEEXplore | 453 | 38 | 0 | 38 | 28 |
| ACM Digital Library | 9 | 8 | 0 | 8 | 6 |
| ScienceDirect | 79 | 11 | 0 | 11 | 8 |
| AISeL | 37 | 37 | 0 | 37 | 28 |
| EBSCO Business Source Ultimate | 295 | 34 | 6 | 28 | 22 |
| Sum | 873 | 128 | 6 | 122 | 92 |

Table 2. Overview of database hits and the further development
Selection of papers: When searching by title, abstract, and keywords of the papers, the outlet-based
search yields 873 hits. This number still includes literature that is not relevant to this work. In a first
screening process, the identified papers are analyzed based on their abstracts. Only papers related to any
type of PCAs and providing information on primary outcomes from PCA use, as the central focus concept
and unit of analysis of the papers, were considered. Additionally, only papers on PCAs that explicitly address
education were considered so that the scope of PCAs could be as narrow as possible. Of the 122 papers found
to be appropriate, 30 were excluded during deep analysis because they dealt only
theoretically with the use of PCAs in teaching (Ondáš et al. 2019). This resulted in 92 papers.
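The screening funnel can be checked against the per-database counts reported in Table 2. The sketch below tallies the reported numbers and reproduces the totals; the dictionary simply transcribes the table:

```python
# Per-database screening counts from Table 2:
# (total hits, relevant after abstract screening, duplicates, relevant after deep analysis)
SCREENING = {
    "IEEEXplore":                     (453, 38, 0, 28),
    "ACM Digital Library":            (9,   8,  0, 6),
    "ScienceDirect":                  (79,  11, 0, 8),
    "AISeL":                          (37,  37, 0, 28),
    "EBSCO Business Source Ultimate": (295, 34, 6, 22),
}

def funnel_totals(screening):
    """Sum the screening funnel across databases."""
    hits = sum(row[0] for row in screening.values())
    after_abstract = sum(row[1] for row in screening.values())
    duplicates = sum(row[2] for row in screening.values())
    without_duplicates = after_abstract - duplicates
    final = sum(row[3] for row in screening.values())
    return hits, after_abstract, duplicates, without_duplicates, final

print(funnel_totals(SCREENING))  # (873, 128, 6, 122, 92)
```

The totals match the reported funnel: 873 hits, 128 relevant abstracts, 6 duplicates, 122 candidates, and 92 papers after deep analysis.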
Analysis of papers: The 92 relevant papers are analyzed from a concept-centered perspective based on
an abductive approach. We developed a list to describe the coding to aggregate the findings from the
identified works on PCAs in education. In addition, we first identified design elements of PCAs (i.e., People,
Task, Technology, and Structure) provided by the studies, based on an interactive learning perspective with a
sociotechnical system view (Bostrom and Heinen 1977; Gupta and Bostrom 2009). Three researchers
conducted this iterative process. The process consisted of several coding rounds; two of the three
researchers independently coded the first twenty selected articles in the first coding round. For each of the
20 papers, the researchers derived different design elements assigned to the growing list of descriptions
and variables. Then, these researchers met to discuss their findings. If the results differed, a
third researcher was brought in to discuss the differences. Thus, new variables and descriptions were added
in each iteration until all papers were coded.
In some cases, the coding and lists were identical, while other variables required more thought. Both coders
discussed those that were not identified until consistent variables could be determined. Next, we re-
examined the original subset of 20 items. In the subsequent iterations, two researchers independently
coded the rest of the articles and met more frequently with the third researcher to discuss intermediate
results. Different outcome variables of PCA use were also coded and assigned to the growing list of variables
and descriptions, following the same procedure, until all papers were coded.
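The paper does not report a quantitative agreement statistic for the two independent coders. As an illustrative sketch only (the codings below are invented), Cohen's kappa is one standard way such inter-coder agreement could be quantified before the discussion rounds:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two coders' nominal labels over the same items."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    # Chance agreement from each coder's marginal label frequencies:
    expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(labels_a) | set(labels_b))
    return (observed - expected) / (1 - expected)

# Hypothetical "Role of PCA" codings from two coders for six papers:
coder1 = ["tutor", "peer", "tutor", "motivator", "tutor", "peer"]
coder2 = ["tutor", "peer", "peer", "motivator", "tutor", "tutor"]
print(round(cohens_kappa(coder1, coder2), 2))  # → 0.45
```

Disagreements (here, two of six items) would then go to the third researcher, as described above.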
Phase 2: Taxonomy Development
Our goal is to classify PCAs according to the logic of a sociotechnical system and thus be able
to obtain evidence about design elements of PCAs and the outcomes of using a PCA in an educational
setting. Therefore, we decided to develop a taxonomy of PCAs to provide a systematic representation of the
existing scientific evidence and compare this evidence with the current literature of PCAs in education to
arrive at a sound conceptual model, which also provides insight into possible design elements and outcome
variables of PCA use. The taxonomy development is based on the established approach of Nickerson et
al. (2013), which is widely used in IS research. Additionally, it supports a
systematic and step-by-step approach to taxonomy development while ensuring completeness.
In the course of taxonomy development, we first determined a meta-characteristic that reflects the purpose
of the taxonomy and on which the selection of dimensions and characteristics is based. Ultimately, we
aimed to classify PCAs in the educational environment. The taxonomy is thus aimed at researchers and
practitioners who want to design PCAs for the educational environment. Our classification should support
them by being able to derive the most important design elements from the taxonomy. All design
characteristics should be useful for fostering interactive learning with PCAs, in order to fulfill the perspective of
the ICAP framework. To account for the complex nature of PCAs, we further subdivided the dimensions of
the taxonomy into subclasses based on the sociotechnical system perspective. The subclasses are
Technology, People, Task, and Structure. The taxonomy development process continued until different
subjective and objective end conditions (EC) were reached (Nickerson et al. 2013). The objective EC
determined that development only stopped when every object could be classified under the dimensions
and characteristics; no new dimensions or characteristics were added in the final iteration step; and each
dimension and characteristic was unique in the taxonomy. Figure 1 shows how the presented taxonomy
evolved during the iterative process.
Figure 1. Taxonomy development process, based on Nickerson et al. (2013)
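The objective ending conditions described above can be expressed as a small predicate. This is a sketch of the stopping logic, not the authors' implementation; the data shapes (`taxonomy` as dimension-to-characteristics mapping, `objects` as classified PCAs) are assumptions for illustration:

```python
def objective_end_conditions_met(objects, taxonomy, previous_taxonomy):
    """Objective ending conditions after Nickerson et al. (2013):
    every object classifiable, nothing added in the last iteration,
    and no duplicated characteristics within a dimension."""
    # 1. Every object can be subordinated under each dimension's characteristics:
    all_classified = all(
        obj.get(dim) in chars
        for obj in objects
        for dim, chars in taxonomy.items()
    )
    # 2. No new dimensions or characteristics were added in the final iteration:
    nothing_added = taxonomy == previous_taxonomy
    # 3. Each characteristic is unique within its dimension:
    no_duplicates = all(len(set(chars)) == len(chars) for chars in taxonomy.values())
    return all_classified and nothing_added and no_duplicates
```

In an iterative run, the loop would re-derive the taxonomy from the paper database each round and stop once this predicate (plus the subjective conditions) holds.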
Phase 3: Taxonomy Evaluation
For the quality evaluation of our taxonomy, we used semi-structured interviews according to the taxonomy
evaluation suggestions of Szopinski et al. (2019), because these suggestions take into account the criteria
recommended to ensure the quality of taxonomies: conciseness, robustness,
comprehensiveness, extensibility, and explanatory power (Nickerson et al. 2013). Conciseness describes
whether the number of dimensions is reasonable or whether the dimensions appear overwhelming
or confusing. Robustness outlines whether the dimensions and characteristics offer sufficient differentiation.
Comprehensiveness describes the ability of a taxonomy to classify all objects of a phenomenon.
Extensibility characterizes whether the taxonomy can be easily extended by adding a new dimension or a
new feature. Explanatory power describes whether a taxonomy can transparently show relationships
between different dimensions and characteristics and thus reveal previously unknown aspects of a
phenomenon. We conducted seven interviews with experts drawn from either educational research or IS
research who had expertise in CA research, CA development in practice, or taxonomy development. Table 6
(Appendix) illustrates further information on interviewed experts.
We conducted the interviews between March and April 2022 via video communication platforms with a
minimum duration of 15 minutes up to 49 minutes for the longest interview. The interview guide we used
consisted of 14 open-ended questions considering the five quality criteria elucidated above to evaluate
taxonomies. We proceeded as follows: The current taxonomy version was provided to interviewees prior to
the interview. Interviewees were asked to make notes and comments and analyze improvable areas
beforehand. Then an appointment was set, and the interview was conducted. Based on our interviews, we found
that experts rated conciseness as positive since the classification of the dimensions according to the
sociotechnical system view was perceived as clear and understandable. Most experts also felt that the
individual characteristics were clear. Except for a few suggested changes that we included, the
characteristics were considered concise and not overwhelming. Both dimensions and characteristics were
considered robust, with no overlaps or inaccuracies. The taxonomy was also rated as comprehensive
enough to describe design elements against a background of learning interaction. The experts also
confirmed the extensibility of the taxonomy. Some characteristics emerged in the interviews that could be
added to the taxonomy, which also underlines the extensibility. On the experts’ side, it was also confirmed
that the relationships between different dimensions and characteristics are transparent and can thus
provide new aspects for the design of PCAs.
Phase 4: Taxonomy Application
In the taxonomy application phase, we aim to identify the relationship between the design elements of PCAs and the possible outcome variables of their use. Our goal is to show possible connections between the dimensions and characteristics of the taxonomy and the presented outcomes, based on empirical data from relevant work collected through the literature search and embedded in the taxonomy. Consequently, we created a pedagogical conversational agent taxonomy and identified the relationships between a design element and a dependent outcome variable. Following the approach of Jeyaraj et al. (2006), we then assessed the relationship between a design element (independent variable) and a specific outcome (dependent variable). A relationship was assigned when a significant positive effect (p < 0.01) was reported.
Taxonomy of Pedagogical Conversational Agents
This section presents the preliminary final version of our taxonomy after five iterations, evaluated and revised based on feedback from the semi-structured expert interviews. According to the literature reviewed, all the design elements presented are central to the representation of PCAs in the educational setting. In the following, we show the different dimensions and characteristics (Table 3).
Dimensions and Characteristics

Technology
PCA Design: Embodied PCA; Personification; Non-Visual
Programming (backend): Rule-Based; Learning-Based
Interaction Design Input: Text; Speech; Buttons; Combined
Interaction Design Output: Visual; Speech; Multimodal
PCA Embedding: Native Application/Web-based; Embedded in Social Media; Embedded in Smart Assistants; Embedded in Local Desktop App

Task / People
Role of PCA: Acts as a Motivator; Acts as a Tutor; Acts as a Peer; Mixed
Expected Primary Outcome: Factual Knowledge; Conceptual Knowledge; Procedural Knowledge; Metacognitive Knowledge; Perception Measures
Target Group: Kindergarten and Elementary School; High School; Higher Education; Continuous Education; Cross-Level Education

Structure
Domain of Use: Linguistics; Psychology; Computer Science and Engineering; Economics and Social Studies; Humanities; Mathematics; Law; Natural Science; Special Needs; Cross-Domain
Facets of Learning Process: Preparation; Initial/Actual Learning; Practice and Repeat; Reflection

Table 3. Taxonomy of Pedagogical Conversational Agents
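To illustrate how the taxonomy can be applied, the dimensions and characteristics of Table 3 can be encoded as a simple lookup structure. The following Python sketch is our own illustration (the validation helper and the example PCA are hypothetical) and shows only a subset of the dimensions.

```python
# Illustrative sketch: a subset of the taxonomy from Table 3 as a dictionary of
# dimensions and admissible characteristics, used to validate the coding of a
# concrete PCA. Dimension and characteristic names follow the paper; the
# helper function and example are hypothetical.
TAXONOMY = {
    "PCA Design": {"Embodied PCA", "Personification", "Non-Visual"},
    "Programming": {"Rule-Based", "Learning-Based"},
    "Interaction Design Input": {"Text", "Speech", "Buttons", "Combined"},
    "Interaction Design Output": {"Visual", "Speech", "Multimodal"},
    "Role of PCA": {"Acts as a Motivator", "Acts as a Tutor", "Acts as a Peer", "Mixed"},
}

def classify(pca: dict) -> bool:
    """Return True if every coded characteristic is admissible in its dimension."""
    return all(value in TAXONOMY[dimension] for dimension, value in pca.items())

example_pca = {"PCA Design": "Non-Visual", "Programming": "Rule-Based",
               "Interaction Design Input": "Text", "Role of PCA": "Acts as a Tutor"}
print(classify(example_pca))  # True
```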
The design elements of PCAs in the educational environment can be divided into the dimensions Technology, People, Task, and Structure. The Technology dimension encompasses the PCA's technological design, such as the design of the PCA itself or the design of the interaction with the user. The Task dimension circumscribes the actual task that the PCA is intended to perform in training, both from the perspective of the role that the PCA takes in the learning environment and from the perspective of the output that the use
of the PCA is intended to provide (Anderson and Bloom 2001). The Structure dimension covers the user's
connection to the systems. It ultimately specifies the place/domain of use and the type of application of the
PCA, e.g., learning a new fact (Initial / Actual Learning). Due to similarities in content and for a better
overview of the taxonomy, the dimensions Task and People were combined. The division into the four
perspectives, which arise against the background of a sociotechnical system, is intended to extend the
taxonomy in terms of theorizing about the effect of these dimensions on the design of a PCA in the learning
environment and the possible outcome variables. Therefore, we aim to provide a precise and unambiguous
description of the different classifications to obtain a robust categorization of the identified design
elements. In the following, the dimensions with the corresponding characteristics will be explained.
Technology
According to our analysis, the dimension Technology can be further divided into the five sub-dimensions PCA Design, Programming, Interaction Design Input, Interaction Design Output, and PCA Embedding.
PCA Design - This first dimension shows how a PCA interface can be presented and illustrates the extent to which PCAs have visual features in the form of static, animated, or reactive avatars (Nunamaker et al. 2011). During our research, we identified two visual manifestations of PCAs: Embodied PCAs and personified PCAs (Personification). An Embodied PCA includes three-dimensional physical elements of a character such as a face, body, or extremities (Oker et al. 2020). A PCA belonging to the characteristic Personification has some form of visual image of a character to interact with; this two-dimensional character can be an image of a person, an animal, or another figure (Mejbri et al. 2017; Ruan et al. 2020). All PCAs that do not fit into one of the previously mentioned characteristics were assigned to Non-Visual. This characteristic describes PCAs that have no visible aspects at all and thus no visual character with which the user can interact (Nguyen et al. 2019; Winkler et al. 2021).
Programming - This dimension describes the back-end programming of the PCA. It illustrates the underlying cognitive system design, defining the technical principles under which a PCA interacts, processes information, and/or selects an action or response (Diederich et al. 2019; Knote et al. 2021). In this paper, we distinguished between Rule-Based and Learning-Based PCAs. While Rule-Based PCAs are usually less adaptive and follow a pre-defined pattern or decision tree to interact with the user, Learning-Based PCAs learn and adapt over time through machine learning and artificial intelligence, resulting in a more flexible and adaptive interaction (Kontogiorgos et al. 2019).
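The distinction can be illustrated with a minimal sketch; the rules and responses below are hypothetical and only mimic how a Rule-Based PCA selects a response from a pre-defined pattern, whereas a Learning-Based PCA would instead derive its response from a model trained on data.

```python
# Minimal, hypothetical sketch of a Rule-Based PCA: responses are selected by
# matching the user input against a fixed, pre-defined rule table. A
# Learning-Based PCA would replace this lookup with a trained model.
RULES = {
    "hello": "Hi! Which topic would you like to practice today?",
    "help": "You can ask me to explain a concept or to give you an exercise.",
}

def rule_based_reply(user_input: str) -> str:
    # Walk the rule table in order and return the first matching response.
    for trigger, response in RULES.items():
        if trigger in user_input.lower():
            return response
    return "Sorry, I did not understand that. Type 'help' for options."

print(rule_based_reply("Hello there"))
```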
Interaction Design Input - The interaction design describes the primary way a user interacts with a PCA and vice versa. While the interaction with PCAs is primarily text-based or voice-based, we identified other forms of communication, such as haptic input (Pfeuffer et al. 2019), which we define as Buttons. The characteristic Interaction Design Input focuses on the input side and describes the different possibilities for the user to enter information into the PCA. First, information can be entered through text. Second, information can be entered voice-based. Third, information can be transmitted through predefined buttons and text presented on the screen (Fadhil and Villafiorita 2017). The fourth method is a combination of the previously mentioned input possibilities.
Interaction Design Output - This dimension focuses on the different possibilities for the PCA to transmit information to the user. First, the output of the PCA can be presented in a Visual form; the information is displayed through text or visual elements such as emojis or images. Second, the output of the PCA can be transmitted through Speech, so that the user receives a spoken response from the PCA. Third, there are Multimodal PCAs, where the output is transferred through a combination of text, visual elements such as emojis, images, or videos, and speech.
PCA Embedding - This dimension describes the service channel in which the PCA is embedded. While some PCAs are integrated within social media applications, the vast majority are embedded in websites or stand-alone native applications (Janssen et al. 2020). The characteristic Native Application/Web-based describes PCAs running on a server, platform, or native application. Second, Embedded in Social Media describes PCAs embedded in a social media platform such as Facebook or Instagram or in a messenger application such as Telegram, WhatsApp, Messenger, or Line (Wu et al. 2020). Third, Embedded in Smart Assistants describes PCAs embedded in a smart assistant such as Amazon Alexa, Google Assistant, or Siri (Winkler and Söllner 2020). Last, the characteristic Embedded in Local Desktop App covers PCAs installed as a program on a local computer.
Task / People
The dimension Task/People is further divided into the three sub-dimensions Role of PCA, Expected
Primary Outcome, and Target Group.
Role of PCA - This dimension describes the fundamental function of the PCA. First, we have the characteristic Acts as a Tutor, where the PCA takes the instructor's role to teach the user (Amato et al. 2019). For the second characteristic, we defined Acts as a Peer; PCAs belonging to this characteristic are used as a transmitter of information (Winkler et al. 2021). The third characteristic is called Acts as a Motivator, and its main purpose is to encourage the user to engage, learn, or participate (Moridis and Economides 2012). Last, we defined the characteristic Mixed for PCAs that combine the previously mentioned roles.
Expected Primary Outcome - This dimension describes a PCA designer's main area of interest. Learning something requires knowledge to form specific knowledge dimensions and includes several cognitive processes (Anderson and Krathwohl 2001). Therefore, a designer needs to address the adequate knowledge dimension with their PCA. Anderson and Krathwohl (2001) defined four fundamental knowledge domains according to Bloom et al. (1956). These categories are assumed to lie along a continuum from concrete Factual Knowledge to abstract Metacognitive Knowledge, while the Conceptual and Procedural categories overlap in terms of abstractness, with some procedural knowledge being more concrete than the most abstract conceptual knowledge (Anderson and Krathwohl 2001, p. 5). The characteristic Factual Knowledge describes basic elements within a discipline, such as general knowledge, vocabulary, and signs. The second characteristic, Conceptual Knowledge, describes interrelationships among basic elements within a larger structure, such as theories, structures, and rules. Third, we defined Procedural Knowledge as a characteristic of interest, which describes how to do something and how to apply a theory. The fourth characteristic is Metacognitive Knowledge, which describes knowledge of one's own cognition, situation, and perception (Anderson and Krathwohl 2001). Last, we have the characteristic Perception Measures, where perceptions such as motivation, engagement, productivity, and collaboration are of interest.
Target Group - This dimension describes the intended target group of the PCA. First, we have the characteristic Kindergarten and Elementary School, where we included children from the age of three up to the end of elementary school. Second, we assigned high school students to the characteristic High School. The characteristic Higher Education includes college, university, and graduate school students. Fourth, we described the characteristic Continuous Education, which covers persons focusing on further education outside the regular educational system. Last, we defined the characteristic Cross-Level Education for papers where we found multiple possible levels of application.
Structure
The dimension Structure is further divided into the two sub-dimensions, Domain of Use and Facets of
Learning Process.
Domain of Use - This dimension describes the specific domain in which the PCA is used or whose topics the PCA teaches. First, we have the characteristic Linguistics, which includes PCAs for language learning; Humanities, covering cultural studies, religion, or philosophy; and Natural Science, covering physics, biology, or chemistry. Other domains of interest are described in the characteristics Law, Computer Science and Engineering, Economics and Social Studies, Psychology, and Mathematics. There is also a characteristic called Special Needs for people with special needs or disabilities. Last, we defined the characteristic Cross-Domain, which includes papers where the domain is not specified or multiple domains are targeted.
Facets of Learning Process - We built the characteristics of this dimension according to the didactical learning phases by Roth (1963), which comprise Motivation, Difficulty, Solution, and Practice. To better represent the different learning phases of PCA-supported learning, we redefined and combined these phases into Preparation, Initial/Actual Learning, Practice and Repeat, and Reflection.
Findings and Discussion
In the following, we present the results that emerged from the taxonomy and the literature research. We use a descriptive analysis to explain the results of the literature search and the presented taxonomy in more detail. Based on an analysis of our outcome variables, we derive trends regarding the effectiveness of different design elements and characteristics of PCAs. The different outcome variables result from the dimensions perception, cognition, performance, and technology acceptance. We have visualized all outcome variables and their composition in Table 4. The table shows how often the corresponding outcome variables could be significantly detected (in relation to all analyzed papers). First, we show the perceptual results. Research supports the view that perceptual constructs such as motivation, engagement, or productivity have an overall positive effect on learning outcomes (Ryan and Deci 2000). Following Kraiger et al. (1993), we combined engagement, involvement, and motivation into the affective outcome variables. Outcome variables such as intention to use and ease of use were assigned to the technology acceptance model following Venkatesh and Bala (2008). Other variables such as level of enjoyment, satisfaction, and usefulness were mapped to learning satisfaction, while willingness to communicate and cooperation were unified under the variable collaboration¹. The cognitive outcomes are composed of the highly relevant characteristics factual knowledge, conceptual knowledge, procedural knowledge, and metacognitive knowledge of the Expected Primary Outcome dimension according to Bloom et al. (1956) and Anderson and Krathwohl (2001). Third, we defined the performance outcome category, consisting of outcome variables such as grade improvement or overall learning performance. Various studies indicate that a student's performance can be improved by using PCAs (e.g., Yin et al. 2021).
Outcome Variables** | Share of measured outcome variables | n

Perceptual Measures
Affective Outcome | 17% | 8
Technology Acceptance | 9% | 4
Satisfaction with the Learning Progress | 4% | 2
Collaboration | 13% | 6

Cognitive Outcome Variables
Learning Outcome Factual Knowledge | 7% | 3
Learning Outcome Conceptual Knowledge | 4% | 2
Learning Outcome Procedural Knowledge | 9% | 4
Learning Outcome Metacognitive Knowledge | 20% | 9

Performance Outcome Variables
Learning Performance / Grade | 17% | 8

** All outcome variables shown in the table were detected with a significance of p < 0.01.
Table 4. Overview of key outcome variables of PCA use
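As a cross-check, the percentages in Table 4 are consistent with each count being divided by the total of 46 significantly measured outcome variables; a short sketch, assuming exactly this denominator:

```python
# Cross-check of Table 4, assuming each percentage is the count of papers
# reporting that outcome divided by the 46 measured outcome variables in total.
counts = {
    "Affective Outcome": 8, "Technology Acceptance": 4,
    "Satisfaction with the Learning Progress": 2, "Collaboration": 6,
    "Factual Knowledge": 3, "Conceptual Knowledge": 2,
    "Procedural Knowledge": 4, "Metacognitive Knowledge": 9,
    "Learning Performance / Grade": 8,
}
total = sum(counts.values())  # 46 outcome variables overall
for name, n in counts.items():
    print(f"{name}: {round(100 * n / total)}%")
```

Rounding each share to a whole percentage reproduces the values reported in Table 4 (e.g., 8/46 gives 17% and 9/46 gives 20%).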
To answer the second research question (RQ2), we present below the influence of the specific design elements and characteristics of PCAs on the outcome variables from Table 4. The results of the influence of each design element and characteristic on the outcome variables are presented in Table 5. A descriptive evaluation shows that 63.3% of all analyzed papers report significant effects on perceptual outcome variables. The elements Rule-Based Programming, Combined Interaction Design Input, PCAs Embedded in Social Media, and PCA Acts as a Motivator significantly positively impact affective outcome variables in seven out of ten cases. Overall, the use of PCAs seems to positively impact affective outcome variables such as motivation or engagement through the use of the above-mentioned characteristics. The increase in engagement is an interesting observation: following the ICAP framework by Chi and Wylie (2014), this increased engagement may have a positive impact on learning outcomes. Based on our results, we can draw a first preliminary conclusion that the interactive design of PCAs can promote user engagement and thus improve learning outcomes. For cognitive outcome variables, 56.7% of all relevant papers report significant effects. Non-Visual PCA Design, Learning-Based Programming, Text-Based Interaction Design Input, Multimodal Interaction Design Output, and PCAs Embedded in Smart Assistants significantly influence the cognitive outcome. An exception is conceptual knowledge, which could only be improved by PCAs at a significant level in about 50% of the cases. In general, however, we can see that PCAs designed to improve a particular type of knowledge improve that type of knowledge at a significant
level (see Table 5). The PCAs that address performance variables show significant effects on grades or overall learning performance. Through our research, we show that 25% of all relevant papers identified and analyzed lead to significantly higher learning performance. This strengthens the assumption that PCAs are fundamentally suitable as learning tools.
¹ Most of the works have directly used the term collaboration, so it has been used as a general term (variable).
Design elements and characteristics | Representation of the characteristic in the outcome variables (percent | number of papers)

Characteristics and design elements influencing perceptual measures
Rule-Based (Programming) | 73.3% | 11
Combined (Interaction Design Input) | 77.8% | 7
Embedded in Social Media (PCA Embedding) | 100% | 3
PCA Acts as a Motivator (Role of PCA) | 100% | 1
Perception Measures (Expected Primary Outcome) | 100% | 9

Characteristics and design elements influencing cognitive outcome variables
Factual Knowledge (Expected Primary Outcome) | 100% | 3
Conceptual Knowledge (Expected Primary Outcome) | 50% | 2
Procedural Knowledge (Expected Primary Outcome) | 100% | 4
Metacognitive Knowledge (Expected Primary Outcome) | 90% | 9
Non-Visual (PCA Design) | 73.7% | 14
Learning-Based (Programming) | 73.7% | 11
Text-Based (Interaction Design Input) | 76.9% | 10
Multimodal (Interaction Design Output) | 75% | 3
Embedded in Smart Assistants (PCA Embedding) | 75% | 3

Table 5. Influence of PCA characteristics and design elements on outcome variables
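The counts and shares in Table 5 also allow a rough back-calculation of how many papers were coded with each characteristic (count divided by share); a short sketch, assuming the shares are rounded values:

```python
# Back-of-the-envelope check on Table 5: the implied number of papers coded
# with a characteristic follows from the reported count and percentage
# (count / share), assuming the percentages are rounded.
rows = {
    "Rule-Based (Programming)": (11, 0.733),
    "Combined (Interaction Design Input)": (7, 0.778),
    "Non-Visual (PCA Design)": (14, 0.737),
    "Text-Based (Interaction Design Input)": (10, 0.769),
}
for name, (count, share) in rows.items():
    print(f"{name}: {count} of ~{round(count / share)} papers")
```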
Conclusion and Further Research
The presented work provides a deeper insight into the young research field of PCAs. Its main focus is to present a taxonomy of design elements for PCAs. However, several limitations of our work must be mentioned. One limitation arises from the fact that our taxonomy only refers to content-based learning; therefore, among others, "FAQ chatbots" or bots for course evaluation were not considered (Wambsganss, Winkler, Schmid, et al. 2020; Wambsganss, Haas, et al. 2021). Another limitation arises from our point of view as IS researchers: the underlying sociotechnical systems model steers the work in a specific direction, and using a different theoretical view or a different model could produce other valuable results. Additionally, it must be mentioned that of the 92 papers identified as relevant, only 48% presented an empirical analysis of the outcome variables, and only 33% were able to present empirically significant outcome variables. Consequently, the initial promising results must be viewed against this background. The last limitation is that we did not consider ethical issues in the development of the taxonomy, although these have an impact on the diffusion and adoption of CAs or PCAs (Wambsganss et al. 2021). Nevertheless, we assume that our taxonomy and its results provide a valuable overview of the relationship between the design elements of PCAs and their outcome variables and should help researchers and practitioners develop PCAs in the future and stimulate further research. Further research is needed to derive a theoretical link between the design elements and the outcome variables. A conceptual model that explains the connection between specific design elements and their effect on student
engagement could emerge through these links and provide a theory-driven explanation for how PCAs improve learning performance.
References
Amato, F., Casillo, M., Colace, F., Santo, M. De, Lombardi, M., and Santaniello, D. 2019. “CHAT: A
Cultural Heritage Adaptive Tutor,” International Conference on Engineering, Technology and
Education (TALE), Institute of Electrical and Electronics Engineers Inc., pp. 15.
(https://doi.org/10.1109/TALE48000.2019.9225962).
Anderson, L., and Bloom, B. 2001. A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives. (http://eduq.info/xmlui/handle/11515/18345).
Anderson, L. W., and Krathwohl, D. R. 2001. A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives, New York: Addison Wesley Longman, Inc. (https://eduq.info/xmlui/handle/11515/18345).
Bailey, D. 1994. Typologies and Taxonomies: An Introduction to Classification Techniques, Newbury Park, CA: Sage.
Bloom, B., Engelhart, M., Furst, E., and Hill, W. 1956. Taxonomy of Educational Objectives: The Classification of Educational Goals: Handbook I: Cognitive Domain.
Bostrom, R. P., and Heinen, J. S. 1977. “MIS Problems and Failures: A Socio-Technical Perspective. Part I: The Causes,” MIS Quarterly (1:3), pp. 17-32. (https://doi.org/10.2307/248710).
Vom Brocke, J., Riemer, K., and Niehaves, B. 2015. “Standing on the Shoulders of Giants: Challenges and Recommendations of Literature Search in Information Systems Research,” Communications of the Association for Information Systems (37:1), pp. 205-224. (https://doi.org/10.17705/1CAIS.03709).
Cassell, J. 2000. “Embodied Conversational Interface Agents,” Communications of the ACM (43:4), pp. 70-78. (https://doi.org/10.1145/332051.332075).
Chi, M. T. H., and Wylie, R. 2014. “The ICAP Framework: Linking Cognitive Engagement to Active Learning Outcomes,” Educational Psychologist (49:4), pp. 219-243. (https://doi.org/10.1080/00461520.2014.965823).
Cooper, H. M. 1998. “Organizing Knowledge Syntheses: A Taxonomy of Literature Reviews,” Knowledge in Society (1:1), pp. 104-126. (https://link.springer.com/article/10.1007%252FBF03177550).
Dahiya, M. 2017. “A Tool of Conversation: Chatbot,” International Journal of Computer Sciences and Engineering (5:5), pp. 158-161. (https://www.researchgate.net/publication/321864990_A_Tool_of_Conversation_Chatbot).
Diederich, S., Brendel, A. B., Morana, S., and Kolbe, L. 2020. “On the Design of and Interaction with Conversational Agents: An Organizing and Assessing Review of Human-Computer Interaction Research,” Journal of the Association for Information Systems, pp. 169.
Diederich, S., Brendel, A., and Kolbe, L. 2019. “Towards a Taxonomy of Platforms for Conversational Agent Design,” Wirtschaftsinformatik 2019 Proceedings, pp. 1100-1114. (https://aisel.aisnet.org/wi2019/track10/papers/1).
Elson, J. S., Derrick, D., and Ligon, G. 2018. “Examining Trust and Reliance in Collaborations between Humans and Automated Agents,” Proceedings of the 51st Hawaii International Conference on System Sciences, pp. 430-439. (https://doi.org/10.24251/hicss.2018.056).
Eom, S. B., and Ashill, N. 2016. “The Determinants of Students’ Perceived Learning Outcomes and Satisfaction in University Online Education: An Update,” Decision Sciences Journal of Innovative Education (14:2), pp. 185-215. (https://doi.org/10.1111/dsji.12097).
Fadhil, A., and Villafiorita, A. 2017. “An Adaptive Learning with Gamification & Conversational UIs: The Rise of CiboPoliBot,” in UMAP 2017 - Adjunct Publication of the 25th Conference on User Modeling, Adaptation and Personalization, New York, NY, USA: Association for Computing Machinery, pp. 408-412. (https://doi.org/10.1145/3099023.3099112).
Fryer, L., Ainley, M., Thompson, A., and Gibson, A. 2017. “Stimulating and Sustaining Interest in a Language Course: An Experimental Comparison of Chatbot and Human Task Partners,” Computers in Human Behavior (75), pp. 461-468.
Galletta, A. 2013. Mastering the Semi-Structured Interview and Beyond: From Research Design to Analysis and Publication (Vol. 18), New York: New York University Press.
Gulz, A., Haake, M., Silvervarg, A., Sjödén, B., and Veletsianos, G. 2011. “Building a Social Conversational
Pedagogical Agent,” in Conversational Agents and Natural Language Interaction, D. Perez-Marin and I. Pascual-Nieto (eds.), IGI Global, pp. 128-155. (https://doi.org/10.4018/978-1-60960-617-6.ch006).
Gupta, S., and Bostrom, R. P. 2009. “Technology-Mediated Learning: A Comprehensive Theoretical Model,” Journal of the Association for Information Systems (10:9), pp. 686-714. (https://doi.org/10.17705/1jais.00207).
Hobert, S. 2019. “Say Hello to ‘Coding Tutor’! Design and Evaluation of a Chatbot-Based Learning System Supporting Students to Learn to Program,” ICIS 2019 Proceedings, pp. 117. (https://aisel.aisnet.org/icis2019/learning_environ/learning_environ/9).
Hobert, S., and Wolff, R. M. von. 2019. “Say Hello to Your New Automated Tutor – A Structured Literature Review on Pedagogical Conversational Agents,” Wirtschaftsinformatik 2019 Proceedings, pp. 301-314. (https://aisel.aisnet.org/wi2019/track04/papers/2).
Hone, K. S., and El Said, G. R. 2016. “Exploring the Factors Affecting MOOC Retention: A Survey Study,” Computers and Education (98), pp. 157-168. (https://doi.org/10.1016/j.compedu.2016.03.016).
Hoy, M. B. 2018. “Alexa, Siri, Cortana, and More: An Introduction to Voice Assistants,” Medical Reference Services Quarterly (37:1), pp. 81-88. (https://doi.org/10.1080/02763869.2018.1404391).
Janssen, A., Passlick, J., Rodríguez Cardona, D., and Breitner, M. H. 2020. “Virtual Assistance in Any Context: A Taxonomy of Design Elements for Domain-Specific Chatbots,” Business & Information Systems Engineering.
Jeyaraj, A., Rottman, J. W., and Lacity, M. C. 2006. “A Review of the Predictors, Linkages, and Biases in IT Innovation Adoption Research,” Journal of Information Technology, pp. 123. (https://doi.org/10.1057/palgrave.jit.2000056).
De Keyser, A., Köcher, S., Alkire (née Nasr), L., Verbeeck, C., and Kandampully, J. 2019. “Frontline Service Technology Infusion: Conceptual Archetypes and Future Research Directions,” Journal of Service Management (30:1), pp. 156-183. (https://doi.org/10.1108/JOSM-03-2018-0082).
Kim, N.-Y. 2018. “A Study on Chatbots for Developing Korean College Students’ English Listening and Reading Skills,” Journal of Digital Convergence (16:8), pp. 19-26. (https://doi.org/10.14400/JDC.2018.16.8.019).
Kim, S., Lee, J., and Gweon, G. 2019. “Comparing Data from Chatbot and Web Surveys: Effects of Platform and Conversational Style on Survey Response Quality,” ACM, pp. 1-12. (https://doi.org/10.1145/3290605.3300316).
Knote, R., Janson, A., Söllner, M., and Leimeister, J. M. 2021. “Value Co-Creation in Smart Services: A Functional Affordances Perspective on Smart Personal Assistants,” Journal of the Association for Information Systems (22:2), pp. 418-458. (https://doi.org/10.17705/1jais.00667).
Kontogiorgos, D., Pereira, A., Andersson, O., Koivisto, M., Rabal, E. G., Vartiainen, V., and Gustafson, J. 2019. “The Effects of Anthropomorphism and Non-Verbal Social Behaviour in Virtual Assistants,” in IVA 2019 - Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents, New York, NY, USA: Association for Computing Machinery, pp. 133-140. (https://doi.org/10.1145/3308532.3329466).
Kowatsch, T., Nißen, M., Shih, C.-H. I., Rüegger, D., Volland, D., Filler, A., Künzler, F., Barata, F., Büchter, D., Brogle, B., Heldt, K., Gindrat, P., Farpour-Lambert, N., and l’Allemand, D. 2017. Text-Based Healthcare Chatbots Supporting Patient and Health Professional Teams: Preliminary Results of a Randomized Controlled Trial on Childhood Obesity. (https://www.researchgate.net/publication/320161507).
Kraiger, K., Ford, J. K., and Salas, E. 1993. “Application of Cognitive, Skill-Based, and Affective Theories of Learning Outcomes to New Methods of Training Evaluation,” Journal of Applied Psychology (78:2), pp. 311-328. (https://doi.org/10.1037/0021-9010.78.2.311).
Kulik, J. A., and Fletcher, J. D. 2016. “Effectiveness of Intelligent Tutoring Systems,” Review of Educational Research (86:1), pp. 42-78. (https://doi.org/10.3102/0034654315581420).
Laurillard, D. 2013. Rethinking University Teaching: A Conversational Framework for the Effective Use of Learning Technologies (2nd ed.), London: Routledge.
(https://doi.org/10.4324/9781315012940).
Lundqvist, K. O., Pursey, G., and Williams, S. 2013. “Design and Implementation of Conversational Agents
for Harvesting Feedback in ELearning Systems,” in Lecture Notes in Computer Science (Including
Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8095
LNCS), pp. 617618. (https://doi.org/10.1007/978-3-642-40814-4_79).
Maedche, A., Legner, C., Benlian, A., Berger, B., Gimpel, H., Hess, T., Hinz, O., Morana, S., and Söllner,
M. 2019. “AI-Based Digital Assistants: Opportunities, Threats, and Research Perspectives,” Business
and Information Systems Engineering (61:4), Gabler Verlag, pp. 535544.
(https://doi.org/10.1007/s12599-019-00600-8).
Matsushita, M., Maeda, E., and Kato, T. 2004. “An Interactive Visualization Method of Numerical Data
Based on Natural Language Requirements,” International Journal of Human Computer Studies
(60:4), Academic Press, pp. 469488. (https://doi.org/10.1016/j.ijhcs.2003.11.004).
McQuiggan, S. W., and Lester, J. C. 2007. “Modeling and Evaluating Empathy in Embodied Companion
Agents,” International Journal of Human Computer Studies (65:4), Academic Press, pp. 348360.
(https://doi.org/10.1016/j.ijhcs.2006.11.015).
Mejbri, N., Essalmi, F., and Rus, V. 2017. “Educational System Based on Simulation and Intelligent
Conversation,” in International Conference on Information and Communication Technology and
Accessbility, ICTA (Vol. 2017-Decem), Institute of Electrical and Electronics Engineers Inc., April 10,
pp. 16. (https://doi.org/10.1109/ICTA.2017.8336020).
Morana, S., Gnewuch, U., and Jung, D. 2020. “The Effect of Anthropomorphism on Investment Decision-
Making with Robo-Advisor Chatbots,” Twenty-Eigth European Conference on Information Systems,
pp. 118. (https://www.researchgate.net/publication/341277570).
Moridis, C. N., and Economides, A. A. 2012. “Affective Learning: Empathetic Agents with Emotional Facial
and Tone of Voice Expressions,” IEEE Transactions on Affective Computing (3:3), pp. 260–272.
(https://doi.org/10.1109/T-AFFC.2012.6).
Nguyen, H. D., Pham, V. T., Tran, D. A., and Le, T. T. 2019. “Intelligent Tutoring Chatbot for Solving
Mathematical Problems in High-School,” in International Conference on Knowledge and Systems
Engineering, Institute of Electrical and Electronics Engineers Inc., October 1, pp. 1–6.
(https://doi.org/10.1109/KSE.2019.8919396).
Nickerson, R. C., Varshney, U., and Muntermann, J. 2013. “A Method for Taxonomy Development and Its
Application in Information Systems,” European Journal of Information Systems (22:3), Palgrave
Macmillan Ltd., pp. 336–359. (https://doi.org/10.1057/ejis.2012.26).
Nunamaker, J., Derrick, D., Elkins, A., Burgoon, J., and Patton, M. 2011. “Embodied Conversational
Agent-Based Kiosk for Automated Interviewing,” Journal of Management Information Systems
(28:1), Routledge, pp. 17–48. (https://doi.org/10.2753/MIS0742-1222280102).
Nuruzzaman, M., and Hussain, O. K. 2018. “A Survey on Chatbot Implementation in Customer Service
Industry through Deep Neural Networks,” in Proceedings - 2018 IEEE 15th International Conference
on e-Business Engineering, ICEBE 2018, Institute of Electrical and Electronics Engineers Inc.,
December 27, pp. 54–61. (https://doi.org/10.1109/ICEBE.2018.00019).
Oker, A., Pecune, F., and Declercq, C. 2020. “Virtual Tutor and Pupil Interaction: A Study of Empathic
Feedback as Extrinsic Motivation for Learning,” Education and Information Technologies (25),
Springer, pp. 3643–3658. (https://doi.org/10.1007/s10639-020-10123-5).
Ondáš, S., Pleva, M., and Hládek, D. 2019. “How Chatbots Can Be Involved in the Education Process,”
International Conference on Emerging E-Learning Technologies and Applications (ICETA), pp. 575–
580. (https://ieeexplore.ieee.org/abstract/document/9040095).
Pérez, J., Cerezo, E., Seron, F., and Rodríguez, L.-F. 2016. “A Cognitive-Affective Architecture for ECAs,”
Biologically Inspired Cognitive Architectures (18), pp. 33–40.
Pfeuffer, N., Benlian, A., Gimpel, H., and Hinz, O. 2019. “Anthropomorphic Information Systems,”
Business and Information Systems Engineering (61:4), Gabler Verlag, pp. 523–533.
(https://doi.org/10.1007/s12599-019-00599-y).
Quah, J. T. S., and Chua, Y. W. 2019. “Chatbot Assisted Marketing in Financial Service Industry,” in
Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and
Lecture Notes in Bioinformatics) (Vol. 11515 LNCS), Springer Verlag, June 25, pp. 107–114.
(https://doi.org/10.1007/978-3-030-23554-3_8).
Roth, H. 1963. Pädagogische Psychologie Des Lehrens Und Lernens, (7. Aufl.), Schroedel.
Ruan, S., He, J., Ying, R., Burkle, J., Hakim, D., Wang, A., Yin, Y., Zhou, L., Xu, Q., Abuhashem, A., Dietz,
G., Murnane, E. L., Brunskill, E., and Landay, J. A. 2020. “Supporting Children’s Math Learning with
Feedback-Augmented Narrative Technology,” in Proceedings of the Interaction Design and Children
Conference, IDC 2020, Association for Computing Machinery, Inc, June 21, pp. 567–580.
(https://doi.org/10.1145/3392063.3394400).
Ruan, S., Jiang, L., Xu, J., Tham, B. J. K., Qiu, Z., Zhu, Y., Murnane, E. L., Brunskill, E., and Landay, J. A.
2019. “QuizBot: A Dialogue-Based Adaptive Learning System for Factual Knowledge,” in Conference
on Human Factors in Computing Systems - Proceedings, Association for Computing Machinery, May
2, pp. 1–13. (https://doi.org/10.1145/3290605.3300587).
Ryan, R., and Deci, E. 2000. “Self-Determination Theory and the Facilitation of Intrinsic Motivation,
Social Development, and Well-Being,” American Psychologist (55:1), pp. 68–78.
(https://psycnet.apa.org/journals/amp/55/1/68.html?uid=2000-13324-007).
Segedy, J. R., Kinnebrew, J. S., and Biswas, G. 2013. “The Effect of Contextualized Conversational
Feedback in a Complex Open-Ended Learning Environment,” Educational Technology Research and
Development (61:1), Springer Boston, pp. 71–89. (https://doi.org/10.1007/s11423-012-9275-0).
Serban, I. V., Sankar, C., Germain, M., Zhang, S., Lin, Z., Subramanian, S., Kim, T., Pieper, M., Chandar,
S., Ke, N. R., Rajeshwar, S., de Brebisson, A., Sotelo, J. M. R., Suhubdy, D., Michalski, V., Nguyen, A.,
Pineau, J., and Bengio, Y. 2017. “A Deep Reinforcement Learning Chatbot,” Computation and
Language, pp. 1–40. (http://arxiv.org/abs/1709.02349).
Smutny, P., and Schreiberova, P. 2020. “Chatbots for Learning: A Review of Educational Chatbots for the
Facebook Messenger,” Computers and Education (151), Elsevier Ltd, pp. 1–11.
(https://doi.org/10.1016/j.compedu.2020.103862).
Szopinski, D., Schoormann, T., and Kundisch, D. 2019. “Because Your Taxonomy Is Worth It: Towards a
Framework for Taxonomy Evaluation.” (https://www.researchgate.net/publication/332711034).
Venkatesh, V., and Bala, H. 2008. “Technology Acceptance Model 3 and a Research Agenda on
Interventions,” Decision Sciences (39:2), Decision Sciences Institute, pp. 273–315.
(https://doi.org/10.1111/j.1540-5915.2008.00192.x).
Wallace, R. S. 2009. “The Anatomy of A.L.I.C.E.,” in Parsing the Turing Test: Philosophical and
Methodological Issues in the Quest for the Thinking Computer, Springer Netherlands, pp. 181–210.
(https://doi.org/10.1007/978-1-4020-6710-5_13).
Wambsganß, T., Guggisberg, S., and Söllner, M. 2021. “ArgueBot: A Conversational Agent for Adaptive
Argumentation Feedback,” Wirtschaftsinformatik, pp. 1–18.
Wambsganss, T., Haas, L., and Soellner, M. 2021. “Towards the Design of a Student-Centered Question-
Answering System in Educational Settings,” European Conference on Information Systems, pp. 1–12.
Wambsganss, T., Höch, A., and Zierau, N. 2021. “Ethical Design of Conversational Agents: Towards
Principles for a Value-Sensitive Design,” in Proceedings of the 16th International Conference on
Wirtschaftsinformatik (WI), pp. 1–17.
Wambsganss, T., Kueng, T., and Soellner, M. 2021. “ArgueTutor: An Adaptive Dialog-Based Learning
System for Argumentation Skills,” in Proceedings of the 2021 CHI Conference on Human Factors in
Computing Systems, Association for Computing Machinery, pp. 1–13.
(https://doi.org/10.1145/3411764.3445781).
Wambsganss, T., Söllner, M., and Leimeister, J. 2020. “Design and Evaluation of an Adaptive Dialog-
Based Tutoring System for Argumentation Skills,” International Conference on Information Systems
(ICIS), pp. 1–17. (https://aisel.aisnet.org/icis2020/hci_artintel/hci_artintel/2/).
Wambsganss, T., Winkler, R., Schmid, P., and Söllner, M. 2020. “Designing a Conversational Agent as a
Formative Course Evaluation Tool,” 15th International Conference on Wirtschaftsinformatik, pp.
1–16. (https://doi.org/10.30844/wi_2020_k7-wambsganss).
Wambsganss, T., Winkler, R., Söllner, M., and Leimeister, J. M. 2020. “A Conversational Agent to Improve
Response Quality in Course Evaluations,” Extended Abstracts of the 2020 CHI Conference on Human
Factors in Computing Systems, Association for Computing Machinery, pp. 1–9.
(https://doi.org/10.1145/3334480.3382805).
Webster, J., and Watson, R. T. 2002. “Analyzing the Past to Prepare for the Future: Writing a Literature
Review,” MIS Quarterly (26:2), pp. xiii–xxiii. (http://www.misq.org/misreview/announce.html).
Weizenbaum, J. 1966. “ELIZA-A Computer Program for the Study of Natural Language Communication
between Man and Machine,” Communications of the ACM (9:1), pp. 36–45.
(https://doi.org/10.1145/365153.365168).
Wellnhammer, N., Dolata, M., Steigler, S., and Schwabe, G. 2020. “Studying with the Help of Digital
Tutors: Design Aspects of Conversational Agents That Influence the Learning Process,” in
Proceedings of the 53rd Hawaii International Conference on System Sciences, Hawaii International
Conference on System Sciences. (https://doi.org/10.24251/HICSS.2020.019).
Winkler, R., Büchi, C., and Söllner, M. 2019. “Improving Problem-Solving Skills with Smart Personal
Assistants: Insights from a Quasi Field Experiment,” Fortieth International Conference on
Information Systems (ICIS), pp. 1–17.
Winkler, R., and Soellner, M. 2018. “Unleashing the Potential of Chatbots in Education: A State-Of-The-
Art Analysis,” Academy of Management Proceedings (2018:1), Academy of Management, p. 15903.
Winkler, R., Soellner, M., and Leimeister, J. M. 2020. “Improving Students’ Problem-Solving Skills with
Smart Personal Assistants,” Academy of Management Proceedings (2020:1), Academy of
Management, p. 11496. (https://doi.org/10.5465/ambpp.2020.11496abstract).
Winkler, R., and Söllner, M. 2020. “Towards Empowering Educators to Create Their Own Smart Personal
Assistants,” in Proceedings of the 53rd Annual Hawaii International Conference on System
Sciences, pp. 22–31. (https://hdl.handle.net/10125/63744).
Winkler, R., Söllner, M., and Leimeister, J. M. 2021. “Enhancing Problem-Solving Skills with Smart
Personal Assistant Technology,” Computers & Education.
Wu, E. H. K., Lin, C. H., Ou, Y. Y., Liu, C. Z., Wang, W. K., and Chao, C. Y. 2020. “Advantages and
Constraints of a Hybrid Model K-12 E-Learning Assistant Chatbot,” IEEE Access (8), Institute of
Electrical and Electronics Engineers Inc., pp. 77788–77801.
(https://doi.org/10.1109/ACCESS.2020.2988252).
Xu, A., Liu, Z., Akkiraju, R., Guo, Y., and Sinha, V. 2017. “A New Chatbot for Customer Service on Social
Media,” in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems,
Association for Computing Machinery, pp. 3506–3510. (https://doi.org/10.1145/3025453.3025496).
Yin, J., Goh, T.-T., Yang, B., and Xiaobin, Y. 2021. “Conversation Technology With Micro-Learning: The
Impact of Chatbot-Based Learning on Students’ Learning Motivation and Performance,” Journal of
Educational Computing Research (59:1), SAGE Publications Inc., pp. 154–177.
(https://doi.org/10.1177/0735633120952067).
Zierau, N., Wambsganss, T., Janson, A., and Schöbel, S. 2020. “The Anatomy of User Experience with
Conversational Agents: A Taxonomy and Propositions of Service Clues,” International Conference on
Information Systems (ICIS), pp. 1–18. (https://www.alexandria.unisg.ch/261080/).
Appendix

Interview number | Function   | Expertise
1                | Student    | Conversational Agent Programming: Programming of conversational agents in various domains
2                | Researcher | Conversational Agent Research: Design of chatbots
3                | Researcher | Conversational Agent Research: Design of chatbots
4                | Researcher | Education Research: Pedagogy and education
5                | Researcher | Smart Personal Assistants Research
6                | Researcher | Conversational Agent Research
7                | Researcher | Psychology: Psychology, especially in the domain of Business Psychology

Table 6. Interview partners of the evaluation
... Conversational tutoring systems (CTSs) are one promising way to address this challenge of scalability and to engage students in meaningful and individual interactions with an artificial tutor (Weber et al., 2021;Han et al., 2023). CTSs are learning tools that communicate with users through dialog-based interfaces using natural language (Weber et al., 2021;Winkler & Söllner, 2018). ...
... Conversational tutoring systems (CTSs) are one promising way to address this challenge of scalability and to engage students in meaningful and individual interactions with an artificial tutor (Weber et al., 2021;Han et al., 2023). CTSs are learning tools that communicate with users through dialog-based interfaces using natural language (Weber et al., 2021;Winkler & Söllner, 2018). They have been successfully designed and deployed to accompany learners individually on their learning paths in various domains, such as developing factual knowledge (Ruan et al., 2019), programming skills (Winkler et al., 2020), or problem-solving (Winkler et al., 2021). ...
... com/), a novel generative pretrained trans-former model with a conversational interface, research and practice have been calling for more in-depth empirical investigations of the influence of CTS on students writing assignments and learning processes (e.g., (Sharples, 2023;Holden Thorp, 2023;Roscoe et al., 2014;Baidoo-Anu & Owusu Ansah, 2023)). Despite the growing research interest, many CTSs still yield only mixed effects on students' learning outcomes and experiences in empirical studies (Weber et al., 2021;Han et al., 2023;Winkler & Söllner, 2018;Følstad & Brandtzaeg, 2017;Zierau et al., 2020). ...
Article
Full-text available
Conversational tutoring systems (CTSs) offer a promising avenue for individualized learning support, especially in domains like persuasive writing. Although these systems have the potential to enhance the learning process, the specific role of learner control and inter- activity within them remains underexplored. This paper introduces WritingTutor , a CTS designed to guide students through the pro- cess of crafting persuasive essays, with a focus on varying levels of learner control. In an experimental study involving 96 students, we evaluated the effects of high-level learner control, encompassing con- tent navigation and interface appearance control, against a benchmark version of WritingTutor without these features and a static, non- interactive tutoring group. Preliminary findings suggest that tutoring and learner control might enhance the learning experience in terms of enjoyment, ease-of-use, and perceived autonomy. However, these differences are not significant after pair-wise comparison and appear not to translate to significant differences in learning outcomes. This research contributes to the understanding of learner control in CTS, offering empirical insights into its influence on the learning experience.
... Related emotions: Almost all the sources used as references in this RQ1 discuss the emotions involved in the pedagogical agent model or framework. The emotions discussed also vary, ranging from those related to learner focus such as attention or engagement [67], [52], [64], emotion-related to the intrinsic motivation of learners such as enthusiasm, sympathy, reassuring, and self-confidence [67], [50], emotions related learning satisfaction [67], [13], to anxiety-related emotions [42]. Among all the sources discussing models or frameworks of pedagogical agents with emotions, none of them explore emotions based on widely recognized psychological theories, such as basic emotions, continuous emotions, or academic emotions [68]. ...
... Related emotions: Almost all the sources used as references in this RQ1 discuss the emotions involved in the pedagogical agent model or framework. The emotions discussed also vary, ranging from those related to learner focus such as attention or engagement [67], [52], [64], emotion-related to the intrinsic motivation of learners such as enthusiasm, sympathy, reassuring, and self-confidence [67], [50], emotions related learning satisfaction [67], [13], to anxiety-related emotions [42]. Among all the sources discussing models or frameworks of pedagogical agents with emotions, none of them explore emotions based on widely recognized psychological theories, such as basic emotions, continuous emotions, or academic emotions [68]. ...
... Related emotions: Almost all the sources used as references in this RQ1 discuss the emotions involved in the pedagogical agent model or framework. The emotions discussed also vary, ranging from those related to learner focus such as attention or engagement [67], [52], [64], emotion-related to the intrinsic motivation of learners such as enthusiasm, sympathy, reassuring, and self-confidence [67], [50], emotions related learning satisfaction [67], [13], to anxiety-related emotions [42]. Among all the sources discussing models or frameworks of pedagogical agents with emotions, none of them explore emotions based on widely recognized psychological theories, such as basic emotions, continuous emotions, or academic emotions [68]. ...
Article
Full-text available
The purpose of this systematic literature review (SLR) is to examine more deeply the various learning interventions provided by pedagogical agents and their relation to emotional aspects. A pedagogical agent is a software agent that provides guidance, feedback, or intervention to learners in digital environments. Pedagogical agents have the potential to address issues in computer-based learning, particularly online learning, which often neglects affective aspects, such as the emotions of its users. Most research on pedagogical agents has focused on their visual design, types of feedback, and other empirical aspects. However, this research has not examined what underlies these agents’ ability to provide interventions personalized to learner’s emotions. This paper explores the extent to which pedagogical agents have addressed learners’ emotional needs. It also identifies opportunities and challenges for further research on interventions by pedagogical agents personalized to learners’ emotional states. The study’s research questions include: 1) To what extent does research exist on models, frameworks, or architectures for pedagogical agents, especially those related to emotions? 2) How are pedagogical agents represented, what types of interventions do they use, and how do these interventions affect learners’ emotions? 3) What kinds of inputs are used to activate the pedagogical agent’s functions? This SLR applied the Kitchenham method to select reference sources from 2013 to 2023 and was indexed by Scopus in the Q1 to Q4 range. Our review revealed the absence of a specific model for mapping out interventions tailored to the learner’s emotional needs. Most existing pedagogical agents provide learning interventions that are less adaptive and personalized based on the learner’s emotional state and are applied to asynchronous learning systems such as e-learning. 
There are still very few pedagogical agents that use real-time input technology by utilizing artificial intelligence to recognize the emotional state of learners so that they can trigger an adaptive and personalized intervention.
... Therefore, existing design guidelines and taxonomies (Feine et al., 2019;Zierau et al., 2020) for the development of CAs can offer guidance in design choices; however, the design of PCAs must also specifically account for the characteristics of the learning context. To guide design choices, Wellnhammer et al. (2020) and Weber et al. (2021) provide an overview of different design elements that need to be considered. Additionally, the Learning with Pedagogical Agents (LPAM) model by Dolata et al. (2023) illustrates design considerations for the PCA and their relation to the learning environment based on the activity theory. ...
Preprint
Full-text available
Workplace learning is used to train employees systematically, e.g., via e-learning or in 1:1 training. However, this is often deemed ineffective and costly. Whereas pure e-learning lacks the possibility of conversational exercise and personal contact, 1:1 training with human instructors involves a high level of personnel and organizational costs. Hence, pedagogical conversational agents (PCAs), based on generative AI, seem to compensate for the disadvantages of both forms. Following Action Design Research, this paper describes an organizational communication training with a Generative PCA (GenPCA). The evaluation shows promising results: the agent was perceived positively among employees and contributed to an improvement in self-determined learning. However, the integration of such agent comes not without limitations. We conclude with suggestions concerning the didactical methods, which are supported by a GenPCA, and possible improvements of such an agent for workplace learning.
... Several taxonomies have already been developed to classify CAs approaches from different perspectives. Janssen et al. (2020) focus on domain-specific chatbot applications, whereas Weber et al. (2021) present a taxonomy for CAs in education. Additional contributions provide conceptualizations of certain aspects of these systems like the taxonomy for social cues in CAs of Feine et al. (2019), as well as the exploration of chatbot relationship archetypes based on the time of interaction of Nißen et al. (2022). ...
Conference Paper
Full-text available
Conversational agents are a technology that is used today in many different ways, for example as chatbots or voice dialog systems. While they are mostly used for applications in the business sector, research is also focusing on their use in other areas, such as medicine or disaster management. Events in recent years, such as the global Covid pandemic and advances in the field of language learning models, have led to many new approaches. Taxonomies are a good way to provide researchers and practitioners with a good overview of this growing field of research by classifying new and existing approaches. In this paper we present the current results in a methodological approach to develop a taxonomy for the classification of conversational agent approaches in disaster management. We describe the data basis of a structured literature search, the implementation of the method and the current dimensions and characteristics of the emerging taxonomy.
... Additionally, similar to the approach taken in [17], which focuses on structuring and guiding peer interaction with an emphasis on knowledge building, our proposed conversational module can be employed in Massive Open Online Courses (MOOCs) to support students and enhance knowledge acquisition by incorporating argumentative conversations through conversational agents. Following Weber et al.'s taxonomy of educational conversational agents [18], we understand our conversational agent to be unspecific to different target groups, to support learning factual knowledge and applying it, and thereby to support both practice at these levels, and preparation for subsequent learning phases. To evaluate the workflow, we created three different conversational modules. ...
Article
Full-text available
In this work, we investigate a systematic workflow that supports the learning engineering process of 1) formulating the starting question for a conversational module based on existing learning materials, 2) specifying the input that transformer-based language models need to function as classifiers, and 3) specifying the adaptive dialogue structure, i.e., the turns the classifiers can choose between. Our primary purpose is to evaluate the effectiveness of conversational modules if a learning engineer follows our workflow. Notably, our workflow is technically lightweight, in the sense that no further training of the models is expected. To evaluate the workflow, we created three different conversational modules. For each, we assessed classifier quality and how coherent the follow-up question asked by the agent was based on the classification results of the user response. The classifiers reached F1-macro scores between 0.66 and 0.86, and the percentage of coherent follow-up questions asked by the agent was between $79\%$ and $84\%$ . These results highlight, firstly, the potential of transformer-based models to support learning engineers in developing dedicated conversational agents. Secondly, it highlights the necessity to consider the quality of the adaptation mechanism together with the adaptive dialogue. As such models continue to be improved, their benefits for learning engineering will rise. Future work would be valuable to investigate the usability of this workflow by learning engineers with different backgrounds and prior knowledge on the technical and pedagogical aspects of learning engineering.
... A utilização de agentes conversacionais, também conhecidos como chatbots, na educação tem despertado um crescente interesse de pesquisadores, educadores e instituições de ensino em todo o mundo [Weber et al. 2021, Tlili et al. 2023, Kasneci et al. 2023. A capacidade desses sistemas em compreender e processar grandes volumes de dados, além de sua habilidade em aprender e adaptar-se a novas informações, oferece oportunidades promissoras para aprimorar o processo de ensino-aprendizagem. ...
Conference Paper
A utilização de agentes conversacionais, também conhecidos como chatbots, na educação tem despertado um crescente interesse de pesquisadores, educadores e instituições de ensino em todo o mundo. Esses sistemas têm a capacidade de compreender e processar grandes volumes de dados, oferecendo suporte individualizado aos alunos. No entanto, é importante considerar que esses sistemas podem gerar respostas incorretas em tarefas que envolvem raciocínio lógico. Este artigo tem como objetivo avaliar a habilidade do agente conversacional ChatGPT na resolução de exercícios de Dedução Natural em lógica proposicional. O estudo busca verificar se o ChatGPT é uma ferramenta adequada para essa tarefa. Para isso, são realizados experimentos utilizando uma base de dados de exercícios de dedução natural em lógica proposicional. Esse estudo busca contribuir para a compreensão das capacidades e limitações dos agentes conversacionais em habilidades de raciocínio lógico.
... To address the structuration of design dimensions and characteristics of writing support systems systematically, we adopt the procedure for developing a taxonomy from Nickerson et al. (2013), as this has already been used in IS research and has been used to classify several taxonomies (Weber et al. 2021;Zierau et al. 2020). The use of classifications is valuable for both researchers and practitioners as it allows for the organization of complex domains, which is especially crucial in emerging fields like writing support systems. ...
Conference Paper
Full-text available
In the field of natural language processing (NLP), advances in transformer architectures and large-scale language models have led to a plethora of designs and research on a new class of information systems (IS) called writing support systems, which help users plan, write, and revise their texts. Despite the growing interest in writing support systems in research, there needs to be more common knowledge about the different design elements of writing support systems. Our goal is, therefore, to develop a taxonomy to classify writing support systems into three main categories (technology, task/structure, and user). We evaluated and refined our taxonomy with seven interviewees with domain expertise, identified three clusters in the reviewed literature, and derived five archetypes of writing support system applications based on our categorization. Finally, we formulate a new research agenda to guide researchers in the development and evaluation of writing support systems.
... However, we lack a holistic understanding of the role of emotions in human-CA interactions cumulated from studies in this area. In this regard, several literature reviews have examined CA-related studies broadly (Diederich et al. 2022;Zierau et al. 2020), while some reviews have addressed specific CA application contexts, like healthcare (e.g., ter Stal et al. 2020) and education (Weber et al. 2021). Others have reviewed research on specific CA characteristics such as their social cues (Feine et al. 2019) or text-based communication (e.g., Rapp et al. 2021). ...
Article
Conversational agents (CA), powered by natural language processing, have become increasingly popular across multiple domains. However, these agents often fail to communicate effectively with users, leading to poor adoption and task outcomes. Emotions are a fundamental aspect of such interactions, influencing use and adoption of digital artifacts. Despite their salience, understanding of the role of emotions in human-CA collaboration remains fragmented. Motivated thus, we review empirical studies on emotions in human-CA interactions. We synthesize the findings from the reviewed studies in terms of antecedents, emotion-related outcomes, and their relationships in the form of a descriptive model. Based on the synthesis, we identify knowledge gaps and propose directions for future research. Our analysis provides insights into the role of emotions in human-CA interactions and contributes to research in this area.
Article
Full-text available
Although training evaluation is recognized as an important component of the instructional design model, there are no theoretically based models of training evaluation. This article attempts to move toward such a model by developing a classification scheme for evaluating learning outcomes. Learning constructs are derived from a variety of research domains, such as cognitive, social, and instructional psychology and human factors. Drawing from this research, we propose cognitive, skill-based, and affective learning outcomes (relevant to training) and recommend potential evaluation measures. The learning outcomes and associated evaluation measures are organized into a classification scheme. Requirements for providing construct-oriented evidence of validity for the scheme are also discussed.
Conference Paper
Full-text available
Enrollments in distance-learning scenarios have been tremendously rising. Here, the ability of students to receive answers to questions is hindered due to an uneven educator-student ratio. Students often do not receive quick answers to simple questions, and educators feel stressed by answering the same questions repeatedly. However, advances in Natural-Language-Processing and Machine Learning bear the opportunity to design new forms of human-computer interaction by embedding question-answering (Q&A) models in conversational agents. Such a system enables students to receive personalized answers independent of an instructor, time, and location. This paper presents the first steps of our design science research project on designing a student-centered Q&A system that helps learners receive personalized answers in large-scale settings. Based on social response theory and user interviews, we propose five design principles for the design of a conversational Q&A system. Furthermore, we instantiate those principles as design features in a natively built prototype.
Article
Full-text available
Conversational agents (CAs), described as software with which humans interact through natural language, have increasingly attracted interest in both academia and practice, due to improved capabilities driven by advances in artificial intelligence and, specifically, natural language processing. CAs are used in contexts like people's private life, education, and healthcare, as well as in organizations, to innovate and automate tasks, for example in marketing and sales or customer service. In addition to these application contexts, such agents take on different forms concerning their embodiment, the communication mode, and their (often human-like) design. Despite their popularity, many CAs are not able to fulfill expectations and to foster a positive user experience is a challenging endeavor. To better understand how CAs can be designed to fulfill their intended purpose, and how humans interact with them, a multitude of studies focusing on human-computer interaction have been carried out. These have contributed to our understanding of this technology. However, currently a structured overview of this research is missing, which impedes the systematic identification of research gaps and knowledge on which to build on in future studies. To address this issue, we have conducted an organizing and assessing review of 262 studies, applying a socio-technical lens to analyze CA research regarding the user interaction, context, agent design, as well as perception and outcome. We contribute an overview of the status quo of CA research, identify four research streams through a cluster analysis, and propose a research agenda comprising six avenues and sixteen directions to move the field forward.
Article
Full-text available
Smart Personal Assistants (SPAs, such as Amazon's Alexa or the Google Assistant) let users interact with computers in a more natural and sophisticated way than was possible before. Although there is an increasing amount of research on SPA technology in education, empirical evidence of its ability to offer dynamic scaffolding that enhances students' problem-solving skills is still scarce. To fill this gap, this paper aims to find out whether interactions with scaffolding-based SPA technology enable students to internalize and apply problem-solving steps on their own, in a 10th-grade high school class and a vocational business school class. Students in the experiment classes completed their assignments using Smart Personal Assistants, whereas students in the control classes completed the same assignments using traditional methods. This study used a mixed-method approach consisting of two field quasi-experiments and one post-experiment focus group discussion. The empirical results revealed that students in the experiment classes acquired significantly more problem-solving skills than those in the control classes (Study 1: p = 0.0396; Study 2: p < 0.001) and also uncovered several changes in students' learning processes. The findings provide first empirical evidence for the value of using SPA technology for skill development in general, and for problem-solving skill development in particular.
Conference Paper
Full-text available
Conversational Agents (CAs) have become a new paradigm for human-computer interaction. Despite the potential benefits, there are ethical challenges to the widespread use of these agents that may inhibit their use for individual and social goals. However, besides a multitude of behavioral and design-oriented studies on CAs, a distinct ethical perspective remains underrepresented in the current literature. In this paper, we present the first steps of our design science research project on principles for a value-sensitive design of CAs. Based on theoretical insights from 87 papers and eleven user interviews, we propose preliminary requirements and design principles for a value-sensitive design of CAs. Moreover, we evaluate the preliminary principles in an expert-based evaluation. The evaluation confirms that an ethical approach to designing CAs is promising for certain scenarios.
Conference Paper
Full-text available
Recent advances in natural language processing offer the opportunity not only to design new dialog-based forms of human-computer interaction but also to analyze the argumentation quality of texts. Both can be leveraged to provide students with individual and adaptive tutoring on their personal learning journey toward developing argumentation skills. Therefore, we present the results of our design science research project on how to design an adaptive dialog-based tutoring system that helps students learn how to argue. Our results indicate the usefulness of an adaptive dialog-based tutoring system for supporting students individually, independent of a human instructor, time, and place. In addition to providing our embedded software artifact, we document our evaluated design knowledge as a design theory. Thus, we provide a first step toward a nascent design theory for adaptive conversational tutoring systems that individually support the metacognitive skill education of students in traditional learning scenarios.
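The adaptive feedback loop this abstract describes, analyzing argumentation quality and responding with individual tutoring hints, can be sketched with a deliberately crude heuristic. All names, marker lists, and feedback strings below are illustrative assumptions; the authors' system relies on trained argumentation-mining models, not keyword matching.

```python
import re

# Crude proxy markers for premises and counter-considerations. A real system
# would use trained argumentation-mining models rather than keyword lookup.
PREMISE_MARKERS = {"because", "since", "therefore", "thus", "hence"}
CONTRAST_MARKERS = {"however", "although", "nevertheless", "whereas"}

def argument_feedback(text: str) -> str:
    """Return adaptive tutoring feedback based on simple marker counts."""
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    has_premise = bool(tokens & PREMISE_MARKERS)
    has_contrast = bool(tokens & CONTRAST_MARKERS)
    if has_premise and has_contrast:
        return "Good: you link claims to reasons and consider counterarguments."
    if has_premise:
        return "You support your claims; try also addressing a counterargument."
    return "Try justifying your claim, e.g. with 'because' or 'therefore'."
```

The point of the sketch is the interaction pattern, not the analysis: feedback is graded to the weakest missing element of the student's text, which is what makes the tutoring dialog adaptive rather than one-size-fits-all.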