1: Schematic drawing of a centrifugal governor that keeps the rotational speed of a steam engine constant under varying load conditions based on feedback principles. Changes in engine speed cause the centrifugal pendulum (a) to swing in- or outward, mechanically moving a lever (b) which opens or closes the inlet valve for steam (c), thus regulating the amount of steam getting into the engine (Mayr 1970, pp. 2-3, 109-113).
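The governor is a classic negative feedback loop: deviation of the measured speed from a setpoint drives a corrective change of the steam inlet. Below is a minimal sketch of this principle only; the first-order "engine" model, the gain, and all parameter values are illustrative assumptions, not taken from the cited source or from the mechanical details of the device.

```python
# Minimal sketch of the negative-feedback principle a centrifugal governor
# implements. The "engine" dynamics and all constants are illustrative
# assumptions, not taken from the cited source.

def simulate_governor(setpoint=100.0, load=0.5, steps=2000, dt=0.1, gain=0.05):
    speed = setpoint          # engine speed (arbitrary units)
    valve = 0.5               # steam inlet valve opening in [0, 1]
    for _ in range(steps):
        error = setpoint - speed                          # pendulum senses deviation
        valve = min(1.0, max(0.0, valve + gain * error))  # lever moves the valve
        # crude engine dynamics: steam torque minus load-dependent drag
        speed += dt * (valve * 20.0 - load * speed * 0.1)
    return speed, valve

if __name__ == "__main__":
    final_speed, final_valve = simulate_governor(load=0.5)
    print(f"speed ~ {final_speed:.1f}, valve ~ {final_valve:.2f}")
```

Under a heavier load the loop settles at a smaller valve error, illustrating how the feedback keeps the speed near the setpoint despite load changes.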

Source publication
Thesis
Dialogue is an interactive endeavour in which participants jointly pursue the goal of reaching understanding. Since participants enter the interaction with their individual conceptualisation of the world and their idiosyncratic way of using language, understanding cannot, in general, be reached by exchanging messages that are encoded when speaking...

Citations

... Indeed, evidence of communicative efficiency in human language can be found at different linguistic levels, such as pragmatic language understanding [19], the lexicon and word length [38], the semantics [23,26], the syntax and word order [11,22] and the morphology [6,14]. Likewise, looking at efficiency to assess the effects of adapting communicative behavior is an established approach [7]. However, it is not well understood which adaptations of a verbal explanation, especially when generated by artificial agents, increase communicative efficiency with human users. ...
... Feedback can also differ in polarity (positive or negative) and can signal four levels of involvement: attention, hearing, understanding or acceptance [10]. So-called attentive speaker agents were proposed to process these aspects of feedback to enable online adaptation of their own communicative behavior [7,8]. These systems can even actively elicit feedback. ...
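As a concrete illustration of the two dimensions named in this excerpt, the sketch below represents a feedback signal by its polarity and the level of involvement it signals, together with a toy adaptation rule in the spirit of an attentive speaker. All type and function names are hypothetical and only illustrate the idea; they are not taken from the cited systems.

```python
# Hypothetical sketch: representing listener feedback by polarity and the
# level of involvement it signals, plus a toy online adaptation rule.
from dataclasses import dataclass
from enum import Enum

class Level(Enum):              # four levels of involvement from the excerpt
    ATTENTION = 1
    HEARING = 2
    UNDERSTANDING = 3
    ACCEPTANCE = 4

@dataclass
class Feedback:
    polarity: bool              # True = positive ("mhm"), False = negative ("huh?")
    level: Level

def adapt(feedback: Feedback) -> str:
    """Toy policy: negative feedback triggers repetition or rephrasing,
    positive acceptance lets the speaker move on."""
    if not feedback.polarity:
        return "rephrase" if feedback.level.value >= Level.UNDERSTANDING.value else "repeat"
    return "continue" if feedback.level is Level.ACCEPTANCE else "monitor"

print(adapt(Feedback(polarity=False, level=Level.UNDERSTANDING)))  # -> rephrase
```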
Conference Paper
It is commonly assumed that explanations should be tailored to the addressee in order to yield higher understanding. Consequently, much work on explainable intelligent agents has been directed to user-adapted explanations. However, recent studies show ambiguous results with regard to the efficiency of adaptive and non-adaptive explanations. This raises the question of whether an explanation, generated by a socially interactive agent, should be adapted. In this paper, we present a general approach to adaptive explanation generation as a non-stationary decision process, and we study the benefits and pitfalls of adapting explanations in an ongoing interaction with a user. Specifically, we report results from a between-subject online evaluation in a game explanation domain with three conditions (non-interactive, interactive but non-adaptive, adaptive). Results show that the decision for or against adaptivity depends on the goal of the explanation, the complexity of the domain and external constraints. Based on the collected data we discuss challenges that arise from the individuality of adaptive dialogues, such as comparability and the tendency to produce results with a large variance.
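This abstract frames adaptive explanation generation as a non-stationary decision process. The sketch below shows one generic way such a process can be handled: a bandit-style choice between adapting and not adapting whose value estimates use a constant step size so that older observations are gradually forgotten. This is a hypothetical illustration of the general idea, not the authors' model; the action set, reward signal, and parameters are assumptions.

```python
# Hypothetical sketch: treating "adapt" vs. "don't adapt" as a non-stationary
# decision problem. An exponential recency-weighted average lets the value
# estimates track a drifting user; this is not the cited authors' model.
import random

ACTIONS = ["adapt", "no_adapt"]
value = {a: 0.0 for a in ACTIONS}    # running value estimate per action
ALPHA = 0.2                          # constant step size -> recency weighting
EPSILON = 0.1                        # exploration rate

def choose_action() -> str:
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: value[a])

def update(action: str, reward: float) -> None:
    # constant-alpha update keeps tracking a non-stationary reward signal
    value[action] += ALPHA * (reward - value[action])

# usage: reward could be a measured understanding or efficiency score per turn
for turn in range(50):
    action = choose_action()
    reward = random.random()         # placeholder for an observed outcome
    update(action, reward)
print(value)
```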
... In evaluation studies with human users [66,67], the attentive speaker agent was compared with agents that either do not adapt their social behaviour to their beliefs about listeners' needs (lower bound) or do it incessantly and explicitly ask about it (upper bound). That is, the first condition decouples the social loop from the cognitive one, while the second couples them excessively but in nonadaptive, socially non-resonant ways. ...
... (a) An 'attentive speaker' agent communicates with a human user while interpreting the interlocutor's communicative feedback and adapting to it online. (b) Results demonstrate that humans engage in this and produce significantly more feedback with the attentive speaker (AS) than with agents that do not attend (NA) or explicitly ask for confirmation all the time (EA) [67]. Datapoints are light grey, black dots are medians, black lines are whiskers representing 1.5 × interquartile range, and mid gaps are quartiles. ...
Article
It is increasingly important for technical systems to be able to interact flexibly, robustly and fluently with humans in real-world scenarios. However, while current AI systems excel at narrow task competencies, they lack crucial interaction abilities for the adaptive and co-constructed social interactions that humans engage in. We argue that a possible avenue to tackle the corresponding computational modelling challenges is to embrace interactive theories of social understanding in humans. We propose the notion of socially enactive cognitive systems that do not rely solely on abstract and (quasi-)complete internal models for separate social perception, reasoning and action. By contrast, socially enactive cognitive agents are supposed to enable a close interlinking of the enactive socio-cognitive processing loops within each agent, and the social-communicative loop between them. We discuss theoretical foundations of this view, identify principles and requirements for corresponding computational approaches, and highlight three examples of our own research that showcase the interaction abilities achievable in this way. This article is part of a discussion meeting issue ‘Face2face: advancing the science of social interaction’.
... A different approach is taken by Buschmeier and Kopp (2014), who model feedback interpretation and representation in terms of an "attributed listener state" (ALS). In this model of feedback understanding, the user's feedback behaviors, relevant features of the agent's utterances, and the dialogue context are used in a Bayesian network to reason about the user's likely mental state of listening, more specifically whether contact, perception, understanding, acceptance and agreement (see Section 2) are believed to be low, medium, or high (Buschmeier and Kopp, 2012, 2014; Buschmeier, 2018). As Figure 3 illustrates, this inference can be done incrementally while the agent is speaking, e.g., at every backchannel relevance place, so that the agent always has an up-to-date idea of how well the user is following its presentation. ...
... As Figure 3 illustrates, this inference can be done incrementally while the agent is speaking, e.g., at every backchannel relevance place, so that the agent always has an up-to-date idea of how well the user is following its presentation. Buschmeier (2018) calls this process a "minimal" form of mentalizing (following the concept of a "most minimal partner model"; Galati and Brennan, 2010), which enables the agent to adapt its presentation in a high-level fashion, e.g., by ...
FIGURE 2 | An example of the knowledge graph approach for grounding mentioned in Section 4.2.3, adapted from Axelsson and Skantze (2020).
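To make the incremental inference in these two excerpts concrete, below is a heavily simplified sketch of updating a belief over a single listening state (understanding: low/medium/high) from feedback observed at each backchannel relevance place. The actual ALS model is a full Bayesian network over several states and many cues; the state space, likelihood values, and function names here are illustrative assumptions.

```python
# Heavily simplified sketch of "attributed listener state"-style incremental
# inference: a discrete Bayesian update of P(understanding) from feedback
# observed at backchannel relevance places. Likelihoods are made-up values;
# the cited model is a larger Bayesian network over several listener states.

STATES = ("low", "medium", "high")

# P(observed feedback type | understanding state) -- assumed numbers
LIKELIHOOD = {
    "positive_backchannel": {"low": 0.1, "medium": 0.4, "high": 0.7},
    "negative_signal":      {"low": 0.6, "medium": 0.3, "high": 0.1},
    "no_feedback":          {"low": 0.3, "medium": 0.3, "high": 0.2},
}

def update_belief(belief: dict, observation: str) -> dict:
    """One Bayesian update at a backchannel relevance place."""
    posterior = {s: belief[s] * LIKELIHOOD[observation][s] for s in STATES}
    z = sum(posterior.values())
    return {s: p / z for s, p in posterior.items()}

belief = {s: 1 / 3 for s in STATES}            # uniform prior
for obs in ["no_feedback", "negative_signal", "positive_backchannel"]:
    belief = update_belief(belief, obs)
    print(obs, {s: round(p, 2) for s, p in belief.items()})
```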
Article
Intelligent agents interacting with humans through conversation (such as a robot, embodied conversational agent, or chatbot) need to receive feedback from the human to make sure that their communicative acts have the intended consequences. At the same time, the human interacting with the agent will also seek feedback, in order to ensure that her communicative acts have the intended consequences. In this review article, we give an overview of past and current research on how intelligent agents should be able both to give meaningful feedback toward humans and to understand feedback given by the users. The review covers feedback across different modalities (e.g., speech, head gestures, gaze, and facial expression), different forms of feedback (e.g., backchannels, clarification requests), and models for allowing the agent to assess the user's level of understanding and adapt its behavior accordingly. Finally, we analyse some shortcomings of current approaches to modeling feedback, and identify important directions for future research.
... You know what I mean?) to understand if B is following, or she can elicit attentive listener feedback in B. On the other hand, B could also want to show his attention by using communicative feedback. Positive evidence of understanding, thus, is provided by communicative feedback and comes with attention that is unbroken or undisturbed (Buschmeier 2018; Buschmeier and Kopp 2018). Furthermore, according to Clark (1996, pp. 147-148), these actions are processed following the concepts of upward completion, i.e., in a ladder of actions, it is only possible to complete actions from the bottom level u through any level in the ladder, and downward evidence, i.e., in a ladder of actions, evidence that one level is complete is also evidence that all levels below it are complete. ...
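The ladder notions in this excerpt translate naturally into a small propagation rule: evidence that some level is complete licenses the conclusion that every level below it is complete. The sketch below is a hypothetical toy encoding of that rule; the level names are illustrative and it is not an implementation from the cited works.

```python
# Toy encoding of Clark-style action ladders: "downward evidence" means that
# evidence of completion at one level is evidence that all lower levels are
# complete as well. Level names are illustrative placeholders.

LADDER = ["contact", "perception", "understanding", "acceptance"]  # bottom -> top

def downward_evidence(completed_level: str) -> set:
    """Return all levels counted as complete given evidence at one level."""
    idx = LADDER.index(completed_level)
    return set(LADDER[: idx + 1])

# Evidence of understanding implies contact and perception are complete too.
print(downward_evidence("understanding"))  # {'contact', 'perception', 'understanding'}
```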
... The use of understanding feedback was also studied in incremental models, as signals used to update the grounding state (Visser et al. 2012, 2014; Eshghi et al. 2015). In Buschmeier and Kopp (2018) and Buschmeier (2018), acknowledgement acts are studied as attentiveness markers: "artificial conversational agents should have the capability to use such a mechanism, too, because it would allow them to approach potential or upcoming problems in understanding (and other listening related communicative functions) before they become more serious and require costly repair actions" (Buschmeier and Kopp 2018, p. 1220). Acknowledgement acts are important for collaborative goals, as also pointed out in Schlangen (2019), and more generally also in Benotti and Blackburn (2021). ...
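As a rough illustration of feedback acts updating a grounding state incrementally, the sketch below marks presented installments as grounded when an acknowledgement is observed, and flags a clarification check when too much material remains unacknowledged (addressing problems before costly repair). The class, method names, and patience threshold are hypothetical and not from the cited models.

```python
# Hypothetical sketch of an incremental grounding state: installments start as
# "pending" and are grounded when acknowledged; long unacknowledged stretches
# trigger an early, cheap clarification instead of a costly repair later.

class GroundingState:
    def __init__(self, patience: int = 2):
        self.pending = []          # installments not yet acknowledged
        self.grounded = []         # installments the listener acknowledged
        self.patience = patience   # tolerated number of unacknowledged installments

    def add_installment(self, text: str) -> None:
        self.pending.append(text)

    def on_acknowledgement(self) -> None:
        # an acknowledgement act grounds everything presented so far
        self.grounded.extend(self.pending)
        self.pending.clear()

    def needs_clarification_check(self) -> bool:
        return len(self.pending) > self.patience

gs = GroundingState()
gs.add_installment("The valve controls the steam flow.")
gs.on_acknowledgement()
gs.add_installment("Higher load slows the engine down.")
gs.add_installment("The governor then opens the valve.")
gs.add_installment("This keeps the speed constant.")
print(gs.needs_clarification_check())   # True -> ask "are you still with me?"
```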
Article
This work reports on the literature on grounding in conversational agents, as one of the pragmatic aspects adopted to ensure better communicative efficiency in dialogue systems. The paper starts with a general description of the theory of grounding. As far as its computational implications are concerned, grounding phenomena are firstly framed within the common grounding processes described in terms of grounding acts. Secondly, they are considered in the argumentation-related framework within which already grounded information is processed. Open issues and application gaps are finally highlighted.
... For example, Clark and Krych [20] demonstrated that dyads (i.e., pairs of interactants) who could not monitor each other at all made eight times as many errors as dyads that could take advantage of monitoring each other. Studies performed on tasks other than explanatory ones have shown that the understanding displayed by the interlocutor [19], [60]-[63] and the modalities via which it is expressed [64] are informative to the speaker when, for example, reformulating an utterance [51], adjusting the modalities [64], or addressing the satisfaction and motivation of the interaction partner [65]. Findings such as these led scholars to claim that a function of a conversation cannot be defined on the level of the individual [24]. ...
Article
The recent surge of interest in explainability in artificial intelligence (XAI) is propelled by not only technological advancements in machine learning, but also by regulatory initiatives to foster transparency in algorithmic decision making. In this article, we revise the current concept of explainability and identify three limitations: passive explainee, narrow view on the social process, and undifferentiated assessment of understanding. In order to overcome these limitations, we present explanation as a social practice in which explainer and explainee co-construct understanding on the microlevel. We view the co-construction on a microlevel as embedded into a macrolevel, yielding expectations concerning, e.g., social roles or partner models: Typically, the role of the explainer is to provide an explanation and to adapt it to the current level of understanding of the explainee; the explainee, in turn, is expected to provide cues that guide the explainer. Building on explanations being a social practice, we present a conceptual framework that aims to guide future research in XAI. The framework relies on the key concepts of monitoring and scaffolding to capture the development of interaction. We relate our conceptual framework and our new perspective on explaining to transparency and autonomy as objectives considered for XAI.