Figure 3 - uploaded by Lina Mavrina
An example of an argument graph. Edges labelled with − and + represent attack and support, respectively.


Source publication
Article
Full-text available
The establishment and maintenance of common ground, i.e. mutual knowledge, beliefs, and assumptions, is important if dialogue systems are to be seen as valid interlocutors in both task-oriented and open-domain dialogue. It is therefore important to provide these systems with knowledge models, so that their conversations could be grounded in th...

Contexts in source publication

Context 1
... and colleagues (Hunter, Polberg, and Thimm 2020) aim to create a new formalism for argumentation dialogues and reasoning, the epistemic graph, that could address these challenges. They describe an epistemic language that can be used to define logical formulae specifying belief in arguments and in the relations between them, given a directed argument graph such as the one in Figure 3. ...
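The idea of a directed argument graph with attack/support edges and degrees of belief can be illustrated with a minimal sketch. This is not the formalism of Hunter, Polberg, and Thimm (2020); the argument names, belief values, and the single toy constraint below are hypothetical, chosen only to show the shape of the data.

```python
# Toy argument graph: edges labelled "-" (attack) or "+" (support),
# plus a belief assignment mapping each argument to a degree in [0, 1].
# Argument names A, B, C and all values are illustrative assumptions.
edges = {
    ("A", "B"): "-",  # A attacks B
    ("C", "B"): "+",  # C supports B
}

belief = {"A": 0.8, "B": 0.3, "C": 0.6}


def satisfies_attack_constraint(edges, belief, threshold=0.5):
    """Toy epistemic-style constraint: if an attacker is believed
    (above threshold), the attacked argument should not also be
    believed (above threshold)."""
    for (src, dst), label in edges.items():
        if label == "-" and belief[src] > threshold and belief[dst] > threshold:
            return False
    return True


print(satisfies_attack_constraint(edges, belief))  # True for the values above
```

Here the constraint holds: A is believed (0.8) while B is not (0.3), so the attack edge is respected. Raising belief in B above 0.5 would violate it.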

Similar publications

Article
Full-text available
During dialogue, speakers attempt to adapt messages to their addressee appropriately by taking into consideration their common ground (i.e., all the information mutually known by the conversational partners) to ensure successful communication. Knowing and remembering what information is part of the common ground shared with a given partner and usin...

Citations

... "Knowledge is best shared through conversation" [24], and modeling this knowledge is important in developing highly efficient dialogue systems (e.g., [25]). In human dialogues extracted from co-design sessions, we expect to find knowledge expressed with varying degrees of context-dependency and in more than one modality (typically language and visuals). ...
Chapter
Design is a highly creative and challenging task, and research has already explored possible ways of using conversational agents (CAs) to support humans participating in co-design sessions. However, research reports that a) humans in these sessions expect more substantial support from CAs, and b) it is important to develop CAs that continually learn from communication, as humans do, and not simply from labeled datasets. Addressing these needs, this paper explores the specific question of how to extract useful knowledge from human dialogues observed during co-design sessions and make this knowledge available through a CA supporting humans in similar design activities. In our approach we explore the potential of the GPT-3 Large Language Model (LLM) to provide useful output extracted from unstructured data such as free dialogues. We provide evidence that, by implementing an appropriate "extraction task" on the LLM, it is possible to efficiently (and without a human in the loop) extract knowledge that can then be embedded in the cognitive base of a CA. We identify at least four major steps/assumptions in this process that need further research, namely: A1) knowledge modeling, A2) the extraction task, A3) LLM-based facilitation, and A4) benefit to humans. We demonstrate the extraction and facilitation steps using the GPT-3 model and identify and comment on various open research questions worth exploring.
Keywords: Conversational agent; Large language model (LLM); Design thinking
Chapter
The paper considers the problem of functional programming of intelligent systems, based on a definition of intelligence as the ability to model the environment around the system and to use this model to produce specified behaviour of the system in that environment. Such behaviour is treated as the result of consecutively solving the intermediate tasks into which the overall task, determined by the goal set for the system, is divided. In the variant under consideration, the environment model is built from knowledge collected by the system or obtained from its knowledge base. Individual pieces of knowledge have a multi-element representation, giving the user several tools for solving problems. The options proposed in this paper are: sets of properties; logical and ontological representations of individual components of the environment surrounding the system; and the associations relating these components. It should be noted that various logics, including non-classical ones, can be incorporated into the system for drawing its conclusions. In addition, the system can use various mathematical structures stored in its knowledge base when building a model. When developing an intelligent system, the methods and tools of functional design can be applied as a way to develop the specific system. In this work, the approach is illustrated with the development of an intelligent military robot that operates in a specific subject area and solves the problem of defending against and attacking a specific enemy.
Keywords: intelligence; modeling; intelligent system; knowledge representation; intelligent robots