Chapter

A Method for Efficient Argument-Based Inquiry


Abstract

In this paper we describe a method for efficient argument-based inquiry. In this method, an agent creates arguments for and against a particular topic by matching argumentation rules with observations gathered by querying the environment. To avoid making superfluous queries, the agent needs to determine if the acceptability status of the topic can change given more information. We define a notion of stability, where a structured argumentation setup is stable if no new arguments can be added, or if adding new arguments will not change the status of the topic. Because determining stability requires hypothesizing over all future argumentation setups, which is computationally very expensive, we define a less complex approximation algorithm and show that this is a sound approximation of stability. Finally, we show how stability (or our approximation of it) can be used in determining an optimal inquiry policy, and discuss how this policy can be used to, for example, determine a strategy in an argument-based inquiry dialogue.
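The notion of stability described in the abstract admits a naive brute-force reading: a setup is stable when no combination of still-unqueried observations can change the topic's acceptance. The sketch below is illustrative only; the `evaluate` function and the fraud-themed observation names are assumptions standing in for the paper's actual structured-argumentation evaluation.

```python
# Naive stability check: stable iff the topic's status is identical
# in every possible future setup. All names here are illustrative.
from itertools import chain, combinations

def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def stable(evaluate, observed, unqueried):
    """True iff evaluate gives one status across all future observation sets."""
    outcomes = {evaluate(observed | set(extra)) for extra in powerset(unqueried)}
    return len(outcomes) == 1

# Toy evaluation: the topic (fraud) is accepted iff a fraud report was
# observed and no exception ("goods_delivered") was.
def evaluate(obs):
    return "fraud_report" in obs and "goods_delivered" not in obs

print(stable(evaluate, {"fraud_report"}, {"goods_delivered"}))       # False
print(stable(evaluate, {"fraud_report", "goods_delivered"}, set()))  # True
```

Enumerating the powerset of unqueried observations is exponential, which mirrors the abstract's remark that hypothesizing over all future setups is computationally very expensive and motivates the polynomial approximation.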


... In a particular structured argumentation setting, the notion of stability has been defined recently (Testerink, Odekerken, and Bex 2019). ...
... Now we briefly introduce the argumentation setting from (Testerink, Odekerken, and Bex 2019), based on ASPIC + (Modgil and Prakken 2014). ...
... Our preliminary complexity results, as well as the translation of stability into reasoning with IAFs, pave the way for the development of efficient computational approaches for stability, benefiting from SAT-based techniques. Finally, we have shown that, besides the existing application of stability to Internet fraud inquiry (Testerink, Odekerken, and Bex 2019), this concept has other potential applications, like automated negotiation. ...
Preprint
The notion of stability in a structured argumentation setup characterizes situations where the acceptance status associated with a given literal will not be impacted by any future evolution of this setup. In this paper, we abstract away from the logical structure of arguments, and we transpose this notion of stability to the context of Dungean argumentation frameworks. In particular, we show how this problem can be translated into reasoning with Argument-Incomplete AFs. Then we provide preliminary complexity results for stability under four prominent semantics, in the case of both credulous and skeptical reasoning. Finally, we illustrate to what extent this notion can be useful with an application to argument-based negotiation.
... Stability was defined in the context of structured argumentation: it consists of determining whether a given literal will keep the same acceptance status with respect to the grounded semantics, however the argumentation setting evolves [74,81]. It has been applied in the design of an AI agent that helps the Dutch police in investigations of Internet trade fraud. ...
... Stability is still a recent research topic. As illustrated by [74,81], it can be used to improve the communication between a software agent and a human being, stopping the dialogue when the argumentation system is stable with respect to the issue under discussion. More generally, the notion of stability is promising in argument-based dialogue systems, where it can be used either to stop the dialogue when there is no need to continue (thus saving communication resources), or for adapting an agent's strategy. ...
Article
Full-text available
Abstract argumentation, as originally defined by Dung, is a model that allows the description of certain information about arguments and the relationships between them: in an abstract argumentation framework (AF), the agent knows for sure whether a given argument or attack exists. This means that the absence of an attack between two arguments can be interpreted as “we know that the first argument does not attack the second one”. But the question of uncertainty in abstract argumentation has received much attention in recent years. In this paper, we survey approaches that can express information like “There may (or may not) be an attack between these arguments”. We describe the main models that incorporate qualitative uncertainty (or ignorance) into abstract argumentation, as well as some applications of these models. We also highlight some open questions that deserve attention in the future.
... ADF is regarded as a powerful generalization of Dung's AFs. An ADF is a directed graph whose nodes represent arguments. Some strategies [60,61,62,63] have been proposed to efficiently construct dialectical trees, or structured argumentation in general, by pruning the search space and speeding up the inference process. This is realized by expanding the dialectical tree only until the evaluation status of the query is decided. ...
Article
Full-text available
A rule-based knowledge system consists of three main components: a set of rules, facts corresponding to the data of a case that are fed to the reasoner, and an inference engine. In general, facts are stored in (relational) databases that represent knowledge in a first-order formalism. However, legal knowledge uses defeasible deontic logic for knowledge representation due to particular features that cannot be supported by first-order logic. In this work, we present a unified framework that supports efficient legal reasoning. In the framework, a novel inference engine is proposed in which the Semantic Rule Index can identify candidate rules, with their corresponding semantic rules if any, and an inference controller guides the execution of queries and reasoning. It can eliminate rules that cannot be fired, avoiding unnecessary computations at early stages. The experiments demonstrated the effectiveness and efficiency of the proposed framework.
Article
Full-text available
We explore the computational complexity of justification, stability and relevance in incomplete argumentation frameworks (IAFs). IAFs are abstract argumentation frameworks that encode qualitative uncertainty by distinguishing between certain and uncertain arguments and attacks. These IAFs can be completed by deciding for each uncertain argument or attack whether it is present or absent. Such a completion is an abstract argumentation framework, for which it can be decided which arguments are acceptable under a given semantics. The justification status of an argument in a completion then expresses whether the argument is accepted (in), not accepted because it is attacked by an accepted argument (out) or neither (undec). For a given IAF and certain argument, the justification status of that argument need not be the same in all completions. This is the issue of stability, where an argument is stable if its justification status is the same in all completions. For arguments that are not stable in an IAF, the relevance problem is of interest: which uncertain arguments or attacks should be investigated for the argument to become stable? In this paper, we define justification, stability and relevance for IAFs and provide a complexity analysis for these problems under grounded, complete, preferred and stable semantics.
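Under the grounded semantics, the completion-based definitions above admit a direct brute-force reading: enumerate every completion of the IAF and compare the target argument's justification status across them. The following sketch, with a hypothetical two-argument IAF, is an illustration of the definitions, not the paper's algorithm.

```python
# Brute-force stability check for an incomplete argumentation framework
# (IAF) under grounded semantics. All example data is illustrative.
from itertools import chain, combinations

def grounded(args, attacks):
    """Grounded extension as the least fixed point of the characteristic
    function: keep arguments whose every attacker is already defeated."""
    ext = set()
    while True:
        defeated = {y for (x, y) in attacks if x in ext}
        nxt = {a for a in args
               if all(x in defeated for (x, y) in attacks if y == a)}
        if nxt == ext:
            return ext
        ext = nxt

def status(arg, args, attacks):
    """Justification status in one completion: in, out, or undec."""
    ext = grounded(args, attacks)
    if arg in ext:
        return "in"
    if any(x in ext for (x, y) in attacks if y == arg):
        return "out"
    return "undec"

def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def is_stable(target, certain_args, uncertain_args, certain_atts, uncertain_atts):
    """Stable iff the target has the same status in every completion."""
    statuses = set()
    for extra_args in powerset(uncertain_args):
        args = certain_args | set(extra_args)
        # Only attacks whose endpoints are present can occur in a completion.
        base = {a for a in certain_atts if a[0] in args and a[1] in args}
        candidates = [a for a in uncertain_atts if a[0] in args and a[1] in args]
        for extra_atts in powerset(candidates):
            statuses.add(status(target, args, base | set(extra_atts)))
    return len(statuses) == 1

# "a" is unattacked in every completion, so it is stably "in";
# "b" flips between "in" and "out" with the uncertain attack (a, b).
print(is_stable("a", {"a", "b"}, set(), set(), {("a", "b")}))  # True
print(is_stable("b", {"a", "b"}, set(), set(), {("a", "b")}))  # False
```

The number of completions is exponential in the number of uncertain elements, consistent with the hardness results the abstract reports.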
Article
In argument-based inquiry, agents jointly construct arguments supporting or attacking a topic claim to find out if the claim can be accepted given the agents’ knowledge bases. While such inquiry systems can be used for various forms of automated information intake, several efficiency issues have so far prevented widespread application. In this paper, we aim to tackle these efficiency issues by exploring the notion of stability: can additional information change the justification status of the claim under discussion? Detecting stability is not tractable for every input, since the problem is coNP-complete, yet in practical applications it is essential to guarantee efficient computation. This makes approximation a viable alternative. We present a sound approximation algorithm that recognises stability for many inputs in polynomial time and discuss several of its properties. In particular, we show that the algorithm is sound and identify constraints on the input under which it is complete. As a final contribution of this paper, we describe how the proposed algorithm is used in three different case studies at the Netherlands Police.
Article
This is a report on the Doctoral Consortium co-located with the 17th International Conference on Artificial Intelligence and Law in Montreal.
Article
Full-text available
In this paper, we propose a multi-agent framework to deal with situations involving uncertain or inconsistent information located in a distributed environment which cannot be combined into a single knowledge base. To this end, we introduce an inquiry dialogue approach based on a combination of possibilistic logic and a formal argumentation-based theory, where possibilistic logic is used to capture uncertain information, and the argumentation-based approach is used to deal with inconsistent knowledge in a distributed environment. We also modify the framework of earlier work, so that the system is not only easier to implement but also more suitable for educational purposes. The suggested approach is implemented in a clinical decision-support system in the domain of dementia diagnosis. The approach allows the physician to suggest a hypothetical diagnosis in a patient case, which is verified through the dialogue if sufficient patient information is present. If not, the user is informed about the missing information and potential inconsistencies in the information as a way to provide support for continuing medical education. The approach is presented, discussed, and applied to one scenario. The results contribute to the theory and application of inquiry dialogues in situations where the data are uncertain and inconsistent.
Conference Paper
Full-text available
Argumentation is a dynamic process. The enforcing problem in argumentation, i.e. the question of whether it is possible to modify a given argumentation framework (AF) in such a way that a desired set of arguments becomes an extension or a subset of an extension, was first studied, and positively answered under certain conditions, in earlier work. In this paper, we take up this research and study the more general problem of minimal change. That is, in brief: i) is it possible to enforce a desired set of arguments, and if so, ii) what is the minimal number of modifications (additions or removals of attacks) needed to reach such an enforcement, the so-called characteristic? We show for several Dung semantics that this problem can be decided by local criteria encoded by so-called value functions. Furthermore, we introduce the corresponding equivalence notions between two AFs which guarantee equal minimal effort to enforce certain subsets, namely minimal-E-equivalence and the more general minimal change equivalence. We present characterization theorems for several Dung semantics and, finally, show the relations to standard equivalence and the recently proposed strong equivalence for a whole range of semantics.
Conference Paper
Full-text available
One prominent way to deal with conflicting viewpoints among agents is to conduct an argumentative debate: by exchanging arguments, agents can seek to persuade each other. In this paper we investigate the problem, for an agent, of optimizing a sequence of moves to be put forward in a debate, against an opponent assumed to behave stochastically, and equipped with an unknown initial belief state. Despite the prohibitive number of states induced by a naive mapping to Markov models, we show that exploiting several features of such interaction settings allows for optimal resolution in practice, in particular: (1) as debates take place in a public space (or common ground), they can readily be modelled as Mixed Observability Markov Decision Processes, (2) as argumentation problems are highly structured, one can design optimization techniques to prune the initial instance. We report on the experimental evaluation of these techniques.
Article
Full-text available
Argumentation-based negotiation describes the process of decision-making in multi-agent systems through the exchange of arguments. If agents only have partial knowledge about the subject of a dialogue, strategic argumentation can be used to exploit weaknesses in the argumentation of other agents and thus to persuade them of a specific opinion and reach a certain outcome. This paper gives an overview of the field of strategic argumentation and surveys recent works and developments. We provide a general discussion of the problem of strategic argumentation in multi-agent settings and discuss approaches to strategic argumentation, in particular strategies based on opponent models.
Article
Full-text available
The majority of existing work on agent dialogues considers negotiation, persuasion or deliberation dialogues; we focus on inquiry dialogues, which allow agents to collaborate in order to find new knowledge. We present a general framework for representing dialogues and give the details necessary to generate two subtypes of inquiry dialogue that we define: argument inquiry dialogues allow two agents to share knowledge to jointly construct arguments; warrant inquiry dialogues allow two agents to share knowledge to jointly construct dialectical trees (essentially a tree with an argument at each node in which a child node is a counter argument to its parent). Existing inquiry dialogue systems only model dialogues, meaning they provide a protocol which dictates what the possible legal next moves are but not which of these moves to make. Our system not only includes a dialogue-game style protocol for each subtype of inquiry dialogue that we present, but also a strategy that selects exactly one of the legal moves to make. We propose a benchmark against which we compare our dialogues, being the arguments that can be constructed from the union of the agents’ beliefs, and use this to define soundness and completeness properties that we show hold for all inquiry dialogues generated by our system.
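The dialectical trees mentioned above (an argument at each node, each child a counterargument to its parent) can be evaluated by a simple recursive marking: a node is undefeated iff all of its children are defeated. The tree encoding below is an illustrative assumption, not the cited system's representation.

```python
# Evaluate a dialectical tree: a node is undefeated iff every one of its
# counterargument children is itself defeated. Leaves are undefeated.
def undefeated(tree):
    """tree = (argument_label, [child_subtrees])."""
    _, children = tree
    return all(not undefeated(child) for child in children)

# "a" is countered by "b", which is in turn countered by "c",
# so "c" reinstates "a": the root is undefeated.
tree = ("a", [("b", [("c", [])])])
print(undefeated(tree))  # True
```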
Article
The purpose of this paper is to study the fundamental mechanism humans use in argumentation, and to explore ways to implement this mechanism on computers. We do so by first developing a theory of argumentation whose central notion is the acceptability of arguments. Then we argue for the “correctness” or “appropriateness” of our theory with two strong arguments. The first shows that most of the major approaches to nonmonotonic reasoning in AI and logic programming are special forms of our theory of argumentation. The second illustrates how our theory can be used to investigate the logical structure of many practical problems. This argument is based on a result showing that our theory naturally captures the solutions of the theory of n-person games and of the well-known stable marriage problem. By showing that argumentation can be viewed as a special form of logic programming with negation as failure, we introduce a general logic-programming-based method for generating meta-interpreters for argumentation systems, a method very similar to the compiler-compiler idea in conventional programming.
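The most skeptical instance of the acceptability theory summarized above is the grounded semantics, computable as a least fixed point. A minimal sketch, over an illustrative AF that is not from the paper:

```python
# Grounded extension of a Dung-style AF as the least fixed point of the
# characteristic function F(S) = {a | every attacker of a is attacked by S}.
def grounded_extension(arguments, attacks):
    extension = set()
    while True:
        defeated = {y for (x, y) in attacks if x in extension}
        acceptable = {a for a in arguments
                      if all(b in defeated for (b, t) in attacks if t == a)}
        if acceptable == extension:
            return extension
        extension = acceptable

# "a" attacks "b", "b" attacks "c": "a" is unattacked and reinstates "c".
print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
```

Starting from the empty set, each iteration of the monotone characteristic function adds arguments, so the loop terminates; on a mutual attack cycle with no unattacked argument, the grounded extension is empty.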
Conference Paper
This paper studies argumentation-based dialogues between agents. It takes a previously defined system by which agents can trade arguments and examines in detail what locutions are passed between agents. This makes it possible to identify finer-grained protocols than has previously been possible, exposing the relationships between different kinds of dialogue, and giving a deeper understanding of how such dialogues could be automated.
Article
An abstract framework for structured arguments is presented, which instantiates Dung's (‘On the Acceptability of Arguments and its Fundamental Role in Nonmonotonic Reasoning, Logic Programming, and n-Person Games’, Artificial Intelligence, 77, 321–357) abstract argumentation frameworks. Arguments are defined as inference trees formed by applying two kinds of inference rules: strict and defeasible rules. This naturally leads to three ways of attacking an argument: attacking a premise, attacking a conclusion and attacking an inference. To resolve such attacks, preferences may be used, which leads to three corresponding kinds of defeat: undermining, rebutting and undercutting defeats. The nature of the inference rules, the structure of the logical language on which they operate and the origin of the preferences are, apart from some basic assumptions, left unspecified. The resulting framework integrates work of Pollock, Vreeswijk and others on the structure of arguments and the nature of defeat and extends it in several respects. Various rationality postulates are proved to be satisfied by the framework, and several existing approaches are proved to be a special case of the framework, including assumption-based argumentation and DefLog.
Article
This paper describes an approach to legal logic based on the formal analysis of argumentation schemes. Argumentation schemes, a notion borrowed from the field of argumentation theory, are a kind of generalized inference rule, in the sense that they express that, given certain premises, a particular conclusion can be drawn. However, argumentation schemes need not concern strict, abstract, necessarily valid patterns of reasoning, but can be defeasible, concrete and contingently valid, i.e., valid in certain contexts or under certain circumstances. A method is presented to analyze argumentation schemes, and it is shown how argumentation schemes can be embedded in a formal model of dialectical argumentation.