PADUA Protocol: Strategies and Tactics
Wardeh, M., Bench-Capon, T., and Coenen, F.
Department of Computer Science,
The University of Liverpool,
Liverpool L69 3BX, UK
{maya,tbc,frans}@csc.liv.ac.uk
Abstract. In this paper we describe an approach to classifying objects
in a domain where classifications are uncertain using a novel combina-
tion of argumentation and data mining. Classification is the topic of a
dialogue game between two agents, based on an argument scheme and
critical questions designed for use by agents whose knowledge of the do-
main comes from data mining. Each agent has its own set of examples
which it can mine to find arguments based on association rules for and
against a classification of a new instance. These arguments are exchanged
in order to classify the instance. We describe the dialogue game, and in
particular discuss the strategic considerations which agents can use to
select their moves. Different strategies give rise to games with different
characteristics, some having the flavour of persuasion dialogues and others
deliberation dialogues.
1 Introduction
In this paper we describe an approach to classifying objects in a domain not
governed by strict rules which makes use of a novel combination of argumentation
and data mining techniques. Our scenario is that classification is performed by
two agents, each of which has their own set of records of past examples recording
the values of a number of features presumed relevant to the classification and
the correct classification. One of the agents will propose a classification, and
a set of justifying reasons for the classification. This proposal is based on the
application of an association rule mined from the agent’s set of examples to the
case under consideration. The classification is the consequent of the rule, and
the antecedent gives the justifying reasons. The other agent will then use its set
of examples to play “devil’s advocate” and attempt to overturn the proposed
classification. We call our system PADUA (Protocol for Argumentation Dialogue
Using Association Rules).
This interaction can be viewed as a form of dialogue game (e.g. [5]), based
on the exchange of arguments. Dialogue games come in a variety of flavours
[11], including Persuasion, where each participant tries to persuade the other
participant of its own thesis, by offering arguments that support this thesis, and
Deliberation, in which the participants exchange arguments to reach an agreed
decision, with neither of them committed to a particular position at the outset.
Our interaction has aspects of both, with the balance different according to
the dialogue strategies employed. Formal Dialogue Games [7] are interactions
between two or more players, where each player moves by making utterances,
according to a defined set of rules known as a Dialogue Game Protocol, which
gives the set of moves available in response to a previous move; choosing the best
move among these is the Strategy Problem.
As mentioned above, the key idea of PADUA is to form arguments directly
from some set of records providing examples relating to a particular domain,
avoiding any need for expert analysis of the data, or knowledge representation.
The repository of background knowledge used by each participant can be con-
sidered to be a binary valued data set where each record represents a previous
case and each column an attribute taken from the global set of attributes de-
scribed by the background knowledge. Given this set up we can apply Associ-
ation Rule Mining (ARM) [1] techniques to the data set to discover relations
between attributes, expressed in the form of Association Rules (ARs). In order
to use this information, we follow the notion of presumptive argumentation as
the instantiation of argument schemes subject to challenge through characteris-
tic critical questions introduced by Walton [12]. PADUA makes use of a custom
argument scheme and associated critical questions. In this paper we shall discuss
the strategy problem in PADUA, and the consequences of the strategy used for
the dialogue type.
The rest of this paper is organized as follows: Section 2 describes the ar-
gument scheme and the basic structure of the PADUA protocol. Section 3 gives
some necessary background on strategies in dialogue systems. Section 4 discusses
in detail the suggested strategy heuristics to be applied in the PADUA protocol. Sec-
tion 5 gives a detailed example of the suggested strategy, and some discussion
of the relation between these strategies and dialogue types.
2 PADUA Protocol
The model of argumentation we will follow is that of [12] in which a prima facie
justification is given through the instantiation of an argument scheme. This
justification is then subject to a critique through a number of critical questions
which may cause the presumptive conclusion to be withdrawn.
The basic argument scheme is one we have devised for the purpose, Argument
from Proposed Rule. The Premises are:
1. Data Premise: There is a set of examples D pertaining to the domain.
2. Rule Premise: From D a Rule R can be mined with a level of confidence
greater than some threshold T. R has antecedents A and a conclusion which
includes membership of class C.
3. Example Premise: Example E satisfies A.
4. Conclusion: E is a C because A.
This can be subject to a number of critical questions:
Can the case be distinguished from the proposed rule? If D supports another
Rule R2 and the antecedents of R2 subsume those of R, and are satisfied by
E, and the confidence of R2 is below T, this suggests that these additional
features may indicate we are dealing with some kind of exception to R.
Does the rule have unwanted consequences? If the conclusion of R includes
some fact F not satisfied by E, this suggests that R is not applicable to E.
Is there a better rule with the opposite conclusion? If there is another rule
R3 which can be mined from D and the antecedents of R3 are satisfied by
E, and the confidence of R3 is greater than that of R, this suggests that R
is not applicable to E.
Can the rule be strengthened by adding additional antecedents? This does not
challenge the classification, but rather the justification for the classification.
If there is another Rule R4 which can be mined from D and the antecedents
of R4 subsume those of R, and are satisfied by E, and the confidence of R4
is greater than that of R, this suggests that the additional features should
be included in the justification of the classification.
Can the rule be improved by withdrawing consequences? This challenges nei-
ther the classification, nor the justification, but the rule proposed. If there
is another Rule R5 which can be mined from D and the conclusions of R
subsume those of R5 and include a feature not satisfied by E, provided the
confidence of R5 remains above the threshold, this suggests that the addi-
tional features should be excluded from the rule justifying the classification.
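Since each critical question compares the proposed rule with another rule minable
from the same data, the questions can be read as simple predicates over
<premises, conclusions, confidence> triples. The following sketch (in Python) is
purely illustrative: the Rule type and all function names are ours, not part of
any PADUA implementation, and attributes are assumed to be hashable identifiers.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    premises: frozenset     # antecedent attributes
    conclusions: frozenset  # consequent attributes, including a class attribute
    confidence: float       # confidence of the mined association rule

def can_distinguish(r, r2, case, t):
    # CQ1: r2's antecedents strictly subsume r's, all hold in the case,
    # and r2's confidence falls below the threshold t.
    return r2.premises > r.premises and r2.premises <= case and r2.confidence < t

def has_unwanted_consequences(r, case, class_attrs):
    # CQ2: r concludes some non-class fact not satisfied by the example.
    return not (r.conclusions - class_attrs) <= case

def better_counter_rule(r, r3, case):
    # CQ3: r3 (mined for the opposite conclusion) matches the case and is
    # more confident than r.
    return r3.premises <= case and r3.confidence > r.confidence

def strengthens(r, r4, case):
    # CQ4: adding antecedents that hold in the case raises the confidence.
    return (r4.premises > r.premises and r4.premises <= case
            and r4.confidence > r.confidence)

def improves_by_withdrawal(r, r5, case, t):
    # CQ5: r5 drops a consequence of r not satisfied by the case, while
    # keeping its confidence above the threshold t.
    return (r5.conclusions < r.conclusions
            and not (r.conclusions - r5.conclusions) <= case
            and r5.confidence >= t)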
This argument scheme and these critical questions form the basis of the PADUA
dialogue game.
2.1 Dialogue Scenario
The proposed dialogue game consists of two players (the proponent and the
opponent) which have conflicting points of view regarding some case (C). The
proponent claims that the case falls under some class (c1), while the opponent
opposes the proponent's claim and tries to prove that the case actually falls under
some other class (c2 = ¬c1). Each player tries to establish its point of view by
means of arguments based on association rules, which are mined from the player's
own database, using an association rule mining technique as described in [4].
The proponent starts the dialogue by proposing some AR (R1: P → Q) to
instantiate the argument scheme. The premises (P) match the case, and the
conclusion (Q) justifies the agent’s position. Then the opponent has to play a
legal move that would undermine the initial rule proposed by the proponent:
these moves are based on the five critical questions described above. As can
be seen from the questions, four of these moves involve some new rule. This
is mined from the opponent’s background database, and represents an attack
on the original rule. The turn then goes back to the proponent which has to
reply appropriately to the last move. The game continues until one player has
no adequate reply. Then this player loses the game, and the other player wins.
2.2 PADUA Framework
The formal framework we suggest, the Argumentation Dialogue Framework (ADF),
is defined as follows:
ADF = <P, Attr, C, M, R, Conf, playedMoves, play>    (1)
where:
– P denotes the players of the dialogue game.
– Attr denotes the whole set of attributes in the entire framework.
– C denotes the case argued about.
– M denotes the set of possible (legal) moves.
– R denotes the set of rules that govern the game.
– Conf denotes the confidence threshold; all the association rules proposed
within this framework must satisfy this threshold.
– playedMoves denotes the set of moves played in the dialogue so far; this set
of played moves represents the commitment store of the dialogue system
under discussion.
– play is a function that maps players to some legal move.
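Read as a data structure, the tuple in (1) is a plain record. The following Python
sketch is one possible rendering (ours, with illustrative field names; the play
function is left abstract):

from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class ADF:
    P: tuple                  # the two players (proponent, opponent)
    Attr: frozenset           # the global attribute set
    C: frozenset              # the case argued about
    M: tuple                  # the legal move types (see Section 2.4)
    R: object                 # the rules governing the game
    Conf: float               # confidence threshold for any proposed AR
    playedMoves: list = field(default_factory=list)   # the commitment store
    play: Optional[Callable] = None   # maps a player to its next legal move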
2.3 PADUA Players
Each player in a PADUA game (p ∈ P = {Pro, Opp}) is defined as a dialogical
agent [3]:

∀p ∈ P: p = <name_p, Attr_p, G_p, Σ_p, >>_p>    (2)

where:
– name_p is the player (agent) name; here ∀p ∈ P: name(p) ∈ {pro, opp}.
– Attr_p is the set of attributes this player can understand.
– G_p is the set of goals this player tries to achieve; here G_p is defined as a
subset of the attribute set Attr_p, i.e. G_p is the set of attributes (classes)
this player tries to prove true.
– Σ_p is the set of ARs the player has mined from its background database;
hence Σ_p is defined as follows: ∀p ∈ P: Σ_p = {r_1 . . . r_m}, where
r_i = <Prem, Conc, Conf> is an association rule and can be read as
Prem → Conc with a confidence = Conf. The elements of Prem and Conc are
defined as tuples <attribute, value>, where attribute ∈ Attr_p, and value is
the list of values assigned to this attribute in the given rule.
– >>_p represents the preference order over Σ_p; a definition of this preference
relationship is suggested as >>_p: Σ_p × Σ_p → {true, false}, but the exact
implementation of this relation may differ from player to player.
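A player is likewise a record together with a preference relation; a minimal
sketch under the same caveats (all names ours):

from dataclasses import dataclass
from typing import Callable, List, Tuple

# An association rule <Prem, Conc, Conf>, read as Prem -> Conc with
# confidence Conf; premises and conclusions are sets of (attribute, value) pairs.
Rule = Tuple[frozenset, frozenset, float]

@dataclass
class Player:
    name: str                              # name_p: 'pro' or 'opp'
    attrs: frozenset                       # Attr_p: attributes the player understands
    goals: frozenset                       # G_p: class attributes to prove true
    sigma: List[Rule]                      # Sigma_p: ARs mined from its database
    prefers: Callable[[Rule, Rule], bool]  # >>_p: preference relation over Sigma_p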
2.4 PADUA Legal Moves
The set of moves (M) consists of 6 possible moves, one based on instantiating the
argument scheme and five based on the critical questions against an instantiation.
They are identified as follows:
1. Propose Rule: p plays this move to propose a new rule with a confidence
higher than some confidence threshold.
2. Distinguish: this move is played to undermine a previously played move, as it
adds some new premise(s) to this rule, such that the confidence of the new
rule is lower than the confidence of the original rule (and/or lower than some
acceptance threshold).
3. Unwanted Consequences: here p suggests that certain consequences (conclu-
sions) of some rule do not match the case under discussion.
4. Counter Rule: p plays this move to propose a new rule that contradicts the
previous rule. The confidence of the proposed counter rule should be higher
than the confidence of the previous rule (and/or than the threshold Conf).
5. Increase Confidence: p plays this move to add some new premises to a
previous rule so that the overall confidence rises to some acceptable level.
6. Withdraw Unwanted Consequences: p plays this move to exclude the
unwanted consequences of the rule it previously proposed, while maintaining
a certain level of confidence.
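These six moves and their legal follow-ups (tabulated in Table 1, Section 4) can
be captured as a small transition map; a sketch with the moves numbered as in
the list above:

from enum import IntEnum

class Move(IntEnum):
    PROPOSE_RULE = 1
    DISTINGUISH = 2
    UNWANTED_CONSEQUENCES = 3
    COUNTER_RULE = 4
    INCREASE_CONFIDENCE = 5
    WITHDRAW_UNWANTED = 6

# Legal follow-ups to each move (cf. Table 1); a counter rule switches the
# rule under discussion, opening a nested dialogue.
NEXT_MOVES = {
    Move.PROPOSE_RULE:          [Move.DISTINGUISH, Move.UNWANTED_CONSEQUENCES, Move.COUNTER_RULE],
    Move.DISTINGUISH:           [Move.PROPOSE_RULE, Move.UNWANTED_CONSEQUENCES, Move.INCREASE_CONFIDENCE],
    Move.UNWANTED_CONSEQUENCES: [Move.PROPOSE_RULE, Move.WITHDRAW_UNWANTED],
    Move.COUNTER_RULE:          [Move.PROPOSE_RULE, Move.DISTINGUISH, Move.UNWANTED_CONSEQUENCES],
    Move.INCREASE_CONFIDENCE:   [Move.DISTINGUISH, Move.UNWANTED_CONSEQUENCES, Move.COUNTER_RULE],
    Move.WITHDRAW_UNWANTED:     [Move.DISTINGUISH, Move.UNWANTED_CONSEQUENCES, Move.COUNTER_RULE],
}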
This defines the formal dialogue game. We now consider the strategies that
might be used when playing the game. First we consider some related previous
work on developing strategies for formal dialogue games.
3 Dialogue Strategies: Background
This section discusses some previous argumentation systems that have consid-
ered argument selection strategies:
Moore, in his work with the DC dialectical system [8], concluded from his
studies that an agent’s argumentation strategy is best analyzed at three levels:
1. Maintaining the focus of the dispute.
2. Building its point of view or attacking the opponent’s one.
3. Selecting an argument that fulfils the objectives set at the previous two
levels.
The first two levels refer to the agent’s strategy, i.e. the high level aims of
the argumentation, while the third level refers to the tactics, i.e. the means to
achieve the aims fixed at the strategic levels. Moore’s requirements form the
basis of most other research into agent argumentation strategies.
In [2] a computational system was suggested that captures some of the heuris-
tics for argumentation suggested by Moore. This system requires a preference
ordering over all the possible arguments, and a level of prudence to be assigned
to each agent. The strength of an argument is defined according to the complex-
ity of the chain of arguments required to defend this argument from the other
arguments that attack it. An agent can have either a “build” or a “destroy”
strategy. When using the build strategy (b-strategy), an agent tries to assert
arguments the strength of which satisfies its prudence level. If the b-strategy
fails, it switches to the destroy strategy (d-strategy), where it tries to use any
possible way to attack the opponent’s arguments. The basic drawback of this
approach is that computational limits may affect the agent’s choice.
In [6] a three layer system was proposed to model argumentation strategies:
the first layer consists of the “default” rules, which have the form (utterance
- condition); the higher two layers provide preference orderings over the rules.
The system is shown to be deterministic, i.e. a particular utterance is selected
in a given situation every time, but this system still requires hand crafting of
the rules.
In [10], a decision heuristic was proposed to allow the agents to decide which
argument to advance. The idea behind this work is that an agent should, while
attempting to win a dispute, reveal as little of what it knows as possible, as
revealing too much information in a current dialogue might damage an agent’s
chances of winning a future argument. A new argumentation framework was
developed to represent the suggested heuristics and arguments. The main short-
coming of this approach is the exponential complexity of the algorithms used.
4 Strategies and Tactics for PADUA
In PADUA, a player p ∈ P must select the kind of move to be played, and also
the particular content of this move depending on: the thesis this player aims to
prove true (or false), the case under discussion, the player’s set of association
rules, the amount of information this agent is willing to expose in its move, and
the player’s current state in the dialogue. All these factors must be considered in
the strategy the player adopts and the tactics applied to implement this strategy.
Table 1 lists the possible next moves after each of the legal moves in the PADUA
protocol. A player must select a single move to play in its turn; moreover every
possible next move is associated with a set of possible rules: this set contains the
rules that match the selection criteria of the move, i.e. their confidence, premises
and conclusion match this move. Except for unwanted consequences, the moves
introduce a new rule. Proposing a counter rule leads to a switch in the rule
being considered, so entering a nested dialogue.

Move  Next Move  New Rule
1     2,3,4      yes
2     1,3,5      yes
3     1,6        no
4     1,2,3      nested dialogue
5     2,3,4      yes
6     2,3,4      yes
Table 1. Possible Moves

The notion of move (act) and content selection is argued to be best captured
at different levels, as suggested by Moore [8]. In [2] the first level of Moore's
layered strategy was replaced with
different profiles for the agents involved in the interaction. We also adopt this
approach. Here we also add another level to Moore’s structure (level 0) which
distinguishes PADUA games into two basic classes. In one, players attempt to
win using as few steps as possible, i.e. the move's type and content are chosen
so that the played move gives the opponent the least freedom to plan its next
move. In the other, games are played to fully explore the characteristics of
the underlying argumentation system and dialogue game, so here the move's
type and content are chosen so that the played move will restrict the opponent’s
freedom to plan its next move to the least extent possible. The layered strategy
system we adopt is defined as follows:
Level 0: Define the game mode: i.e. Win mode or Dialogue mode.
Level 1: Define the players (agents) profiles.
Level 2: Choose to build or destroy: where in a Build mode the player tries
to build its own thesis, while in a Destroy mode the player tries to destroy
the opponent’s thesis.
Level 3: Choose some appropriate argumentative content: depending on the
tactics and heuristics suggested.
4.1 Agent Profile
In [3], which used arguments based on standard if-then rules, five classes of
agent profiles were defined as follows:
1. Agreeable Agent: Accept whenever possible.
2. Disagreeable Agent: Only accept when no reason not to.
3. Open-minded Agent: Only challenge when necessary.
4. Argumentative Agent: Challenge whenever possible.
5. Elephant Child Agent: Question whenever possible.
In this paper we consider only the first two profiles (i.e. agreeable and dis-
agreeable agents), as these attitudes are the most appropriate for the particular
argument scheme we are using.
4.2 PADUA Strategy
The function play is defined as follows:

play: P × M_poss × R_poss × playedMoves × S → M    (3)

where:
– P is the set of game players; playedMoves is the set of moves played in the
dialogue so far; and M is the set of possible (legal) moves.
– M_poss ⊆ M is the set of the possible moves this player can play (as defined
in Table 1).
– R_poss ∈ 2^Σ_p is the set of legal rules that this agent can put forward in
the dialogue; this set contains the rules that match each of the possible moves.
– S is the Strategy Matrix, and has the form S = [gm, profile_p, sm], where
gm ∈ Gm is the game mode, with Gm = {win, dialogue}; profile_p ∈ Profile_P
is the player profile, with Profile_P = {agreeable, disagreeable}; and sm ∈ Sm
is the strategy mode, with Sm = {build, destroy}.
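One operational reading of (3): iterate over the legal moves in the order fixed by
the strategy mode, and return the first move for which an acceptable rule exists.
This sketch is ours; the two helpers are placeholders for the tactics of Tables 2
and 3 below.

from typing import List, Optional

def order_moves(moves: List[int], strategy_mode: str) -> List[int]:
    # Placeholder: impose the Table 2 preference order for the strategy mode.
    return moves

def best_rule(rules: List[tuple], move: int, game_mode: str, profile: str):
    # Placeholder: apply the content tactics of Table 3; here, any rule at all.
    return rules[0] if rules else None

def play(player: str,
         possible_moves: List[int],    # M_poss, from the transition table
         possible_rules: List[tuple],  # R_poss, rules matching the possible moves
         played_moves: List[int],      # the commitment store so far
         strategy: tuple               # S = (game_mode, profile_p, strategy_mode)
         ) -> Optional[int]:
    game_mode, profile, strategy_mode = strategy
    for move in order_moves(possible_moves, strategy_mode):
        if move == 3:                  # unwanted consequences needs no new rule
            return move
        if best_rule(possible_rules, move, game_mode, profile) is not None:
            return move
    return None                        # no adequate reply: the player loses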
4.3 PADUA Tactics
A set of tactics is suggested to fulfil the strategic considerations discussed
above; these concern the best move to play and, where applicable, the content
of the chosen move, i.e. the best rule to be put forward in the dialogue.
Legal Moves Ordering Legal moves’ ordering defines the order in which legal
(possible) moves are considered when selecting the next move. All games begin
with Propose Rule: there are three possible responses to this, and these in turn
have possible responses. The preference for these moves depends on whether the
agent is following a build or a destroy strategy. In a destroy strategy the agent
will wish to discredit the rule proposed by its opponent, and hence will prefer
moves such as unwanted consequences and distinguish. In contrast when using a
build strategy an agent will prefer to propose its own rule, and will only attempt
to discredit its opponent's rule if it has no better rule of its own to put forward.
The preferred order for the two strategies is shown in Table 2.
Whether players are agreeable or disagreeable will have an influence on
whether the agent wishes to dispute the rule put forward by its opponent, and
the nature of the challenge if one is made.
Last Move  Build Mode  Destroy Mode
1          4,3,2       3,2,4
2          1,3,5       3,5,1
3          1,6         6,1
4          1,3,2       3,2,1
5          1,3,2       3,2,1
6          1,3,2       3,2,1
Table 2. Possible Moves Preferences
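The Table 2 orderings can fill in the order_moves placeholder of the earlier
sketch directly; moves are numbered as in Section 2.4, keyed here by the last
move played:

BUILD_PREFS   = {1: [4, 3, 2], 2: [1, 3, 5], 3: [1, 6],
                 4: [1, 3, 2], 5: [1, 3, 2], 6: [1, 3, 2]}
DESTROY_PREFS = {1: [3, 2, 4], 2: [3, 5, 1], 3: [6, 1],
                 4: [3, 2, 1], 5: [3, 2, 1], 6: [3, 2, 1]}

def order_moves(last_move: int, strategy_mode: str) -> list:
    # Preferred response order for 'build' versus 'destroy' (Table 2).
    prefs = BUILD_PREFS if strategy_mode == "build" else DESTROY_PREFS
    return prefs[last_move]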
Agreeable Players An agreeable player ap ∈ P accepts a played rule without
challenging it if:
1. An exact match of this rule can be found in its own set of association rules
(Σ_ap) with a higher or similar confidence.
2. It can find a partial match of this rule in its own set of association rules
(Σ_ap); a rule r_pm ∈ Σ_ap is considered to be a partial match of another
rule r if it has the same conclusion (consequences) as r, its set of premises
is a superset of rule r's premises and all these premises match the case, and
finally it has a higher or similar confidence.
Otherwise the agreeable agent challenges the played move, depending on whether
it wishes to build or destroy, using the legal move preferences shown in Table 2
and selecting a rule using the following content tactics:
1. Confidence: the confidence of moves played by an agreeable agent should be
considerably lower/higher than that of the attacked rule; otherwise the agent
agrees with its opponent.
2. Consequences: consequences always contain a class attribute, make minimal
changes to the previous move's consequences, and use as few attributes as
possible.
3. Premises: premises are always true of the case, make minimal changes to
the previous move's premises, and use as few attributes as possible.
Disagreeable Players A disagreeable agent accepts a played rule if and only if
all possible attacks fail, and so does not even consider whether its data supports
the rule; the choice of the attack (i.e. legal move) to be played depends on the
preferences shown in Table 2, and the choice of rule is in accordance with the
following content tactics:
1. Confidence: the confidence of moves played can be:
(a) considerably different from the last move, or
(b) slightly different from the last move.
The choice of confidence depends on the general mode of the game, i.e.
whether it is in win mode or dialogue mode.
2. Consequences: consequences always contain a class attribute, and use as few
attributes as possible.
3. Premises: premises are always true of the case, and use as few attributes as
possible.
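The common core of both profiles' content tactics (premises true of the case, as
few attributes as possible, and a confidence gap whose size depends on the game
mode) can be sketched as a single selection heuristic, ours and deliberately
simplified:

def select_rule(candidates, case, game_mode, last_conf):
    # candidates: (premises, conclusions, confidence) triples whose conclusions
    # contain a class attribute; premises must be true of the case.
    valid = [r for r in candidates if r[0] <= case]
    if not valid:
        return None
    if game_mode == "win":
        gap = lambda r: -abs(r[2] - last_conf)   # prefer a large confidence gap
    else:
        gap = lambda r: abs(r[2] - last_conf)    # prefer a small gap (dialogue mode)
    # Fewest premise attributes first, then the mode-appropriate confidence gap.
    return min(valid, key=lambda r: (len(r[0]), gap(r)))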
Best Move Table 3 brings these considerations together and shows the best
move relative to the agent type and the game mode, for each of the move types.
For example, in win mode an agent will want to propose a rule with high con-
fidence, as one which the opponent is likely to be forced to accept, whereas in
game mode, where a more thorough exploration of the search space is sought,
any acceptable rule can be used to stimulate discussion.
5 Example
Our example domain concerns the voting records of US members of Congress,
on the basis of which we wish to classify them according to party affiliation.
Although there will be typical Democrat and Republican views on various issues,
people may vote against the party line for personal or regional reasons. Some
members of Congress may be more maverick than others. Thus, while there is
no defining issue which will allow us to classify with certainty, we can argue for a
classification on the basis of voting records. The data set we use is taken from [9],
and it represents the U.S. House of Representatives members of Congress (in the
1984 US congressional elections) on the 16 key votes identified by the Congressional
Quarterly Almanac (CQA).
The congressional voting records database contains 435 instances, among which
(45.2%) are Democrats and (54.8%) are Republicans. The dataset's original 17
binary attributes (including the class attribute) were normalized to 34 unique
numerical attributes, each corresponding to a certain attribute value. This dataset
was horizontally divided into two equal-size datasets, each of which was assigned
to a player in the PADUA framework. Rules were mined from these datasets using
30% support and 70% confidence thresholds.
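For concreteness, this mining step can be reproduced with an off-the-shelf ARM
implementation. The sketch below uses the mlxtend library as a stand-in for the
tree-based algorithm of [4] actually used by PADUA; the file name and the class
attribute labels are hypothetical.

import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# One player's half of the data: a one-hot (binary) table over the 34
# normalized attributes, one row per member of Congress.
votes = pd.read_csv("player_half.csv").astype(bool)   # hypothetical file

frequent = apriori(votes, min_support=0.30, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.70)

# Keep only rules whose consequent includes a class attribute.
class_attrs = {"democrat", "republican"}
rules = rules[rules["consequents"].apply(lambda c: bool(c & class_attrs))]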
Propose rule:
  Agreeable, win mode: high confidence; fewest attributes.
  Agreeable, game mode: average confidence; average attributes.
  Disagreeable, win mode: high confidence; fewest attributes.
  Disagreeable, game mode: average confidence; fewest attributes.
Distinguish:
  Agreeable, win mode: lowest confidence; fewest attributes.
  Agreeable, game mode: average drop; fewest attributes.
  Disagreeable, win mode: lowest confidence; fewest attributes.
  Disagreeable, game mode: average drop; fewest attributes.
Unwanted consequences:
  Agreeable, win mode: if some consequences are not in, or contradict, the case.
  Agreeable, game mode: only if some consequences contradict the case.
  Disagreeable, both modes: if some consequences are not in, or contradict, the case.
Counter rule:
  Agreeable, win mode: average confidence; fewest attributes.
  Agreeable, game mode: high confidence; fewest attributes.
  Disagreeable, win mode: high confidence; average attributes.
  Disagreeable, game mode: average confidence; fewest attributes.
Increase confidence:
  Agreeable, win mode: highest confidence; fewest attributes.
  Agreeable, game mode: average increase; fewest attributes.
  Disagreeable, win mode: highest confidence; fewest attributes.
  Disagreeable, game mode: average increase; fewest attributes.
Withdraw unwanted consequences:
  All profiles and modes: the selection criteria for the preferred reply to an
  unwanted consequences attack are the same as those of the last move that
  led to the unwanted consequences.
Table 3. Best move content tactics
We have experimented by running several PADUA dialogue games, starting
from the same case. The difference between the games lies in the underlying
strategy options of each agent participating in these games.
Table 4 shows the attributes of the case used in the example.
Case: [5, 7, 13, 15, 17, 21, 24, 26, 29]
5: adoption-of-the-budget-resolution=y.    7: physician-fee-freeze=n.
13: anti-satellite-test-ban=y.             15: aid-to-nicaraguan-contras=y.
17: mx-missile=y.                          21: synfuels-corporation-cutback=y.
24: education-spending=n.                  26: superfund-right-to-sue=n.
29: duty-free-exports=y.
Table 4. Example Case
As an illustration, we will describe the run with two disagreeable agents
playing in win mode, the proponent (Prop) using a build strategy and the
opponent (Opp) a destroy strategy. Prop begins by proposing a rule to justify
saying that the member of Congress concerned is a Democrat: R1: Democrat
because education-spending=n and duty-free-exports=y with a (97.66%)
confidence. Opp can reply by distinguishing this rule, since adding the premise
aid-to-nicaraguan-contras=y reduces confidence to 80.43%. Prop now proposes
a new rule: R2: Democrat because mx-missile=y and duty-free-exports=y
with a (98.63%) confidence. This rule cannot be distinguished or countered since
there is no better rule for Republican, and so Prop wins.
Note how Opp, being in destroy mode, first uses the distinguish move and
only proposes a rule if this move cannot be played. In build mode Opp plays a rule
of its own. Note also that the distinction made greatly reduces the confidence,
whereas a distinction with a less drastic effect could have been played in game
mode. When Opp is an agreeable agent it would simply accept the proposed
rule, as it too can mine the rule with sufficient confidence. Where Prop is in
destroy mode, it responds to the distinction with an increase confidence move,
forcing Opp to propose a rule of its own.
As would be expected, in game mode, longer dialogues are produced. Where
the agents are both agreeable, game mode leads to a series of rule proposals
until a mutually acceptable one is found. Where Opp is in destroy mode, Prop's
proposals will be met by a distinction, and where Opp is in build mode it will
produce counter-proposals as long as it can. Where Prop is in destroy mode it
will make use of the unwanted consequences move to refute Opp’s distinction
if possible. Where both agents are disagreeable and in win mode, because the
game does not terminate on the proposal of an acceptable rule, this last move,
refuting a distinction by pointing to unwanted consequences which cannot be
met with a withdraw consequences move, is what ends the game.
6 Discussion
PADUA provides a way of determining the classification of cases on the basis of
distributed collections of examples related to the domain without the need to
share information, and without the need for analysis and representation of the
examples. The argumentation leads to a classification which, while uncertain, is
mutually acceptable and consistent with the different collections of examples.
Different strategies for move selection give rise to dialogues with different
characteristics. Using disagreeable agents gives rise to a persuasion dialogue,
since the opponent will do anything possible to avoid accepting the proposal.
Win mode will lead to the swiftest resolution: game mode between disagreeable
agents will lead to a lengthier exchange, and concession may be forced without
the best argument being produced. A dialogue between two agreeable agents has
the characteristics of a deliberation dialogue in that here the opponent is happy
to concede once an acceptable proposal has been made. Win mode may be a very
short exchange, since this simply verifies that Prop’s best rule is also acceptable
with respect to the second agent’s data set. When game mode is used, the game
has the flavour of brainstorming in that more ideas, even some which are less
promising, will be explored.
Further work will empirically explore our system to examine the efficiency
and quality of classifications and the effect of giving the individual data sets
used by the agents particular characteristics. We also intend to explore domains
in which classification is into an enumerated set of options rather than binary,
and develop an extended version of the game with more than two participants.
References
1. Agrawal, R., Imielinski, T., and Swami, A. N. (1993). Mining association rules
between sets of items in large databases. In Proc. of ACM SIGMOD Int. Conf. on
Management of Data. Washington, D.C., May 1993. pp 207-216.
2. Amgoud, L. and Maudet, N. (2002) Strategical considerations for argumentative
agents (preliminary report). In Proc. of 9th Int. Workshop on Non-Monotonic
Reasoning (NMR). Toulouse, France, April 2002. Special session on Argument,
Dialogue, Decision. pp. 409-417.
3. Amgoud, L., Parsons, S. (2001) Agent dialogues with conflicting preferences. In
Proc. of 8th Int. Workshop on Agent Theories, Architectures and Languages. Seat-
tle, Washington, August 2001. pp 1-15.
4. Coenen, F.P., Leng, P., and Goulbourne, G. (2004). Tree Structures for Mining
Association Rules. Journal of Data Mining and Knowledge Discovery, Vol 8, No 1,
pp 25-51.
5. Hamblin, C. L.(1970) Fallacies. Methuen, 1970.
6. Kakas, A.C., Maudet, N. and Moraitis, P. (2004). Layered strategies and protocols
for argumentation-based agent interaction. In Proc. of 2nd Int. Workshop on Argu-
mentation in Multi Agent Systems (ArgMAS 2004), Springer LNCS 3366, 2005.
7. McBurney, P., Parsons, S. (2002). Games That Agents Play: A Formal Framework
for Dialogues between Autonomous Agents. In Journal of Logic, Language and
Information, 11(3), pp 315-334.
8. Moore, D.: Dialogue game theory for intelligent tutoring systems. PhD thesis, Leeds
Metropolitan University (1993).
9. D.J. Newman, S. Hettich, C.L. Blake and C.J. Merz (1998). UCI Repository of ma-
chine learning databases. http://www.ics.uci.edu/mlearn/MLRepository.html.
University of California, Irvine, Dept. of Information and Computer Sciences.
10. Oren, N., Norman, T. J., Preece, A.(2006). Loose Lips Sink Ships: a Heuristic
for Argumentation. In Proc. of 3rd Int. Workshop on Argumentation in Multi-
Agent Systems (ArgMAS 2006), Hakodate, Japan, May 2006. pp. 121-134. Pro-
ceedings available at http://homepages.inf.ed.ac.uk/irahwan/argmas/argmas06/
argmas2006proceedings.pdf
11. Walton, D. N., Krabbe, E. C. W. (1995). Commitment in Dialogue: Basic Concepts
of Interpersonal Reasoning. SUNY Press, Albany, NY, USA.
12. Walton, D. N. (1996) Argument Schemes for Presumptive Reasoning. Lawrence
Erlbaum Associates, Mahwah, NJ, USA.