Fig 4 - uploaded by Trevor Bench-Capon
Attacking the test as too broad 

Source publication
Article
This paper studies the use of hypothetical and value-based reasoning in US Supreme Court cases concerning the United States Fourth Amendment. Drawing upon formal AI & Law models of legal argument, a semi-formal reconstruction is given of parts of the Carney case, which has been studied previously in AI & Law research on case-based reasoning. As par...

Context in source publication

Context 1
... Warrant required no longer follows, since r2 is now needed again to build an argument for this conclusion (cf. Figure 1) and its condition p(c) ≤ t_p is not satisfied. In fact, there is now an unattacked argument for the negation of this condition, namely the argument in Figure 4. What is happening here is that the proposed test effectively modifies r3 by removing the condition that the vehicle be readily mobile. The hypothetical is intended to show that this modification is not acceptable, since it would then cover cases where the vehicle should be afforded the privacy appropriate to a home. In the extract given in Rissland (1989), counsel responds by restoring the mobility criterion, effectively proposing r3 as his test. The Justices, however, pose further hypotheticals indicating the view that a mobile home, parked in a trailer park and lived in as a residence for several months, would have privacy expectations above the threshold. It was the considerations raised by this sort of exchange that meant that the majority opinion did not rely on r3 as the test, but added r6, referring to the location of the vehicle, which had not been explicitly stated in this form in earlier cases. Thus we can see r6, the main innovation of Carney, as coming from the hypothetical reasoning. A continuation of this exchange is quoted in Rissland (1989). The justice ...
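The "too broad" exchange above can be sketched in code. This is a minimal illustration, not the paper's formal model: the rule name r3 and the mobility condition follow the excerpt, but the exact rule contents, the `applies` helper, and the case encoding are assumptions made for this sketch.

```python
# Minimal sketch of the "attacking the test as too broad" move. All rule
# contents and the case encoding are illustrative assumptions.

def applies(rule, case):
    """A rule applies when all of its conditions hold in the case."""
    return all(case.get(cond, False) for cond in rule["conditions"])

# Counsel's proposed test: drop the mobility condition, so being a
# vehicle alone suffices for a warrantless search.
proposed_test = {"name": "test", "conditions": ["vehicle"],
                 "conclusion": "no_warrant_required"}

# r3 with the mobility criterion restored: vehicle AND readily mobile.
r3 = {"name": "r3", "conditions": ["vehicle", "readily_mobile"],
      "conclusion": "no_warrant_required"}

# The Justices' hypothetical: a mobile home in a trailer park, lived in
# as a residence -- a vehicle that is not readily mobile and carries
# home-like privacy expectations.
hypothetical = {"vehicle": True, "readily_mobile": False,
                "home_like_privacy": True}

# The proposed test covers the hypothetical even though the vehicle
# should be afforded the privacy of a home: the test is too broad.
print(applies(proposed_test, hypothetical))  # covered by the broad test
print(applies(r3, hypothetical))             # excluded once mobility is restored
```

The hypothetical works as a counterexample: it satisfies the proposed test while clearly deserving the home-like privacy protection the test would deny.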

Similar publications

Conference Paper
The paper analyzes problematic issues in improving the legal and regulatory documentation supporting zoning in Canada, under the laws, regulations and customs prevailing in the European Union and the United States.
Article
Legalized gambling has become both a major industry and concern in the United States, but little research from the behavior-analytic perspective has been done on the topic. The present study consisted of two experiments that had participants play a computer-simulated slot machine. The variables manipulated were the percentage payback rate (i.e., ov...
Article
Currently, several proposed changes in sports betting laws are being debated in the United States and the European Union. This article examines the characteristics of sports bettors in three countries, Canada, Spain, and the United Kingdom, to determine who bets on sports in environments where this activity is both legal and popular. Unconditional...

Citations

... The discursive grammar reconstructs this normative tension as a collision between the general values of SECURITY and FREEDOM. The underlying dialectic reappears in many typical constitutional-rights cases, where individual liberty collides with collective security concerns (Sartor [30]), e.g., also in the form of privacy versus law enforcement in the analysed Fourth Amendment cases (Bench-Capon and Prakken [54]). ...
Article
The logico-pluralist LogiKEy knowledge engineering methodology and framework is applied to the modelling of a theory of legal balancing, in which legal knowledge (cases and laws) is encoded by utilising context-dependent value preferences. The theory obtained is then used to formalise, automatically evaluate, and reconstruct illustrative property law cases (involving the appropriation of wild animals) within the Isabelle/HOL proof assistant system, illustrating how LogiKEy can harness interactive and automated theorem-proving technology to provide a testbed for the development and formal verification of legal domain-specific languages and theories. Modelling value-oriented legal reasoning in that framework, we establish novel bridges between the latest research in knowledge representation and reasoning in non-classical logics, automated theorem proving, and applications in legal reasoning.
... This fifth argument would promote, in turn, the value of information, or the lack of it, prevailing over the others. The argument D can raise some critical questions that can be expressed in the form of arguments, and that can be represented through structured argumentation versions of VAF (see, for example, [2,3,18], and [8] for representing exceptions). A second rectification consists of simply deleting D from the representation, while a third one consists of considering an attack from C to D. All these rectifications would explain the high esteem for argument C, whose only attacker, D, would be ignored or rejected. ...
... 7 These possibilities are in line with [26], where the authors claimed that, in order to create argumentation systems, designers must take into account implicit domain-specific knowledge or beliefs. Maybe these representational problems are more important than the extent to which the empirical data match some semantics, 8 in the sense that they call for a foregoing resolution. In the meantime, these problems could be tackled with experiments and questionnaires specifically aimed at elucidating what is being represented, and/or using simpler scenarios whose representations raise fewer doubts. ...
... 7 Note that those who considered that A attacks C tended not to see any incompatibility between these arguments (Fig. 2). 8 This opinion was endorsed by an anonymous reviewer. ...
Article
We report a series of experiments carried out to confront the underlying intuitions of value-based argumentation frameworks (VAFs) with the intuitions of ordinary people. Our goal was twofold. On the one hand, we intended to test VAF as a descriptive theory of human argument evaluations. On the other, we aimed to gain new insights from empirical data that could serve to improve VAF as a normative model. The experiments showed that people’s acceptance of arguments deviates from VAF’s semantics and is instead correlated with the importance given to the promoted values, independently of the perceptions of argument interactions through attacks and defeats. Furthermore, arguments were often perceived as promoting more than one value with different relative strengths. Individuals’ analyses of scenarios were also affected by external factors such as biases and arguments not explicit in the framework. Finally, we confirmed that objective acceptance, that is, the acceptance of arguments under any order of the values, was not a frequent behavior. Instead, participants tended to accept only the arguments that promoted the values they subscribe to.
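The VAF semantics being tested can be illustrated with a small sketch. It assumes the standard VAF definition, in which an attack succeeds as a defeat unless the attacked argument's value is ranked strictly above the attacker's; the argument names, value names, and `defeats` helper are invented for illustration.

```python
# Sketch of a value-based argumentation framework (VAF). An attack
# (a, b) becomes a defeat unless the value promoted by b is ranked
# strictly above the value promoted by a.

def defeats(attacks, value_of, ranking):
    """Return the attacks that survive the value comparison.
    ranking lists values from most to least important."""
    rank = {v: i for i, v in enumerate(ranking)}  # lower index = preferred
    return {(a, b) for (a, b) in attacks
            if rank[value_of[a]] <= rank[value_of[b]]}

attacks = {("A", "B"), ("B", "C")}
value_of = {"A": "security", "B": "freedom", "C": "security"}

# Which attacks become defeats depends entirely on the value ordering:
print(sorted(defeats(attacks, value_of, ["security", "freedom"])))
print(sorted(defeats(attacks, value_of, ["freedom", "security"])))
```

An argument is "objectively accepted" in VAF terms when it is accepted under every ordering of the values, which is the notion the abstract reports as rarely matching participants' behaviour.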
... The next scheme concerns trade-offs between two dimensions, as described in [6]. For example in the US Fourth Amendment domain there is a trade-off between being able to enforce the law and respect for privacy [10]. The factor involves balancing these two concerns and is something like "Sufficient respect for privacy while enabling enforcement". ...
Preprint
We present argumentation schemes to model reasoning with legal cases. We provide schemes for each of the three stages that take place after the facts are established: factor ascription, issue resolution and outcome determination. The schemes are illustrated with examples from a specific legal domain, US Trade Secrets law, and the wider applicability of these schemes is discussed.
... A third type of ascription arises when a pair of dimensions need to be considered together, because one may trade off against the other so that we need to strike a balance between them. For example, as discussed in [9], in the US Fourth Amendment which protects against unreasonable search, the privacy of the citizen must be balanced against the exigency of the need to enforce the law. If the life of the President is thought to be under threat, privacy will be respected less than if we are dealing with a minor offence. ...
... The next scheme concerns trade-offs between two dimensions, as described in [6]. For example in the US Fourth Amendment domain there is a trade-off between being able to enforce the law and respect for privacy [9]. The factor involves balancing these two concerns and is something like "Sufficient respect for privacy while enabling enforcement". ...
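A trade-off factor of this kind can be sketched numerically. Everything here is an assumption for illustration (the numeric scales, the linear form, and the `balanced` helper are not from the cited schemes); the point is only that greater exigency of enforcement tolerates greater intrusion on privacy.

```python
# Illustrative trade-off between two dimensions: privacy versus the
# exigency of enforcing the law. Scales and the linear form are
# assumptions for this sketch only.

def balanced(privacy_intrusion, exigency, slope=1.0):
    """Ascribe the trade-off factor ("sufficient respect for privacy
    while enabling enforcement") when the intrusion does not exceed
    what the exigency justifies."""
    return privacy_intrusion <= slope * exigency

# A grave threat justifies more intrusion than a minor offence:
print(balanced(privacy_intrusion=0.3, exigency=0.9))
print(balanced(privacy_intrusion=0.8, exigency=0.2))
```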
Chapter
Reasoning with legal cases by balancing factors (reasons to decide for and against the disputing parties) is a two stage process: first the factors must be ascribed and then these reasons for and against weighed to reach a decision. While the task of determining which set of reasons is stronger has received much attention, the task of factor ascription has not. Here we present a set of argument schemes for factor ascription, illustrated with a detailed example.
... In addition it brings a taxonomy of applications of computing in law considering different jurisdictions, changes in regulation and implications for workers in the area and access to Justice (Nissan 2018). The theoretical-empirical articles address the use of AI models in the elaboration of arguments based on value-based reasoning and hypothetical reasoning between lawyers and law students (Ashley 2009;Bench-Capon and Prakken 2010;Hafner and Berman 2002), as well as the implementation of prediction models capable of transforming litigation into cases to be mediated in non-conflictual scenarios (El Jelali et al. 2015). ...
... The Cluster's research agenda suggests that more research must be done on legal precedents, as there is no firm and well-understood basis for how these precedents limit future court decisions. It must include elaborating and testing analyses of the context's procedural, teleological, and temporal dimensions, using other legal domains (Bench-Capon and Prakken 2010; Hafner and Berman 2002; Horty and Bench-Capon 2012). It is also necessary to understand the contextual influences of legal precedents and their potential for computational realization. ...
Article
Studies on the use of Artificial Intelligence (AI) in the public sector are on the rise. However, there is still a lack of depth concerning specific government segments, such as in the field of justice. In this sense, this work seeks to understand the evolution of the use of AI in Justice and its future perspectives. The authors carried out a Systematic Literature Review (SLR) with a bibliometric analysis of 69 articles collected in the Scopus and Web of Science databases, without a time frame. The categorized results demonstrate stability in productivity between 1988 and 2014 and substantial growth from 2015 onwards. There is also a clear interaction between sub-themes relating to AI and judicialization, including Knowledge-based Systems, Online Dispute Resolution, Algorithmic Surveillance, Decision Support Systems, and Explainable Machine Learning. Thus, the authors expect that this SLR will contribute to the advancement of studies on AI in Justice, subsidize the management of public policies aimed at the justice system, and guide managers in the production chain.
... Toulmin alone can suffice to analyse students' arguments. Although Toulmin's model has proved useful in several research studies on argumentation (Bench-Capon & Prakken, 2010; Pedemonte & Balacheff, 2016), it has also been criticised. Among these criticisms, two are particularly relevant to the teaching of mathematics. ...
Thesis
Our study concerns the influence of students' understanding of the relation between a figure and its drawings on the construction of argumentations and proofs at the beginning of secondary school, in the Cameroonian context. In Cameroon, proof is one of the fundamental competencies to be developed in geometry at the beginning of secondary school, as prescribed by the mathematics curricula. It is not an object of study in itself; it is learned alongside the study of quadrilaterals and triangles. The results of several studies show that students' difficulties in producing formal proofs stem from a lack of understanding of how proofs work and of their specific requirements. We believe that the difficulties students encounter in constructing correct arguments stem from misconceptions about the figure and from ignorance of the translation rules that connect the figure to its drawings. Our thesis is the following: a coherent understanding of the relation between the figure and its drawings is a precondition for producing the correct arguments needed to construct an argumentation and to produce a proof. To defend this thesis, we divided our work into six chapters. In the first four chapters, we present our research problem, a literature review, the theoretical framework of the study, and an analysis of some mathematics textbooks. In the last two chapters, we present the methodology and the results of our experiments. Our first experimental situation concerned carrying out a proof task in a problem with a drawing. The second experimental situation concerned constructing a definition, and the third concerned constructing a theorem.
The experiments shed light on how students apprehend the drawing and on the knowledge about the figure that they mobilise to construct their argumentations and produce their proofs. This study supports the claim that a student's concept image of the figure is an important factor in the construction of their argumentations and the production of their proofs. Students' informal arguments appear to result from the incoherence of their concept images of the figures they manipulate.
... The balancing of reasons has, however, attracted a good deal of attention from more formally oriented researchers. Hage's Reason Based Logic [57] centered on this, and there have been various more recent approaches to the topic including [31,54,62] and [25], all of which offer different treatments of preferences and trade-offs. ...
... This method of articulating a reasoning task in terms of a set of argumentation schemes was applied to other areas, including democratic deliberation [32], Bayesian reasoning about evidence in criminal cases [75], hypothetical reasoning in law [31] and reasoning about the actions of others [12]. These examples show that providing a repertoire of dedicated argumentation schemes provides an effective way of specifying the procedures used to address a variety of reasoning tasks. ...
Article
In this paper we describe the impact that Walton’s conception of argumentation schemes had on AI and Law research. We will discuss developments in argumentation in AI and Law before Walton’s schemes became known in that community, and the issues that were current in that work. We will then show how Walton’s schemes provided a means of addressing all of those issues, and so supplied a unifying perspective from which to view argumentation in AI and Law.
... It is an essential genre that students need to master in order to excel in law school exams. This special genre is also referred to as "problem-solving essay" (Candlin, Bhatia, & Jensen, 2002), "hypothetical case" (Burnham, 1987), "hypothetical reasoning" (Bench-Capon & Prakken, 2010), or simply "hypothetical writing." It follows the moves of common law argumentation, such as the Issue-Rule-Application-Conclusion framework (widely known as "IRAC") or its variations such as the Conclusion-Rule-Explanation-Application-Conclusion framework ("CREAC") (see Strong, 2018). ...
Thesis
In the past three decades, the construct of second language (L2) writing complexity has been theorized and refined in both second language acquisition (SLA) (Crossley, 2020; Housen, De Clercq, Kuiken, & Vedder, 2019; Lu, 2011; Norris & Ortega, 2009) and Systemic Functional Linguistics (SFL) research (Byrnes, 2009; Ryshina-Pankova, 2015; Schleppegrell, 2004). The general consensus is that lexical and syntactic variations are regarded as signs of advanced academic writing. The contemporary legal writing pedagogy, however, is informed by the Plain English Movement (Benson, 1985; Dorney, 1988; Felsenfeld, 1981), which largely discourages the use of overly complex structures and “elegant variations.” The recommendation to use plain English in legal writing thus poses a challenge to the theoretical consensus in SLA and SFL research and raises a question about the conceptualizations and assessment of writing complexity in academic legal writing classrooms. This dissertation consists of three interrelated studies and aims to address this contradiction by examining the development and assessment of writing complexity (i.e., lexis, syntax, and discourse) in 246 hypothetical legal essays written by 31 international (LL.M.) students over one-year of legal language study at the Georgetown University Law Center. In Study 1, I used a structural, corpus-based approach and tracked the changes of 31 students’ lexical and syntactic complexity in six data collection points over one year and compared the complexity indices with those benchmarked by eight model essays. In Study 2, I offer an in-depth discussion of four students’ distinct developmental trajectories of discourse complexity, which I analyzed through the system of engagement (Martin & White, 2005) from the SFL perspective. Finally, in Study 3, I adopted a mixed-methods approach to investigate two legal instructors’ conceptualizations of writing complexity in the classroom setting. 
Results showed that overall students were able to write with significantly more sophisticated words and, in the second semester, significantly shorter sentences, a pattern consistent with the pedagogical focus of the program. Additionally, the four individual trajectories of discourse complexity indicate that, even when starting with a similar proficiency level, some students constructed increasingly complex legal discourse by actively engaging different legal voices through dialogic expansion and contraction, while others only made marginal progress. Consistent with the instructors’ conceptualizations of writing complexity, structurally more complex essays were not necessarily of higher text quality. In fact, the instructors’ assessment of text quality was found to be influenced by a number of external factors, such as students’ performance in comparison to others and their academic progress in the program. I conclude by discussing some additional insights that emerged from the dissertation such as the boundaries between text modeling and plagiarism, as well as the role of text length in assessing timed writing. Finally, I call for a critical reflection on the pedagogical principles of Plain English and highlight the value of an integrated structural-functional approach to holistically understand the construct of L2 writing complexity in academic legal writing.
... These arguments are often not about the facts of the case, but rather reflect ideas of purpose or value [39]. Sometimes these arguments require consideration of trade-offs between, and balancing of, values [40]. It is hard to see these kinds of arguments emerging from contemporary machine learning algorithms. ...
... AFs are basically directed graphs with vertices representing abstract arguments and arrows denoting attack relations between them. AFs have proven useful for modeling decision-support systems in different application areas such as agriculture, e-government, medical care and legal services, see for example [7][8][9][10]. ...
... Due to lines 13-15, we rewrite (7) as ⋃_{j=1}^{k+1} to_be_in_j = {x | ∀y ∈ {x}⁻ : label(y) = out}. (8) Referring to lines 11-13 in the algorithm, we rewrite (8) as (9). Considering lines 10 and 11 in the algorithm, we rewrite (9) as ⋃_{j=1}^{k+1} to_be_in_j = {x | ∀y ∈ {x}⁻ ∃z ∈ {y}⁻ such that z ∈ ⋃_{j=1}^{k} to_be_in_j}. (10) Recall ...
Article
An abstract argumentation framework is a directed graph $(V,E)$ such that the vertices of $V$ denote abstract arguments and $E \subseteq V \times V$ represents the attack relation between them. We present a new ad hoc algorithm for computing the grounded extension of an abstract argumentation framework. We show that the new algorithm runs in $\mathcal{O}(|V|+|E|)$ time. In contrast, the existing state-of-the-art algorithm runs in $\mathcal{O}(|V|+|S||E|)$ time where $S$ is the grounded extension of the input graph.
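The grounded extension can be sketched from the standard fixed-point characterisation: accept every argument all of whose attackers are rejected, reject every argument attacked by an accepted one, and repeat until nothing changes. This naive loop is not the paper's ad hoc linear-time algorithm, but it computes the same set.

```python
# Grounded extension of an abstract argumentation framework (V, E),
# where an edge (a, b) means "a attacks b", via the standard
# fixed-point labelling (accept / reject / repeat).

def grounded_extension(vertices, edges):
    attackers = {v: set() for v in vertices}
    for a, b in edges:
        attackers[b].add(a)
    accepted, rejected = set(), set()
    changed = True
    while changed:
        changed = False
        for v in vertices:
            if v not in accepted and attackers[v] <= rejected:
                accepted.add(v)   # every attacker already rejected
                changed = True
            elif v not in rejected and attackers[v] & accepted:
                rejected.add(v)   # attacked by an accepted argument
                changed = True
    return accepted

# a attacks b, b attacks c: a is unattacked, b is rejected, c is defended.
print(sorted(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")})))
```

Because each pass may rescan all vertices, this sketch can be quadratic in the worst case, which is exactly the inefficiency the paper's O(|V|+|E|) algorithm avoids.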