Figure 17. The hypothetical syllogism.

Source publication
Article
Full-text available
This paper describes a cubic water tank equipped with a movable partition receiving various amounts of liquid used to represent joint probability distributions. This device is applied to the investigation of deductive inferences under uncertainty. The analogy is exploited to determine by qualitative reasoning the limits in probability of the conclu...

Citations

... As will be explained, a purely quantitative approach to stopping parallels a proposal in formal epistemology to the effect that a proposition is acceptable precisely if it is sufficiently probable. The Supplementary Materials, containing the Julia code that was used for the simulations reported, are available in a GitHub repository. The term "probability mass" is used loosely here, as it evokes a helpful image of some "stuff" (e.g., sand, or mud, or water; see van Fraassen, 1989, or Politzer, 2016) being spread out in a space representing all relevant possibilities, which can be moved around in all sorts of ways. Strictly speaking, however, the term "probability mass" is reserved for discrete probability spaces; for continuous-valued parameters, the term "probability density" is used. ...
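The mass/density distinction drawn in this excerpt can be made concrete in a few lines. Below is a minimal sketch of the standard definitions (my own illustration, not code from the cited paper; the Beta(2, 5) density and the chosen interval are arbitrary):

```python
import numpy as np
from scipy.stats import beta

# Discrete case: probability MASS, one lump per outcome (e.g., a fair die).
pmf = {face: 1 / 6 for face in range(1, 7)}
assert abs(sum(pmf.values()) - 1.0) < 1e-12  # masses sum to 1

# Continuous case: probability DENSITY over a parameter in [0, 1]
# (here a Beta(2, 5) density); only integrals of it are probabilities.
grid = np.linspace(0.0, 1.0, 10_001)
density = beta.pdf(grid, a=2, b=5)
assert abs(np.trapz(density, grid) - 1.0) < 1e-3  # density integrates to 1

# P(0.1 <= theta <= 0.3): accumulate density over a region, not a point.
mask = (grid >= 0.1) & (grid <= 0.3)
print(np.trapz(density[mask], grid[mask]))  # ~0.47
```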
Article
Full-text available
Stopping rules are criteria for determining when data collection can or should be terminated, allowing for inferences to be made. While traditionally discussed in the context of classical statistics, Bayesian statisticians have also begun exploring stopping rules. Kruschke proposed a Bayesian stopping rule utilizing the concept of Highest Density Interval, where data collection can cease once enough probability mass (or density) accumulates in a sufficiently small region of parameter space. This paper presents an alternative to Kruschke’s approach, introducing the novel concept of Relative Importance Interval and considering the distribution of probability mass within parameter space. Using computer simulations, we compare these proposals to each other and to the widely-used Bayes factor-based stopping method. Our results do not indicate a single superior proposal but instead suggest that different stopping rules may be appropriate under different circumstances.
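A minimal sketch of the kind of HDI-based stopping rule attributed to Kruschke, under assumptions of my own (a Bernoulli data stream with a conjugate Beta prior, a 95% HDI, and an arbitrary width threshold); the paper's actual simulations live in its Julia supplementary code, which this does not reproduce:

```python
import numpy as np
from scipy.stats import beta

def hdi_width(a, b, cred=0.95, n_grid=2000):
    """Width of the narrowest interval holding `cred` posterior mass
    for a Beta(a, b) posterior, found by scanning equal-mass intervals."""
    lo = beta.ppf(np.linspace(0, 1 - cred, n_grid), a, b)
    hi = beta.ppf(np.linspace(cred, 1, n_grid), a, b)
    return np.min(hi - lo)

def run_until_precise(true_theta=0.3, max_n=5000, width=0.10, seed=0):
    """Collect Bernoulli observations until the 95% HDI is narrower
    than `width` (stop) or `max_n` is reached (give up)."""
    rng = np.random.default_rng(seed)
    a, b = 1, 1  # uniform Beta(1, 1) prior
    for n in range(1, max_n + 1):
        x = rng.binomial(1, true_theta)
        a, b = a + x, b + (1 - x)  # conjugate posterior update
        if hdi_width(a, b) < width:
            return n, a / (a + b)  # stopping sample size, posterior mean
    return max_n, a / (a + b)

print(run_until_precise())  # e.g. (~330, ~0.3)
```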
... More recently, there have been advances in the study of human coherence in deduction under uncertainty (Pfeifer and Kleiter, 2011; Pfeifer, 2014; Singmann et al., 2014; Cruz et al., 2015; Evans et al., 2015; Politzer and Baratgin, 2016). De Finetti (1964, 1974) provides an effective method to appraise the coherence of a probability evaluation, using coherence intervals determined by the probability of the premises (Suppes, 1966; Hailperin, 1996, 2010; Coletti and Scozzafava, 2002; Gilio and Over, 2012; Baratgin and Politzer, 2016; Politzer, 2016). If the coherence interval of the conclusion is [0, 1], the inference schema is called "probabilistically uninformative"; if it is a restricted interval [l, u], the schema is called "probabilistically informative" (Pfeifer and Kleiter, 2006). ...
... We thus have a kind of "trilogy," in which the premises are taken in pairs out of a set of three sentences (A, C, and "if A, C").

[Table 1: Probabilistic inference schemas for the sufficient conditional "If A, then C" and the necessary conditional "Only if A, C".]

In this study, we analyze performance in terms of coherence, for Chinese and French participants, on these three inference schemas with two conditional forms: the sufficient conditional "If A, then C" and the necessary conditional "Only if A, C." The coherence interval for the conclusions of MP and AC can be obtained by calculation (Suppes, 1966; Hailperin, 1996, 2010; Coletti and Scozzafava, 2002; Gilio, 2002; Wagner, 2004; Sobel, 2009) or by an analogical representation method (Politzer, 2016). We present the coherence intervals for the three inference schemas for each of the sufficient and the necessary conditional. ...
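For reference, the calculated coherence interval for modus ponens (MP) mentioned in this excerpt is standard in the coherence literature (Suppes, 1966; Pfeifer and Kleiter, 2006); the statement below, with a numerical illustration of my own, shows the form such intervals take:

```latex
% From P(A) = a and P(C|A) = b, coherence constrains the MP conclusion to
\[
  P(C) \in \bigl[\, ab,\; ab + (1 - a) \,\bigr].
\]
% Example: with P(A) = 0.9 and P(C|A) = 0.8, any coherent judgment
% must satisfy 0.72 <= P(C) <= 0.82.
```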
Article
Full-text available
According to the weak version of linguistic relativity, also called the Sapir-Whorf hypothesis, the features of an individual's native language influence their worldview and perception. We decided to test this hypothesis on the sufficient conditional and the necessary conditional, which are expressed differently in Chinese and French. In Chinese, connectors for both conditionals exist and are used in everyday life, while French has a connector only for the sufficient conditional. A first hypothesis follows from linguistic relativity: for the necessary conditional, better logical performance is expected from Chinese than from French participants. As a second hypothesis, for all participants, we expect better performance on the sufficient conditional than on the necessary conditional. Indeed, despite the isomorphism of the two conditionals, they differ in how information is processed for reasoning. We chose to study reasoning under uncertainty as it reflects reality more accurately. To do so, we analyzed the coherence of participants' responses using de Finetti's theory of deduction under uncertainty. The results of our study show no significant difference in performance between Chinese and French participants, on either the sufficient or the necessary conditional. Thus, our first hypothesis, derived from the weak version of linguistic relativity, is not confirmed. In contrast, our results confirm the second hypothesis in two out of three inference schemas.
... A bet is possible if at least one (−1, +1)-model gives the value 1. This 'indifferent' step is a necessary first step in elaborating a probability judgment on E (the outlaid payment), which corresponds to the second epistemic level (de Finetti 1980, Baratgin and Politzer 2016). ...
Article
Full-text available
The trivalent and functional theory of the truth of conditionals developed by Bruno de Finetti has recently gathered renewed interest, particularly in philosophical logic, psychology, and linguistics. It is generally accepted that de Finetti introduced his theory in 1935. However, a reading of his first publications indicates that almost all of his theory was conceived earlier. We bring to light a manuscript and unknown writings, dating back to 1928 and 1932, detailing de Finetti's theory. The two concepts of thesis and hypothesis are presented as a cornerstone on which logical connectives are established in a 2-to-3-valued logic. The proposed generalisation of the bivalent material implication to the trivalent framework, based on bivalent entailment, is however different from the one he would introduce in 1935. In these early writings de Finetti presents original results that would later be independently rediscovered by other researchers. In particular, the 'suppositional logic' developed by Theodore Hailperin in 1996 bears numerous similarities. Conversely, we consider the notion of validity proposed by Hailperin to be in line with de Finetti's approach. Overall, we attribute primacy for the trivalent theory to de Finetti; this early conception enabled him to take an original position and argue with Hans Reichenbach.
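De Finetti's three-valued table for the conditional event, referred to throughout these citations, is easy to state in code. Below is a minimal sketch assuming the standard presentation (the conditional "C given A" takes C's truth value when A is true and a third, 'void' value when A is false); representation choices such as None for 'void' and the Kleene-style conjunction are mine:

```python
from itertools import product

# Truth values: True, False, and None for de Finetti's third value ("void").
def conditional(a, c):
    """De Finetti conditional event C|A: defined only when A is true."""
    return c if a is True else None

def conj(p, q):
    """Trivalent conjunction (Kleene-style minimum: False < None < True)."""
    order = {False: 0, None: 1, True: 2}
    return min(p, q, key=order.get)

# Print the three-valued table of C|A for all bivalent inputs.
for a, c in product([True, False], repeat=2):
    print(f"A={a!s:5} C={c!s:5} ->  C|A = {conditional(a, c)}")
# A=True  C=True  ->  C|A = True
# A=True  C=False ->  C|A = False
# A=False C=True  ->  C|A = None   (void)
# A=False C=False ->  C|A = None   (void)
```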
... (under the name of "eikosogram," Greek for probability picture), Politzer (2014), Böcherer-Linder et al. (2018), and Böcherer- ...
Article
Full-text available
A normalized version of the ubiquitous two-by-two contingency matrix is associated with a variety of marginal, conjunctive, and conditional probabilities that serve as appropriate indicators in diagnostic testing. If this matrix is enhanced by being interpreted as a probabilistic Universe of Discourse, it still suffers from two interrelated shortcomings, arising from lack of length/area proportionality and a potential misconception concerning a false assumption of independence between the two underlying events. This paper remedies these two shortcomings by modifying this matrix into a new Karnaugh-map-like diagram that resembles an eikosogram. Furthermore, the paper suggests the use of a pair of functionally complementary versions of this diagram to handle any ternary problem of conditional probability. The two diagrams split the unknowns and equations between themselves in a fashion that allows the use of a divide-and-conquer strategy to handle such a problem. The method of solution is demonstrated via four examples, in which the solution might be arithmetic or algebraic, and independently might be numerical or symbolic. In particular, we provide a symbolic arithmetic derivation of the well-known formulas that express the predictive values in terms of prevalence, sensitivity and specificity. Moreover, we prove a virtually unknown interdependence among the two predictive values, sensitivity, and specificity. In fact, we employ a method of symbolic algebraic derivation to express any one of these four indicators in terms of the other three. The contribution of this paper to the diagnostic testing aspects of mathematical epidemiology culminates in a timely application to the estimation of the true prevalence of the contemporary worldwide COVID-19 pandemic. It turns out that this estimation is hindered more by the lack of global testing worldwide than by the unavoidable imperfection of the available testing methods.
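The "well-known formulas" for the predictive values referred to in this abstract follow from Bayes' theorem; a short sketch (standard epidemiology definitions, with example numbers of my own) is given below.

```python
def predictive_values(prevalence, sensitivity, specificity):
    """Positive and negative predictive values via Bayes' theorem.

    PPV = P(disease | test+),  NPV = P(no disease | test-).
    """
    tp = prevalence * sensitivity              # true positives
    fp = (1 - prevalence) * (1 - specificity)  # false positives
    fn = prevalence * (1 - sensitivity)        # false negatives
    tn = (1 - prevalence) * specificity        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# Low prevalence makes even a good test yield a modest PPV:
ppv, npv = predictive_values(prevalence=0.01, sensitivity=0.95, specificity=0.95)
print(f"PPV = {ppv:.3f}, NPV = {npv:.3f}")  # PPV ≈ 0.161, NPV ≈ 0.999
```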
... But just as we can use tools like notepads and video recorders to aid our memory, there are tools that can help us navigate complex reasoning tasks in which we have to draw inferences from uncertain information. In particular, we can use probability theory to establish precise constraints between related degrees of belief (e.g., Gilio and Over, 2012; Politzer, 2016), and we can use Bayesian networks (BNs) to establish the precise implications of a change in the probability of one piece of information for the probability of other, related pieces of information (Pearl, 1988, 2000; Korb and Nicholson, 2011; Fenton and Neil, 2018). ...
Article
Full-text available
Bayesian reasoning and decision making is widely considered normative because it minimizes prediction error in a coherent way. However, it is often difficult to apply Bayesian principles to complex real-world problems, which typically have many unknowns and interconnected variables. Bayesian network modeling techniques make it possible to model such problems and obtain precise predictions about the causal impact that changing the value of one variable may have on the values of other variables connected to it. But Bayesian modeling is itself complex, and has until now remained largely inaccessible to lay people. In a large-scale lab experiment, we provide proof of principle that a Bayesian network modeling tool, adapted to provide basic training and guidance on the modeling process to beginners without requiring knowledge of the mathematical machinery working behind the scenes, significantly helps lay people find normative Bayesian solutions to complex problems, compared to generic training on probabilistic reasoning. We discuss the implications of this finding for the use of Bayesian network software tools in applied contexts such as security, medical, forensic, economic or environmental decision making.
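As a toy illustration of the kind of propagation such tools automate, the sketch below hand-rolls exact inference in a two-node network; the node names, probability tables, and numbers are invented for the example, and real applications would use a dedicated BN package:

```python
# Toy network: Burglary -> Alarm, with invented probability tables.
p_burglary = {True: 0.01, False: 0.99}
p_alarm_given_b = {
    True:  {True: 0.90, False: 0.10},  # P(alarm | burglary)
    False: {True: 0.05, False: 0.95},  # P(alarm | no burglary)
}

def joint(b, a):
    """P(Burglary=b, Alarm=a) from the chain rule."""
    return p_burglary[b] * p_alarm_given_b[b][a]

def posterior_burglary(alarm_observed=True):
    """P(Burglary | Alarm=alarm_observed) by enumeration."""
    num = joint(True, alarm_observed)
    den = sum(joint(b, alarm_observed) for b in (True, False))
    return num / den

print(posterior_burglary(True))   # ≈ 0.154: the alarm raises 1% to ~15%
print(posterior_burglary(False))  # ≈ 0.001: silence nearly rules it out
```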
... These coherence intervals follow from the axioms of probability theory as a matter of deductive logic. Once inferred, we can show whether the lower bound is greater than or equal to the probability of the premises (Gilio & Over, 2012; Oaksford & Chater, 2017; Pfeifer & Kleiter, 2009; Politzer, 2015). However, as Hinterecker et al. (2016, p. 1607) observe, there is no modal logic (Garson, 2016) in which the inference from A or B or both to possibly(A & B) is valid. ...
... While this result has no immediate bearing on the inferences people should or did draw in these experiments, Hinterecker et al. (2016) argue that this result uniquely supports the revised MMT theory. However, the theory of how people judge the subjective probabilities of the singular events used in their materials (Khemlani, Lotstein, & Johnson-Laird, 2012, 2015) forms no part of the revised MMT. It is an ad hoc and detachable theory to which other psychological theories could just as well appeal. ...
Article
Full-text available
Hinterecker, Knauff, and Johnson-Laird (2016) compared the adequacy of the probabilistic new paradigm in reasoning with the recent revision of mental models theory (MMT) for explaining a novel class of inferences containing the modal term "possibly." For example, the door is closed or the window is open or both, therefore, possibly the door is closed and the window is open (A or B or both, therefore, possibly(A & B)). They concluded that their results support MMT. In this comment, it is argued that Hinterecker et al. (2016) have not adequately characterized the theory of probabilistic validity (p-validity) on which the new paradigm depends. It is unclear how p-validity can be applied to these inferences, which are anyway peripheral to the theory. It is also argued that the revision of MMT is not well motivated and its adoption leads to many logical absurdities. Moreover, the comparison is not appropriate because these theories are defined at different levels of computational explanation. In particular, revised MMT lacks a provably consistent computational level theory that could justify treating these inferences as valid. It is further argued that the data could result from the noncolloquial locutions used to express the premises. Finally, an alternative pragmatic account is proposed based on the idea that a conclusion is possible if what someone knows cannot rule it out. This account could be applied to the unrevised mental model theory rendering the revision redundant.
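For context, the p-validity at issue here is Adams' probabilistic validity; the standard definition (supplied for the reader, not quoted from the comment) is:

```latex
% Adams' probabilistic validity: writing u(X) = 1 - P(X) for the
% uncertainty of X, an inference from premises X_1, ..., X_n to
% conclusion Y is p-valid iff, for every coherent P,
\[
  u(Y) \;\le\; \sum_{i=1}^{n} u(X_i),
\]
% i.e., the conclusion can never be more uncertain than the
% premises' uncertainties combined.
```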
... Stalnaker's approach, and also Lewis's approach, can be seen as a psychological process (Cariani & Rips, 2016; Rips & Marcus, 1977), inspired by the Ramsey test (Ramsey, 1990): first, add the antecedent to the stock of beliefs represented by a possible world ... More complex theories defend the choice of the subjective Bayesian theory of de Finetti (1974a, b), in which the logic of reference is a trivalent logic where uncertainty represents an additional truth value (Baratgin, Over, & Politzer, 2013; Over & Baratgin, 2017) and probabilistic inferences can be analysed in the light of coherence intervals (Coletti & Scozzafava, 2002; Pfeifer & Kleiter, 2006; Politzer, 2016). For instance, in Smiley's system, to be valid, an inference must be classically valid and, furthermore, the conjunction of the premises cannot be a (classical) contradiction and the consequent cannot be a (classical) tautology. ...
Article
Unconnected conditionals, also called irrelevant conditionals, are sentences of the form if A, C whose antecedent and consequent bear no connection to each other. According to the main theories of conditional reasoning, the truth or high probability of an antecedent and a consequent is sufficient to make the corresponding conditional true or highly probable. We tested this assumption and showed that it does not hold for unconnected conditionals. Furthermore, we experimentally investigated the factors which favour the endorsement of irrelevant conditional constructions and found that the endorsement rate increases when an analogy can be built between the antecedent and the consequent, or when the conditional is asserted before its components.
... P(H|D) = P(D|H) P(H) / P(D), with P(H) the prior probability of hypothesis H, P(H|D) the posterior probability after learning the data D, P(D|H) the likelihood, and P(D) the probability of D. The other main change, not covered in this paper, is the analysis of deductive arguments in the light of de Finetti's Bayesian coherence intervals [12, 41, 45]. Some studies show relative coherence in human deduction under uncertainty [14, 41-43, 46, 48]. Recent studies show that a majority of individuals have a trivalent interpretation of the conditional event [47] and that de Finetti's three-valued tables [18] are the best approximation to participants' truth tables [5, 6, 11]. ...
Chapter
The Bayesian model was recently proposed as a normative reference for psychology studies of deductive reasoning. This new paradigm holds that individuals evaluate the probability of an indicative conditional if A then C in natural language as the conditional probability P(C given A) (P(C|A) according to Bayes' rule). In this paper, we show with an eye-tracking methodology that even if the cognitive process for the two probability assessments (P(if A then C) and P(C|A)) is really identical, it does not match the traditional focusing situation of revision corresponding to Bayes' rule (a change of reference class in a static universe). Individuals appear to revise their probability as if the universe were evolving: they use a minimal rule, mentally removing the worlds that are not A. This situation, called updating, seems to be the natural frame in which individuals evaluate the probability of an indicative conditional and the conditional probability.
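The "minimal rule" described here, removing the not-A worlds and renormalizing what remains, can be sketched in a few lines (a toy illustration of the general idea, with made-up numbers, not the authors' model):

```python
# A tiny universe of four "worlds" with prior probability masses.
worlds = {
    ("A", "C"): 0.30,
    ("A", "not-C"): 0.10,
    ("not-A", "C"): 0.20,
    ("not-A", "not-C"): 0.40,
}

# Minimal rule: discard the worlds where A fails, renormalize the rest.
kept = {w: p for w, p in worlds.items() if w[0] == "A"}
total = sum(kept.values())
revised = {w: p / total for w, p in kept.items()}

print(revised[("A", "C")])  # 0.75 = P(C|A) = 0.30 / (0.30 + 0.10)
```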
... How many of these cars are two-door cars?" Sixty-three percent of the participants gave coherent intervals for MP but only 41% for DA (a difference which the authors explained by matching-based guessing with DA). The result for MP is comparable with that of the pioneering study by George (1997, experiment 2), who presented the minor or the major premises of MP arguments, or both, as "very probable" and observed 70% of responses which, with hindsight, can be identified as coherent (see also al., 2015, and Singmann & for similar results). Pfeifer & Kleiter (2006) also studied contraposition with a premise of the type "exactly 93% of the cars on a parking lot are blue", and the question, "Imagine all the cars that are not blue. ...
Article
Full-text available
The new paradigm in the psychology of reasoning redirects the investigation of deduction conceptually and methodologically because the premises and the conclusion of the inferences are assumed to be uncertain. A probabilistic counterpart of the concept of logical validity and a method to assess whether individuals comply with it must be defined. Conceptually, we used de Finetti's coherence as a normative framework to assess individuals' performance. Methodologically, we presented inference schemas whose premises had various levels of probability conveyed by non-numerical expressions (e.g., "the chances are high") and, as a control, sure levels. Depending on the inference schema, from 60% to 80% of the participants produced coherent conclusions when the premises were uncertain. The data also show that (1) except for schemas involving conjunction, performance was consistently lower with certain than with uncertain premises, (2) the rate of conjunction fallacy was consistently low (not exceeding 20%, even with sure premises), and (3) participants' interpretation of the conditional agreed with de Finetti's "conditional event" but not with the material conditional.
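A coherence check of the kind this methodology requires can be made concrete; the sketch below (my own illustration, implementing the standard MP interval given earlier rather than anything quoted from the paper) tests whether a respondent's conclusion probability falls inside the coherence interval fixed by the premises:

```python
def mp_coherence_interval(p_a, p_c_given_a):
    """Coherence interval for P(C) in modus ponens,
    given P(A) and P(C|A) (Suppes, 1966)."""
    low = p_a * p_c_given_a
    return low, low + (1 - p_a)

def is_coherent(response, p_a, p_c_given_a):
    """Is a respondent's P(C) judgment coherent with the premises?"""
    low, high = mp_coherence_interval(p_a, p_c_given_a)
    return low <= response <= high

# Premises: P(A) = 0.9 ("the chances are high"), P(C|A) = 0.8.
print(mp_coherence_interval(0.9, 0.8))   # (0.72, 0.82)
print(is_coherent(0.75, 0.9, 0.8))       # True  — inside the interval
print(is_coherent(0.50, 0.9, 0.8))       # False — incoherent judgment
```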