Article · PDF Available

Foreword: Three-valued logics and their applications

... Cobreros et al. [25] and Fedorov et al. [4] have also analysed it positively. As shown in [23], this would work experimentally, but at the expense of performance (in cost, speed, and noise rejection) and scalability. ...
... Hence, QC promises to be easier to realize than QM. Three-valued logic, even in GF(2^m) implementations, has been associated not only with contingency, reference failure, and vagueness, but with at least four other phenomena of interest, namely conditionals, majority voting, computability, and the semantic paradoxes [25], [28]. These mathematical processes relate to coherence and are inversely related to indeterminacy. ...
Article
Full-text available
This work uses the algebraic approach to show how we communicate when applying the quantum mechanics (QM) concept of coherence, proposing tri-state+ in quantum computing (QC). In analogy to Einstein's stimulated emission, used to explain the thermal radiation of quantum bodies in communication, this work shows that one can use Shannon's classical Information Theory (with only two random logical states, “0” and “1”, emulating a relay) and add a coherent third truth value Z, as a new process that breaks the Law of the Excluded Middle (LEM). Using a well-known result in topology and projection as a "new hypothesis" here, a higher-dimensional state can embed in a lower-dimensional state. This means that any three-valued logic system, which breaks the LEM, can be represented in a binary logical system, which obeys the LEM. This matches QC in behavior, offering multiple states at the same time in GF(3^m), but frees the implementation to use binary logic and the LEM. This promises to let indeterminacy, such as contingency, reference failure, vagueness, majority voting, conditionals, computability, the semantic paradoxes, and many more, play a role in logic synthesis, with a much better resolution of indeterminate contributions to obtain coherence and help cybersecurity. We establish a hitherto unreported link between Einstein's and Shannon's theories in QM, and use it to provide a model for QC without relying on external devices (e.g., quantum annealing) or incurring decoherence. By focusing on adequate software, this could shift the emphasis in QC from hardware to software.
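The embedding claim above (a third truth value Z realized on binary hardware) can be illustrated with a standard two-bit encoding of three truth values. The pairing of Z with Kleene's strong connectives below is our illustrative assumption, not the paper's construction.

```python
# Encode each ternary truth value as a pair of classical bits
# (read as a lower/upper bound: F = [0,0], T = [1,1], Z = [0,1]).
F, T, Z = (0, 0), (1, 1), (0, 1)   # Z = "indeterminate"

def and3(a, b):
    """Kleene strong conjunction, computed bitwise on the binary encoding."""
    return (a[0] & b[0], a[1] & b[1])

def or3(a, b):
    """Kleene strong disjunction."""
    return (a[0] | b[0], a[1] | b[1])

def not3(a):
    """Kleene negation: swap the two bits and complement them."""
    return (1 - a[1], 1 - a[0])

# The LEM fails for Z: Z or not-Z is still Z, yet every gate above
# is built from ordinary binary operations.
```

Here `or3(Z, not3(Z))` evaluates to `Z` rather than `T`, so the third value breaks the excluded middle even though the implementation is purely binary.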
... Cobreros et al. [25] and Fedorov et al. [4] have also analysed it positively. As shown in [23], this would work experimentally, but at the expense of performance (in cost, speed, and noise rejection) and scalability. ...
... Three-valued logic, even in GF(2^m) implementations, has been associated not only with contingency, reference failure, and vagueness, but with at least four other phenomena of interest, namely conditionals, majority voting, computability, and the semantic paradoxes [25], [29]. These mathematical processes relate to coherence and are inversely related to indeterminacy. ...
Preprint
Full-text available
This work uses the algebraic approach to show how we communicate when applying the quantum mechanics (QM) concept of coherence, proposing tri-state+ in quantum computing (QC). In analogy to Einstein's stimulated emission, used to explain the thermal radiation of quantum bodies in communication, this work shows that one can use Shannon's classical Information Theory (with only two random logical states, “0” and “1”, emulating a relay) and add a coherent third truth value Z, as a new process that breaks the Law of the Excluded Middle (LEM). Using a well-known result in topology and projection as a "new hypothesis" here, a higher-dimensional state can embed in a lower-dimensional state. This means that any three-valued logic system, which breaks the LEM, can be represented in a binary logical system, which obeys the LEM. This matches QC in behavior, offering multiple states at the same time in GF(3^m), but frees the implementation to use binary logic and the LEM. This promises to let indeterminacy, such as contingency, reference failure, vagueness, majority voting, conditionals, computability, the semantic paradoxes, and many more, play a role in logic synthesis, with a much better resolution of indeterminate contributions to obtain coherence and help cybersecurity. We establish a hitherto unreported link between Einstein's and Shannon's theories in QM, and use it to provide a model for QC without relying on external devices (e.g., quantum annealing) or incurring decoherence. By focusing on adequate software, this could shift the emphasis in QC from hardware to software.
... Cobreros et al. [25] and Fedorov et al. [4] have also analysed it positively. As shown in [23], only binary logic systems (i.e., following Shannon) would work experimentally, but at the expense of performance (in cost, speed, and noise rejection) and scalability. ...
... Three-valued logic, even in GF(2^m) implementations, has been associated not only with contingency, reference failure, and vagueness, but with at least four other phenomena of interest, namely conditionals, majority voting, computability, and the semantic paradoxes [25], [29]. These mathematical processes relate to coherence and are inversely related to indeterminacy. ...
Conference Paper
Full-text available
This work uses the algebraic approach to show how we communicate when applying the quantum mechanics (QM) concept of coherence, proposing tri-state+ in quantum computing (QC). In analogy to Einstein's stimulated emission, used to explain the thermal radiation of quantum bodies in communication, this work shows that one can use Shannon's classical Information Theory (with only two random logical states, “0” and “1”, emulating a relay) and add a coherent third truth value Z, as a new state that breaks the Law of the Excluded Middle (LEM). Using a well-known result in topology and projection as a "new hypothesis" here, a higher-dimensional state can embed in a lower-dimensional state. This means that any three-valued logic system, which breaks the LEM, can be represented in a binary logical system, which obeys the LEM. This matches QC in behavior, offering multiple states at the same time in GF(3^m), but frees the implementation to use binary logic and the LEM. This promises to let indeterminacy, such as contingency, reference failure, vagueness, majority voting, conditionals, computability, the semantic paradoxes, and many more, play a role in logic synthesis, with a much better resolution of indeterminate contributions to obtain coherence and help cybersecurity. We establish a hitherto unreported link between Einstein's and Shannon's theories in QM, and use it to provide a model for QC without relying on external devices (e.g., quantum annealing) or incurring decoherence. By focusing on adequate software, this could shift the emphasis in QC from hardware to software.
... As for incomplete models, it will be shown how Reconstructor can use them, together with a non-classical three-valued semantics, to evaluate the satisfaction of laws in situations of incomplete information. Although non-classical logics of the kind I will present had already been used for similar purposes in other areas (see Cobreros et al., 2014; Priest, 2008), they had never been applied in detail within the framework of the philosophy of science and metatheory. ...
... Evaluating whether these models satisfy the theoretical laws will thus require the use of some non-classical semantics, which I specify in section 3. This will not only add another interesting use for three-valued paracomplete logics (see Cobreros, Égré, Ripley, & van Rooij, 2014, for a good summary of current uses), but it will also allow me to introduce an additional way of testing a reconstruction, by way of determination methods. ...
Article
In this article, I develop three conceptual innovations within the area of formal metatheory, and present a computer program, called Reconstructor, that implements those developments. The first development consists in a methodology for testing formal reconstructions of scientific theories, which involves checking both whether translations of paradigmatically successful applications into models satisfy the formalisation of the laws, and also whether unsuccessful applications do not. I show how Reconstructor can help carry this out, since it allows the end-user to specify a formal language, input axioms and models formulated in that language, and then ask if the models satisfy the axioms. The second innovation is the introduction of incomplete models (for which the denotation of some terms is missing) into scientific metatheory, in order to represent cases of missing information. I specify the paracomplete semantics built into Reconstructor to deal with sentences where denotation failures occur. The third development consists in a new way of explicating the structuralist notion of a determination method, by equating them with algorithms. This allows determination methods to be loaded into Reconstructor and then executed within a model to find out the value of a previously non-denoting term (i.e. it allows the formal reconstruction to make predictions). This, in turn, can help test the reconstruction in a different way. Finally, I conclude with some suggestions about additional uses the program may have.
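The incomplete-model idea described above can be sketched with a toy paracomplete evaluator in which a non-denoting term yields a truth-value gap that Kleene's strong tables propagate. All names below (`atom`, `conj`, the model layout) are ours for illustration; Reconstructor's actual semantics may differ in detail.

```python
N = "N"  # third value: a truth-value gap caused by a denotation failure

def atom(model, pred, term):
    """Atomic sentence P(t): gap if the term has no denotation in the model."""
    obj = model["denotation"].get(term)   # None marks missing information
    if obj is None:
        return N
    return obj in model["extension"][pred]

def conj(v, w):
    """Strong Kleene conjunction: a false conjunct settles the value."""
    if v is False or w is False:
        return False
    if v == N or w == N:
        return N
    return True
```

For example, in a model where `"a"` denotes 1, `"b"` denotes nothing, and P's extension is `{1}`, `atom(model, "P", "a")` is true, `atom(model, "P", "b")` is the gap `N`, and `conj(False, N)` is still false, as in the paracomplete semantics the article discusses.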
Article
Full-text available
In this paper, we present an experiment with our randomized-hints strategy for automated reasoning, deriving Axiom (5) from Axioms (1)-(4) of infinite-valued Łukasiewicz logic. In the experiment, we randomly generated sets of hints with sizes ranging from 30 to 60 to guide the theorem prover's hyper-resolution-based search. We successfully found the most useful hints list (with 30 clauses) among 150 × 6 hints lists. We also discuss a curious non-linear increase in generated clauses when deducing Axiom (5) with our randomized-hints strategy.
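As a numerical sanity check (separate from the authors' hyper-resolution experiment), the axioms of infinite-valued Łukasiewicz logic are tautologies under the standard semantics v(A -> B) = min(1, 1 - a + b). The sketch below spot-checks Axiom (1), A -> (B -> A), over a grid of rational truth values.

```python
from fractions import Fraction

def imp(a, b):
    """Łukasiewicz implication on truth values in [0, 1]."""
    return min(Fraction(1), 1 - a + b)

# Exact rationals avoid floating-point noise in the equality check.
grid = [Fraction(i, 20) for i in range(21)]

# Axiom (1): A -> (B -> A) should evaluate to 1 for every assignment.
axiom1 = all(imp(a, imp(b, a)) == 1 for a in grid for b in grid)
```

The check succeeds for every pair on the grid, matching the tautology status the axiomatization assumes; a resolution prover works on the syntactic side of the same system.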
Chapter
The paper provides a brief overview of modern applications of multi-valued logic models, where the design of heterogeneous computing systems with small computing units based on three-valued logic gives a mathematically better and more effective solution compared to binary models. Such applications require circuits composed of chipsets whose operation is based on three-valued logic. To implement such schemes, a fundamentally important theoretical problem must be solved: the problem of completeness of classes of functions of three-valued logic. From a practical point of view, the completeness of the classes of such functions ensures that circuits with the desired operations can be produced from an arbitrary (finite) set of chipsets. In this paper, a closure operator on the set of functions of three-valued logic that strengthens the usual substitution operator is considered. It is shown that it is possible to recover the sublattice of closed classes in the general case of closure of functions with respect to the classical superposition operator. The problem of the lattice of closed classes for the class of functions \(T_2\) preserving two is considered. The paper also considers the closure operator \(\mathcal{R}_1\), under which functions that differ only by dummy variables are equivalent. A lattice is constructed for the closed subclasses in \(T_2 = \{f \mid f(2, \ldots , 2) = 2\}\), the class of functions preserving two.
Keywords: Three-valued logic application, Three-valued logic, Closure operator, Lattice structure, Closed subclasses, Substitution operator
Article
Full-text available
This paper provides a brief overview of modern applications of nonbinary logic models, where the design of heterogeneous computing systems with small computing units based on three-valued logic produces a mathematically better and more effective solution compared to binary models. For application, it is necessary to implement circuits composed of chipsets whose operation is based on three-valued logic. To implement such schemes, a fundamentally important theoretical problem must be solved: the problem of completeness of classes of functions of three-valued logic. From a practical point of view, the completeness of the class of such functions ensures that circuits with the desired operations can be produced from an arbitrary (finite) set of chipsets. In this paper, the closure operator on the set of functions of three-valued logic that strengthens the usual substitution operator is considered. It is shown that it is possible to recover the sublattice of closed classes in the general case of closure of functions with respect to the classical superposition operator. The problem of the lattice of closed classes for the class of functions T2 preserving two is considered. The closure operator R1, under which functions that differ only by dummy variables are considered equivalent, is within the scope of interest of this paper. A lattice is constructed for closed subclasses in T2 = {f | f(2,…,2) = 2}, the class of functions preserving two.
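The membership condition for the class T2 above is easy to state computationally: a function of three-valued logic (values 0, 1, 2) preserves two exactly when it maps the all-2 input to 2. The sketch below (function names are ours, chosen for the example) checks this for a few functions and for a superposition of members.

```python
def preserves_two(f, arity):
    """Membership test for T2: f preserves two iff f(2, ..., 2) = 2."""
    return f(*([2] * arity)) == 2

max3 = lambda x, y: max(x, y)   # max(2, 2) = 2, so max3 is in T2
const0 = lambda x: 0            # const0(2) = 0, so const0 is not in T2

# T2 is closed under superposition: feeding all-2 inputs through member
# functions yields 2s again, so the composite also maps (2, 2, 2) to 2.
comp = lambda x, y, z: max3(max3(x, y), z)
```

Closure under superposition is what makes T2 (and its closed subclasses) the right objects for the lattice construction the paper describes.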
Article
Full-text available
We start by presenting various ways to define and to talk about many-valued logic(s). We make the distinction between, on the one hand, the class of many-valued logics and, on the other hand, what we call “many-valuedness”: the meta-theory of many-valued logics and the related meta-theoretical framework that is useful for the study of any logical system. We point out that universal logic, considered as a general theory of logical systems, can be seen as an extension of many-valuedness. After a short history of many-valuedness, stressing that it has been present since the beginning of the history of logic in Ancient Greece, we discuss the distinction between dichotomy and polytomy and the possible reduction to bivalence. We then examine the relations between singularity and universality and the connection of many-valuedness with the universe of logical systems. In particular, we have a look at the interrelationship between modal logic, 3-valued logic and paraconsistent logic. We go on by dealing with philosophical aspects and discussing the applications of many-valuedness. We end with some personal recollections regarding Alexander Karpenko, from our first meeting in Ghent, Belgium in 1997, up to our last meeting in Saint Petersburg, Russia in 2016.
Book
A new proposal for integrating the employment of formal and empirical methods in the study of human reasoning. In Human Reasoning and Cognitive Science, Keith Stenning and Michiel van Lambalgen—a cognitive scientist and a logician—argue for the indispensability of modern mathematical logic to the study of human reasoning. Logic and cognition were once closely connected, they write, but were “divorced” in the past century; the psychology of deduction went from being central to the cognitive revolution to being the subject of widespread skepticism about whether human reasoning really happens outside the academy. Stenning and van Lambalgen argue that logic and reasoning have been separated because of a series of unwarranted assumptions about logic. Stenning and van Lambalgen contend that psychology cannot ignore processes of interpretation in which people, wittingly or unwittingly, frame problems for subsequent reasoning. The authors employ a neurally implementable defeasible logic for modeling part of this framing process, and show how it can be used to guide the design of experiments and interpret results. Bradford Books imprint
Article
This book argues that an adequate account of vagueness must involve degrees of truth. The basic idea of degrees of truth is that while some sentences are true and some are false, others possess intermediate truth values: they are truer than the false sentences, but not as true as the true ones. This idea is immediately appealing in the context of vagueness-yet it has fallen on hard times in the philosophical literature, with existing degree-theoretic treatments of vagueness facing apparently insuperable objections. The book seeks to turn the tide in favour of a degree-theoretic treatment of vagueness, by motivating and defending the basic idea that truth can come in degrees, by arguing that no theory of vagueness that does not countenance degrees of truth can be correct, and by developing a new degree-theoretic treatment of vagueness-fuzzy plurivaluationism-that solves the problems plaguing earlier degree theories.
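The idea of intermediate truth values can be made concrete with the standard min/max fuzzy connectives, a much simpler system than the book's fuzzy plurivaluationism, used here only to illustrate what "truer than the false sentences, but not as true as the true ones" amounts to. The example degree is our toy value.

```python
def neg(a):     return 1 - a
def conj(a, b): return min(a, b)
def disj(a, b): return max(a, b)

# A borderline sentence gets an intermediate degree: truer than a false
# sentence (0) but not as true as a true one (1).
tall = 0.6   # degree of "Alex is tall" for a borderline case (toy value)
assert 0 < tall < 1

# Classical laws weaken: excluded middle is not fully true at degree 0.6.
lem = disj(tall, neg(tall))   # max(0.6, 0.4) = 0.6, not 1
```

This is the pattern degree theories exploit: vague predicates induce intermediate degrees, and classically valid schemas like the excluded middle become only partially true on borderline cases.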