Universal (Meta-)Logical Reasoning: Recent Successes
Christoph Benzmüller a,b
a Freie Universität Berlin, FB Mathematik und Informatik, D-14195 Berlin, Germany
b Université du Luxembourg, FSTC, L-4365 Esch-sur-Alzette, Luxembourg
Abstract
Classical higher-order logic, when utilized as a meta-logic in which various other
(classical and non-classical) logics can be shallowly embedded, is suitable as a
foundation for the development of a universal logical reasoning engine. Such
an engine may be employed, as already envisioned by Leibniz, to support the
rigorous formalisation and deep logical analysis of rational arguments on the
computer. A respective universal logical reasoning framework is described in
this article and a range of successful first applications in philosophy, artificial
intelligence and mathematics are surveyed.
Keywords: computational metaphysics, classical higher-order logic,
non-classical logics, automated reasoning
1. Introduction
The quest for a universal reasoning framework is very prominently repre-
sented in the works of Leibniz. He envisioned a scientia generalis founded on
a characteristica universalis, that is, a universal formal language in which all
knowledge about the world and the sciences can be encoded. A quick study
of the survey literature on logical formalisms suggests that quite the opposite
to Leibniz’s dream has become reality. Instead of a characteristica universalis,
we are today facing a very rich and heterogeneous zoo of different logical sys-
tems, and instead of converging towards a single superior logic, this logic zoo is
further expanding, perhaps even at an accelerated pace. As a consequence, the
unified vision of Leibniz seems farther away than ever before. However, there
are also some promising initiatives to counteract these diverging developments,
and related works on unifying approaches to logic include categorical logic [1, 2],
algebraic logic [3] and coalgebraic logic [4, 5]. While some practical work has
been reported utilizing the algebraic logic approach [6, 7], these approaches
typically have had a strong emphasis on theory only.
This research was funded by the German National Research Foundation (DFG) under
Heisenberg grant BE 2501/9 (Studies in Computational Metaphysics) and by Volkswagen
Stiftung under grant CRAP (Consistent Rational Argumentation in Politics).
URL: http://christoph-benzmueller.de (Christoph Benzmüller)
Preprint submitted to Science of Computer Programming November 8, 2018
The solution presented here draws on another alternative at universal logical
reasoning: the shallow semantical embeddings (SSE) approach. This approach
has a very pragmatic motivation, foremost reuse of tools, simplicity and ele-
gance. It utilises classical higher-order logic (HOL) [8, 9] as a unifying meta-
logic in which the syntax and semantics of varying other logics can be explicitly
modeled and flexibly combined. Off-the-shelf interactive theorem provers (ITPs)
and automated theorem provers (ATPs) for HOL [10] can then be employed to
reason about and within the shallowly embedded logics.
This survey article summarises and reflects upon the main results of the
research project Studies in Computational Metaphysics (CompMeta), conducted
from 2012 to 2017 at Freie Universität Berlin and Stanford University. In this
project the SSE approach has been further developed and empirical studies
have been conducted in various disciplines, including philosophy, mathematics
and artificial intelligence. In philosophy/metaphysics, for example, an initial
focus has been on computer-supported assessments of modern variants of the
ontological argument for the existence of God, where the SSE approach has
been utilised in particular for automating variants of higher-order (multi-)modal
logics [11]. It is this sort of challenge application of expressive logical reasoning
that the SSE approach primarily addresses.
A most relevant aspect from the perspective of computer programming is
that the SSE approach in an elegant and theoretically well-founded manner
strives for maximal reuse of already existing theorem proving technology with
minimal coding effort. The prototype systems that have been implemented in
the course of CompMeta performed surprisingly well in all the conducted case
studies, which provides good evidence for the practical relevance of the approach.
Note that the performance of these implemented systems will (to some extent)
naturally advance in the future without much effort simply because the state-
of-the-art ATPs and satisfiability modulo theories (SMT) solvers they integrate
will further improve in regular cycles.
The article is structured as follows. Section 2 outlines the SSE approach
and discusses its application with a challenge puzzle in epistemic reasoning: the
wise men puzzle. The presented solution puts a particular emphasis on the
adequate modeling of common knowledge. Section 3 presents the motivation
and objectives of the CompMeta project, in which the SSE approach has been
further explored and empirically assessed. The main results of CompMeta are
subsequently summarised and discussed in Sect. 4. Section 5 concludes the
article.
2. The Shallow Semantical Embeddings Approach
HOL has its roots in the logic of Frege’s Begriffsschrift [12]. However, the
version of HOL as addressed here is a (simply) typed logic of functions, which
has been proposed by Church [8]. It provides lambda-notation, as an elegant
and useful means to denote unnamed functions, predicates and sets (by their
characteristic functions). Types in HOL eliminate paradoxes and inconsisten-
cies. Russell's paradox (the set of sets which do not contain themselves), for
example, which can be formalized in Frege’s logic, cannot be represented in
HOL due to type constraints. For more details and further references on HOL
and its automation we refer to the literature [9, 10]. Very relevant for the work
presented here is that the theory of HOL is well understood [13, 14] and that off-
the-shelf ATPs and ITPs for HOL exist, which can easily be reused. Respective
reasoning systems that are particularly relevant for the SSE approach include
the proof assistants Isabelle/HOL [15] and Coq [16], the ATPs LEO-II [17],
Leo-III [18] and Satallax [19], and the model finder Nitpick [20]. In the running
example discussed below the Isabelle/HOL proof assistant is used. There are
two main reasons for this choice: (i) The powerful graphical interface of Isa-
belle/HOL enables a particularly intuitive interaction with the SSE approach
in which the logic embeddings can be very elegantly displayed and edited, and
(ii) this system, via its Sledgehammer tool [21], integrates powerful first-order
(FO) ATPs and SMT solvers, including E [22], CVC4 [23], Z3 [24], SPASS [25]
and Vampire [26], and it also connects with the HOL ATPs Leo-II and Satallax.
This combination makes it a most suitable environment for conducting mixed
interactive and automated experiments with the SSE approach.
In the remainder of this section the SSE approach will be outlined with the
help of a prominent puzzle in epistemic reasoning: the wise men puzzle (cf. its
discussion in the literature [27, 28]). A particular emphasis and novelty in the
formalisation below is the adequate modeling of the common knowledge of a
set of agents, which is defined as the transitive closure of the agents' mutual
knowledge. While the adequate encoding of the notion of transitive closure
poses a challenge for inexpressive knowledge representation frameworks, we here
utilise and showcase a particularly short and elegant solution in HOL (a single
line of code).
2.1. Outline of the SSE Approach
Let L be an object logic of interest, for example, higher-order (HO) modal
logic, which amongst others has prominent applications in metaphysics. Since
our running example below requires the combination of modalities, we will in
fact work with a HO multi-modal logic (HOMML).
The overall idea of the SSE approach is to provide a lean and elegant equa-
tional theory which interprets the syntactical constituents of logic L (in our case
HOMML) as lambda-terms of the meta-logic HOL.
An encoding of HOMML in HOL is presented in Fig. 1. Unlike the traditional
translation approach [29], the connection between HOMML and HOL, i.e. the
equational theory defining the translation, is itself formalised in HOL. Moreover,
in contrast to a deep logical embedding, where the syntax and the semantics
of logic L would be formalised in full detail (using structural induction and
recursion), only the crucial differences in the semantics of both L and HOL are
directly addressed in the equational theory, while the commonalities are shared
between both logics. HOMML and HOL, for example, share the domain of
individuals. A crucial difference, however, lies in the possible world semantics
on the side of HOMML. Hence, the equational theory defining HOMML in HOL
provides an explicit modeling of this particular aspect of modal semantics.

Figure 1: Shallow semantical embedding of HOMML in HOL.

The central idea of this theory is to associate Boolean-valued formulas ϕ_o of HOMML
with world-predicates (truth-sets characterised as lambda-abstractions) ϕ_{i→o} in
HOL, where i stands for a reserved type for worlds (lines 5-7 in Fig. 1). The
predicate type i→o is abbreviated as σ and α stands for the type i→i→o of
accessibility relations.
To establish our mapping it essentially suffices to equate the classical logical
connectives of HOMML with corresponding world-lifted predicates and relations
in HOL. For example, in line 13 in Fig. 1 the HOMML conjunction is identified
with the lambda-term λϕ.λψ.λw. (ϕ w ∧ ψ w), such that ϕ ∧ ψ is mapped to the truth
set {w | ϕ w ∧ ψ w}, which is denoted in HOL by the lambda-term λw. (ϕ w ∧ ψ w).
The indexed modal operators □_r of HOMML are identified with lambda-terms
λϕ.λw. ∀v. (r w v → ϕ v), where the argument symbol r denotes an accessibility relation
between possible worlds. We may alternatively say that a parameterised, generic
□-operator is introduced here as λr.λϕ.λw. ∀v. (r w v → ϕ v), such that □_r ϕ is
mapped to the truth set λw. ∀v. (r w v → ϕ v) (line 21 in Fig. 1). This generic
□-operator can then be instantiated for concrete accessibility relations r as
required.
The mapping of constant symbols and variables from HOMML to HOL is
trivial, since only a type-lifting is required. Most importantly, the mapping
of HOMML to HOL can be defined by a set of non-recursive equations (in
fact, abbreviations), in which the dependency of HOMML formulas on possible
worlds is made explicit, while other aspects and parameters of its semantic
interpretation, such as the underlying semantic domains, remain shared between
HOMML and HOL.
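For illustration, the core of such an equational theory can be written down in a few lines of Isabelle/HOL. The following sketch merely mirrors the style of the definitions described above (Fig. 1 itself is a screenshot and is not reproduced here); the theory and constant names (HOMML_Sketch, mnot, mand, mimp, mbox) are chosen for presentation only and may differ from the actual source files.

    theory HOMML_Sketch imports Main
    begin

    typedecl i                          (* reserved type for possible worlds *)
    type_synonym σ = "i ⇒ bool"         (* world-lifted formulas, i.e. truth sets *)
    type_synonym α = "i ⇒ i ⇒ bool"     (* accessibility relations between worlds *)

    (* classical connectives, equated with world-lifted lambda-terms *)
    definition mnot :: "σ ⇒ σ"     where "mnot φ ≡ λw. ¬ φ w"
    definition mand :: "σ ⇒ σ ⇒ σ" where "mand φ ψ ≡ λw. φ w ∧ ψ w"
    definition mimp :: "σ ⇒ σ ⇒ σ" where "mimp φ ψ ≡ λw. φ w ⟶ ψ w"

    (* generic box operator, parameterised by an accessibility relation r *)
    definition mbox :: "α ⇒ σ ⇒ σ" where "mbox r φ ≡ λw. ∀v. r w v ⟶ φ v"

In the actual embedding, infix and binder notation is additionally introduced for such constants so that object-level formulas can be written in their familiar syntax.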
Another interesting and important aspect is that the SSE approach scales
well for FO and HO quantifiers. Analogous to the encoding of the proposi-
tional HOMML connectives, they can be introduced as simple abbreviations of
lambda-terms in HOL as well. The introduction of new binder mechanisms is
not required, since the already existing lambda-binder(s) in meta-logic HOL can
be elegantly reused. This is shown in line 17 in Fig. 1, where the HOL universal
quantifier Π of type (µ→o)→o is type-lifted to become a HOMML universal quantifier
Π of type (µ→(i→o))→(i→o). In line 18 convenient binder notation is then defined for
the lifted HOMML quantifier, so that we may write ∀x. φ x instead of the less
intuitive variant Π(λx. φ x). Type polymorphism is employed in lines 17-18 to
avoid the otherwise required enumeration of such quantifier-defining equations
for different argument types. The existential quantifier for HOMML is intro-
duced analogously in lines 19-20.
The final step is to provide a notion of validity for the type-lifted HOMML
formulas in HOL (see line 25 in Fig. 1): a type-lifted formula ϕ is valid, denoted
here as ⌊ϕ⌋, if and only if the application of ϕ to w holds for all worlds w. In
addition, a notion of local validity, denoted here as ⌊ϕ⌋_cw, can be introduced:
⌊ϕ⌋_cw is true if and only if ϕ cw holds, where cw is an uninterpreted constant
symbol representing the current world.
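Continuing the sketch from above, the type-lifted quantifiers and the two validity notions can be stated as equally simple, non-recursive equations; once more, the identifiers are illustrative only and the binder notation of the actual embedding is merely hinted at in a comment.

    (* continuation of the HOMML_Sketch theory above *)

    (* type-lifted quantifiers, polymorphic in the type 'a of the bound variable *)
    definition mall :: "('a ⇒ σ) ⇒ σ" where "mall Φ ≡ λw. ∀x. Φ x w"
    definition mex  :: "('a ⇒ σ) ⇒ σ" where "mex Φ ≡ λw. ∃x. Φ x w"
    (* binder notation, i.e. writing ∀x. φ x for mall (λx. φ x), can be layered
       on top via Isabelle's binder/mixfix annotations, as described in the text *)

    (* global validity: truth in all worlds; local validity at a current world cw *)
    definition valid :: "σ ⇒ bool" where "valid φ ≡ ∀w. φ w"
    consts cw :: i
    definition validcw :: "σ ⇒ bool" where "validcw φ ≡ φ cw"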
The presented equations thus characterise a fragment of HOL which, modulo
the above sketched type-lifting, corresponds to HOMML. The faithfulness of this
correspondence, that is, its soundness and completeness with respect to Henkin
semantics, can be established with pen and paper methods [11].¹
2.2. Operations on (Accessibility) Relations, including Transitive Closure
Figure 2 presents some useful operations on (accessibility) relations. They
can e.g. be used to elegantly postulate an accessibility relation r to be reflexive,
transitive and euclidean, which are the semantic properties typically associated
with modal knowledge operators □_r (see also line 6 in Fig. 3). The expressivity
of HOL is particularly exploited in the single-line definition of the transitive clo-
sure operation tc in line 14 in Fig. 2, which expresses that two objects (worlds)
x and y are related in the transitive closure (tc R) of a relation R if and only
if they are related in all transitive super-relations Q of R. Utilising this defini-
tion, ATPs integrated with Isabelle/HOL via Sledgehammer can be employed
to prove some useful lemmata, including the transitivity of the transitive closure
of any relation R (line 21) and the symmetry of the transitive closure of any
symmetric relation R (lines 27-28).

1 Further work will investigate whether such faithfulness proofs can eventually be formalised
as well in the approach presented here or whether e.g. a deep embedding is required in this
case.

Figure 2: Operations on (Accessibility) Relations, including Transitive Closure.

In the lower window of the GUI in Fig. 2
we e.g. see that the FO ATPs E and SPASS quickly prove the latter lemma,
while the SMT solver CVC4 times out (the SMT solver Z3 is still running here).
Sledgehammer determines the exact dependencies for the proven conjecture and
it identifies a trusted proof tactic in Isabelle/HOL (here auto), which is capable
of reproving the lemma when the determined dependencies are preselected. The
reconstructed proof utilising the tactic auto is recognised by Isabelle’s inference
kernel. The lemma is then accepted as such by the system.
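To indicate how short such a theory is, the transitive-closure definition and the mentioned lemmata might look roughly as follows in a continuation of the sketch from Sect. 2.1. The property names are illustrative, and the proof hints are suggestions rather than a transcript of the actual Fig. 2 theory, in which the proofs were found by the ATPs invoked via Sledgehammer.

    (* properties of accessibility relations (illustrative names) *)
    definition reflexive  :: "α ⇒ bool" where "reflexive r ≡ ∀x. r x x"
    definition transitive :: "α ⇒ bool" where "transitive r ≡ ∀x y z. r x y ∧ r y z ⟶ r x z"
    definition euclidean  :: "α ⇒ bool" where "euclidean r ≡ ∀x y z. r x y ∧ r x z ⟶ r y z"
    definition symmetric  :: "α ⇒ bool" where "symmetric r ≡ ∀x y. r x y ⟶ r y x"

    (* transitive closure in a single line: x and y are related in tc R iff they
       are related in every transitive super-relation Q of R *)
    definition tc :: "α ⇒ α" where
      "tc R ≡ λx y. ∀Q. transitive Q ∧ (∀u v. R u v ⟶ Q u v) ⟶ Q x y"

    lemma "transitive (tc R)"
      unfolding tc_def transitive_def by blast

    lemma "symmetric R ⟹ symmetric (tc R)"
      oops  (* provable; in the reported experiments E and SPASS find a proof via Sledgehammer *)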
2.3. Example Application: Wise Men Puzzle
The wise men puzzle, a famous logic riddle whose formalisation has been
studied in some detail e.g. by Baldoni [27], is as follows: Once upon a time, a
king wanted to find the wisest out of his three wisest men. He arranged them
in a circle and told them that he would put a white or a black spot on their
foreheads and that one of the three spots would certainly be white. The three
wise men could see and hear each other but, of course, they could not see their
faces reflected anywhere. The king then asked each of them to find out the
color of his own spot. After a while, the wisest correctly answered that his spot
was white.
An encoding of this epistemic puzzle scenario utilising the SSE approach is
presented in Fig. 3. As an improvement over related work and also over our own pre-
vious experiments [30], an adequate modeling of mutual knowledge and common
knowledge is provided in Fig. 3 by following the suggestions of Sergot [31]. The
key idea is to model the knowledge of each wise man, say a, with the help of an
indexed KT45 (=S5) modal operator □_a. We thus introduce three accessibility
relations a, b and c (see line 3 in Fig. 3) and instantiate the generic □-operator
from Fig. 1 accordingly to obtain the indexed knowledge operators □_a, □_b and
□_c, one for each wise man (cf. their uses in lines 26, 28 and 30). The accessibility
relations a, b and c are constrained in line 6 to obey reflexivity, transitivity and
euclideanness. This ensures that □_a, □_b and □_c are KT45 knowledge operators
as intended. Following Sergot, the mutual knowledge of the wise men a, b and
c is introduced next by defining a relation Eabc as the union of the accessibility
relations a, b and c (in line 8). However, the corresponding □_Eabc-operator
does not yet qualify as an operator for common knowledge, since it may fail to
be transitive. Hence, another relation Cabc is introduced as the transitive clo-
sure of Eabc. The ATPs integrated with Isabelle/HOL confirm (in lines 12-14)
that Cabc is reflexive, transitive and euclidean, which means that □_Cabc is a
suitable encoding of the common knowledge of the wise men a, b and c.
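In the style of the sketches from Sects. 2.1 and 2.2, this construction might be written roughly as follows; the identifiers are again illustrative, and the actual encoding, including the confirmation lemmata of lines 12-14, is the one shown in Fig. 3.

    (* one accessibility relation per wise man, constrained to KT45 (S5) *)
    consts a :: α
           b :: α
           c :: α
    axiomatization where
      kt45: "reflexive a ∧ transitive a ∧ euclidean a ∧
             reflexive b ∧ transitive b ∧ euclidean b ∧
             reflexive c ∧ transitive c ∧ euclidean c"

    (* mutual knowledge: union of the individual accessibility relations *)
    definition Eabc :: α where "Eabc ≡ λx y. a x y ∨ b x y ∨ c x y"

    (* common knowledge: transitive closure of mutual knowledge *)
    definition Cabc :: α where "Cabc ≡ tc Eabc"

    (* the knowledge operators are then mbox a, mbox b and mbox c for the
       individual agents, and mbox Cabc for their common knowledge *)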
The formalisation of the epistemic puzzle scenario is continued in lines 16-30 in Fig. 3.²

2 This encoding still abstracts from the temporal dynamics of the scenario, and the adequate
inclusion of such aspects, for example by adopting and integrating a semantic embedding of
dynamic epistemic logic [32], is still ongoing work. The formalisation presented here neverthe-
less already elegantly demonstrates some core advantages of the SSE approach, including the
already mentioned appropriate modeling of common knowledge based on the transitive closure
of mutual knowledge.

Figure 3: Wise Men Puzzle.

In addition to the already declared constant symbols a, b and c, which denote the
epistemic accessibility relations of the wise men in the scenario, two further
uninterpreted constant symbols are introduced in line 16
of Fig. 3. Predicate wise is used to identify and denote the set of wise men in
the scenario (see line 18), and the predicate ws expresses whether a wise man
has a white spot. Line 20 states that it is common knowledge of the wise men
a, b and c that at least one of them has a white spot. Line 22 (respectively, line
24) then postulates that it is common knowledge that if one wise man has a
white spot (respectively, not a white spot), then the other wise men see and
thus know this. This information, which is implicit background knowledge that
is not explicitly stated in the puzzle itself, is nevertheless relevant for solving it.
Note in particular how the exploitation of the meta-logic HOL here avoids the
otherwise required duplication of these axioms for different combinations of x
and y in the object logic HOMML. Lines 26 and 28 encode the information that
the first two wise men who are asked by the king (they are called a and b here) do
not know whether they have a white spot. This again is postulated as common
knowledge of the wise men. Then, in line 30, the theorem is formulated that
the third wise man c now knows that he has a white spot. This theorem can be
proven by the ATPs integrated with Isabelle/HOL via the Sledgehammer tool.
The lower part of the GUI window in Fig. 3 shows that the FO ATPs E and
SPASS succeed, and so does the SMT solver CVC4 (while Z3 is still running).
Moreover, in line 31, a consistency check for the entire formalisation of the
puzzle scenario is performed: the model finder Nitpick computes and presents a
model (not displayed here) that satisfies the presented axioms and definitions.
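A simplified and deliberately incomplete rendering of the remaining axioms and of the theorem is sketched below; it omits, for instance, the declaration of which agents are wise and pairwise distinct, and the operator mor and the axiom names are introduced here purely for illustration. The complete encoding is the one shown in Fig. 3.

    consts wise :: "α ⇒ σ"      (* the set of wise men *)
           ws   :: "α ⇒ σ"      (* "has a white spot" *)

    definition mor :: "σ ⇒ σ ⇒ σ" where "mor φ ψ ≡ λw. φ w ∨ ψ w"

    axiomatization where
      (* common knowledge: at least one of a, b and c has a white spot *)
      WM1: "valid (mbox Cabc (mor (ws a) (mor (ws b) (ws c))))" and
      (* common knowledge: any other wise man sees, and hence knows, that a wise
         man has a white spot (positive case; the negative case is analogous) *)
      WM2: "valid (mbox Cabc (mall (λx. mall (λy.
              mimp (λw. wise x w ∧ wise y w ∧ x ≠ y)
                   (mimp (ws x) (mbox y (ws x)))))))" and
      (* a and b announce that they do not know whether they have a white spot *)
      WM3: "valid (mbox Cabc (mnot (mbox a (ws a))))" and
      WM4: "valid (mbox Cabc (mnot (mbox b (ws b))))"

    theorem "valid (mbox c (ws c))"
      oops  (* in the complete Fig. 3 encoding this theorem is proved automatically
               via Sledgehammer; a Nitpick call such as nitpick [satisfy, user_axioms]
               additionally confirms the consistency of the axioms *)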
Note the elegance and minimal effort with which an integrated interactive
and automated theorem proving environment for HOMML has been imple-
mented in Figures 1-3 on top of an existing theorem proving infrastructure
for HOL. In fact, the entire implementation of HOMML and its application to
the wise men puzzle did not require more than 94 lines of Isabelle/HOL code,
including commentary. And still, a good degree of automation is achieved,
which significantly benefits from the existing ATPs and SMT solvers already
integrated with Isabelle/HOL via the Sledgehammer tool.
After this brief illustration of the SSE approach we now turn attention to
the CompMeta project and briefly discuss its original motivation, its objectives,
and some relevant preceding work.
3. Objectives of the CompMeta Project and Preliminary Work
The CompMeta project has its intellectual roots in the author’s work, con-
ducted with colleagues since the mid-nineties, on the theory and practice of HO
theorem proving (cf. [14, 33, 34] and the references therein), on HO proof assis-
tants (cf. [35, 36] and the references therein), and on their applications in mathe-
matics, artificial intelligence and education (e.g. [37, 38, 39]). These research ac-
tivities inspired first experiments towards the development of a universal (meta-)
logical reasoning framework based on the SSE approach [40, 41, 30, 11]. The
core motivation for the CompMeta project has been to further consolidate these
initial ideas and to assess the approach in empirical studies. The main objectives
thus included
1. to further explore the theoretical foundation of the SSE approach,
2. to exemplarily implement the approach for a range of challenge logics in
existing ATPs and ITPs for HOL,
3. to provide evidence for its universal logical reasoning capabilities within
exemplary case studies in metaphysics and beyond,
4. and to educate a new generation of students and researchers to master the
SSE approach.
Regarding (1) it was planned to study the faithfulness of the embedding of
further challenging quantified non-classical logics in HOL. The hypothesis has
been that in all cases the faithfulness, i.e. soundness and completeness of the
embedding, can be shown when a notion of Henkin semantics is assumed (on
the side of both logics). Regarding (2), a close collaboration with the projects
Leo-II and Leo-III, running in parallel at Freie Universität Berlin, was foreseen,
in addition to the use of the proof assistants Isabelle/HOL and Coq. With re-
spect to (3), a focus has been on applications in theoretical philosophy, resp. in
metaphysics, since there is a particular need for very expressive non-classical
logics in this area. For example, hyper-intensional second order modal logic is
utilised as the starting point in Zalta’s Principia Logico-Metaphysica [42], and
similarly expressive logics are studied in prominent recent textbooks by Stal-
naker [43] and Williamson [44]. Unfortunately, however, there had not been
any attempts prior to the CompMeta project to implement and automate such
challenging logic formalisms in computer systems. Mainstream knowledge rep-
resentation formalisms in computer science and artificial intelligence, including
e.g. semantic web technologies, typically fail to deliver (not only) in this ap-
plication context due to their lack of expressivity. The CompMeta project, in
contrast, intended to address this gap and to contribute to the pioneering of
the new area of computational metaphysics, which has its roots in the work of
Zalta and colleagues at Stanford University [45, 46, 47]. For the appropriate
modeling of foundational philosophical ontologies and for the formal analysis of
challenge arguments in philosophy (and beyond) a suitably expressive modeling
and reasoning framework was obviously required. With respect to (4), the goal
has been to design and offer an interdisciplinary lecture course on computational
metaphysics, in which the active use of the CompMeta framework was intended
to play a central role in combination with the training of a new generation of
students to independently master the approach.
4. Results of the CompMeta Project
The main results and highlights of the CompMeta project are summarised
in this section and references are given to the most important publications
stemming from the project.
4.1. Application Study I: Ontological Argument for the Existence of God
Different modern variants of the ontological argument for the existence of
God, one of the still vividly debated masterpiece arguments in metaphysics
(see e.g. Sobel’s textbook [48] and the references therein), have been rigorously
analysed on the computer in the course of CompMeta. These contributions,
many of which were achieved in close collaboration with Bruno Woltzenlogel-
Paleo, received media attention on a global scale.³
In the course of the conducted experiments [49, 50, 51, 52], the theorem
prover Leo-II detected a previously unknown inconsistency in Kurt Gödel's
prominent, HO modal logic variant [53] of the ontological argument, while Dana
Scott’s amendment [54] of it was verified for logical soundness in the interactive
proof assistants Isabelle/HOL [15] and Coq [55].⁴ In Fig. 4 the axioms causing
the inconsistency in Gödel's manuscript are highlighted (see also the discussion
in Sect. 4.6.1). This inconsistency, which was missed by philosophers, is ex-
plained in detail in two conference papers [51, 50]. Further relevant insights
contributed or confirmed by ATPs e.g. include the separation of relevant from
irrelevant axioms, the determination of mandatory properties of modalities, and
undesired side-implications of the axioms such as the modal collapse.⁵
Further variants of Gödel's axioms were proposed by Anderson, Hájek and
Bjørdal [57, 58, 59, 60, 61, 62]. These variants have meanwhile also been for-
mally analysed, and ATPs have even contributed to the clarification of an un-
settled philosophical dispute between Anderson and Hájek [63]. In the course
of this work, different notions of quantification (actualist and possibilist) have
been utilised and combined within the semantical embedding approach [64].
Moreover, the modal collapse, whose avoidance has been the key motivation for
the contributions of Anderson, Bjørdal and Hájek (and many others), has been
further investigated [65].
3 See e.g. http://www.spiegel.de/international/germany/scientists-use-computer-to-mathematically-prove-goedel-god-theorem-a-928668.html

4 Scott was not aware of the inconsistency in Gödel's variant. Amongst others, he slightly
modified Gödel's definition of essence, which causes the inconsistency. Scott did so because
it felt natural to him to require that essential properties of an individual should actually be
possessed by that individual, and so he added a respective conjunct to the definition; cf. also
[50, 51].

5 The modal collapse [56, 48] is a sort of constricted inconsistency at the level of possible
world semantics. The assumption that there may actually be more than one possible world
is refuted; this follows from Gödel's axioms as the ATPs quickly confirm. In other words,
Gödel's axioms, as a side-effect, imply that everything is determined (we may even say: that
there is no free will).

Figure 4: The axioms causing the inconsistency in Gödel's modal logic variant of the ontologi-
cal argument for the existence of God are highlighted in blue. The inconsistency was detected
by the HO ATP Leo-II. (Disclaimer: Unpublished works of Kurt Gödel are Copyright Insti-
tute for Advanced Study and are used with permission. All rights reserved by Institute for
Advanced Study)

A significant further contribution has been achieved by David Fuenmayor,
a philosophy student recruited from the computational metaphysics lecture
course at Freie Universität Berlin (see Section 4.5). Fuenmayor, in a student
project [66, 67], formalised the most relevant parts of Fitting’s [68] textbook
Types, Tableaus, and Gödel's God. This book develops another interesting
emendation of the ontological argument, which — similar to other recent works
— aims at preserving the overall conclusion (necessary existence of God), while
at the same time getting rid of the modal collapse. Fitting’s means to achieve
this is by modifying the foundational logical system. Instead of an extensional
HO modal logic he employs a more expressive intensional HO modal logic, which
enables a different, and as Fitting explains, more adequate interpretation of
e.g. the notion of positive properties in Gödel's argument.
The studies mentioned so far only address a small portion of the entire
relevant literature on the ontological argument. By extending these studies, it
can be expected that many further issues in human-refereed contributions can
be revealed. A follow-up project could thus try to develop an encompassing
map that rigorously distinguishes sound from unsound work in this area.
Summary of key insights. Variants of extensional and intensional HO modal
logics can easily be implemented in the SSE approach; a very good degree of
proof automation can be achieved this way, matching or exceeding the argumen-
tation granularity we typically find in human authored publications on this sub-
ject; flexible logic modifications and combinations are supported; the approach
is practically highly useful and it combines automated theorem proving with
model and countermodel finding (the latter well supports the detection of typos
and minor issues during the formalisation process); it has been demonstrated
how the approach supports a novel, experimental style of work in metaphysics.
4.2. Application Study II: Zalta’s Principia Logico-Metaphysica
Formalising and automating masterpiece rational arguments in philosophy
with the SSE approach on the computer is not trivial. However, it still leads
to comparably small corpora of axioms, lemmata and theorems, and, hence,
it does not provide reliable feedback on the scalability of the approach for
larger and more ambitious formalisations. For that reason another challenge
has been tackled in the CompMeta project: the Principia Logico-Metaphysica
(PLM) of Edward Zalta [42], which aims at a foundational logical theory for
metaphysics, mathematics and the sciences (PLM thus intends to subsume the
Principia Mathematica [69]). Zalta has chosen a hyper-intensional, relational
second-order modal logic S5 [70, 71] as the foundational logic for PLM. It has
thus been a challenge question for CompMeta, whether this non-trivial foun-
dational logic can still be suitably encoded and automated in the semantical
embedding approach. Besides hyper-intensionality, a particular challenge has
been to overcome the conceptional gap between the relational core of PLM
and the functional core of HOL, and to suitably handle the different strengths
of comprehension principles supported in both logics that assert the existence
of relations and functions (the use of unrestricted comprehension principles in
PLM causes undesirable paradoxes and inconsistencies [72]). And, of course, a
main challenge has also been to deal with the comparably large size of PLM
in relation to the small axiom sets as studied in the context of the ontological
argument.
The author’s initial attempts, conducted during an extended research stay
at Stanford University in 2015/16, to semantically embed PLM’s base logic in
HOL by following a purely proof-theoretic approach were unsuccessful. Later in
2016, in the course of the computational metaphysics lecture course in summer
2016 at Freie Universität Berlin (see below), Zalta in an invited presentation
then outlined some ideas towards a set theoretical semantics for PLM, which
were suggested to him by Peter Aczel. This set theoretic perspective on PLM
subsequently enabled the development of a suitable shallow semantical embed-
ding of PLM in HOL. It was in fact Daniel Kirchner, a mathematics student
recruited from the lecture course, who took on the challenge within an MSc thesis
project at Freie Universität Berlin. Kirchner has meanwhile succeeded in for-
malising the PLM in Isabelle/HOL by suitably adapting the SSE approach so
that it soundly covers the base logic of PLM [73].
Kirchner’s work contributes various novel ideas and tools, including the pro-
vision of powerful automation means for PLM at different, cross-linked levels of
abstraction. For example, he developed a direct, tactic-based theorem prover
for PLM in Isabelle/HOL, which, one-to-one, implements the proof theory of
PLM as developed by Zalta with pen and paper. This object-level theorem
prover for PLM is connected with the HOL meta-level in Kirchner’s work via the
specifically tailored shallow semantical embedding he developed, and this link
establishes an Isabelle/HOL-internal criterion, modulo expansion of the seman-
tical embedding, for the soundness of his novel prover. Further, similar provers
are provided by him at well-defined, intermediate expansion levels. Kirchner’s
architecture thus provides multiple options for proof automation, ranging from
the full expansion of the semantical embedding (combined with calls to off-the-
shelf reasoning tools integrated with Isabelle/HOL via the Sledgehammer tool)
to the more intuitive, one-to-one automation of the proof theory of PLM within
Kirchner’s new tactic-based theorem prover.
An unexpected but key result of Kirchner's work has been the discovery
of a paradox in PLM [74, 73] (in the spirit of Russell's paradox [75] for Frege's
logic of the Begriffsschrift [76]): a deeply-rooted and known paradox is reintro-
duced in PLM, respectively, in the abstract object theory underlying the PLM,
when the logic of complex terms is simply adjoined to the framework's specially
formulated comprehension principle for relations. Kirchner's result constitutes
a new and important paradox, given how much expressive and analytic power
is contributed by having the two kinds of complex terms in the system. The
results also provide a fresh perspective on the question of whether relational
type theory or functional type theory better serves as a foundation for logic and
metaphysics [72].
In close collaboration and supported by further experiments with Isa-
belle/HOL, possible emendations of PLM are currently being studied by Zalta
and Kirchner. The ongoing style of interaction illustrates well a new dynamic
in the scientific discovery process in metaphysics: rigorous experimentation
with implementations of foundational logical systems may quicken and inspire
the scientific discovery process in this area and also foster more reliable results.
Summary of key insights. The semantic embedding approach scales for ambi-
tious and large projects in metaphysics such as PLM; the approach is practically
applicable and already shows a good degree of automation, which will naturally
further improve (with the ATPs it relies upon); with the help of the implemented
framework new knowledge has been contributed; moreover, students can be well
motivated when using the approach to dive into complex, foundational questions
on the edge of current research in metaphysics in short time.
4.3. Application Study III: Free Logic and Axiom Systems for Category Theory
Partiality and undefinedness are prominent challenges in various areas of
mathematics and computer science. Unfortunately, however, modern proof as-
sistant systems and ATPs based on traditional classical or intuitionistic log-
ics provide rather inadequate support for these challenge concepts. Free logic
[77, 78, 79, 80] offers a theoretically appealing solution, but it has been consid-
ered rather unsuited for practical utilisation.
In collaboration with Dana Scott, a shallow embedding of free logic in HOL
has been developed and implemented in the CompMeta project. Just as for
the embeddings mentioned above, various state-of-the-art FO and HO ATPs
and model finders, which are integrated (modulo suitable logic translations)
with Isabelle/HOL via the Sledgehammer tool, can now be utilised to automate
reasoning in free logic. As a result we obtain an elegant and powerful imple-
mentation of an integrated interactive-automated theorem proving (and model
finding) environment for free logic.
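A minimal sketch of such an embedding, kept deliberately simple and with identifiers chosen here for illustration (the published theory files differ in details, e.g. they additionally treat a distinguished undefined object and free definite descriptions):

    theory FreeLogic_Sketch imports Main
    begin

    typedecl i                      (* raw domain, possibly containing "undefined",
                                       i.e. non-existing, objects *)
    consts existing :: "i ⇒ bool"   (* existence/definedness predicate, often written E! *)

    (* free quantifiers range only over existing objects *)
    definition fAll :: "(i ⇒ bool) ⇒ bool" where "fAll Φ ≡ ∀x. existing x ⟶ Φ x"
    definition fEx  :: "(i ⇒ bool) ⇒ bool" where "fEx Φ ≡ ∃x. existing x ∧ Φ x"

    end

On top of such a basis, a partial operation such as morphism composition in category theory can be modelled as an ordinary total HOL function whose value is simply not required to satisfy the existence predicate.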
To demonstrate the practical relevance of this new system, a series of axiom
systems for category theory has been systematically explored [81, 82, 83]. The
starting point has been a generalisation of the standard axioms for a monoid
to a partial composition operation. The purpose of this work has not been
to make or claim any contribution to category theory but rather to show how
formalisations involving the kind of logic required, in this case free logic, can be
implemented and validated within modern proof assistants such as Isabelle/HOL
when utilising the SSE approach.
Subsequently, the relation of the developed axiom systems to alternative pro-
posals from the literature has been studied within the framework. This includes
an axiom set proposed by Freyd and Scedrov in their textbook Categories, Al-
legories [84], for which we have revealed a technical flaw: either all operations,
e.g. morphism composition, are total in their theory or their axiom system is in-
consistent. This observation applies when a free logic reading of their axiomatic
theory is adopted, where the free variables are assumed to range over all objects,
including the “undefined”. When adopting an algebraic reading of their axiom
system, where free variables range only over defined objects, then strictness
axioms or conditions are missing in their framework. Both readings have been
formalised in the SSE approach.
Thus, in interaction with the SSE based implementation of free logic in
Isabelle/HOL, a minor (one may say technical) but nevertheless relevant issue
in a mathematics textbook has been revealed that domain experts had missed
before. The repair for this problem is quite straightforward, however. The
solution essentially corresponds to a set of axioms proposed by Scott [85] in the
1970s.
In the studies reported above, the exploration has been significantly
supported by series of experiments in which automated reasoning tools
were called from within the proof assistant Isabelle/HOL via the Sledgehammer
tool. Moreover, very useful feedback was obtained at various stages from the
model finder Nitpick [20], saving us from making several mistakes.
At the conceptual level this work exemplifies a new style of explorative
mathematics which rests on fruitful human-machine interaction with integrated
interactive-automated theorem proving technology. The conducted experiments
were such that the required reasoning was often too tedious and time-consuming
for humans to be carried out repeatedly with the highest level of precision. It is
here where cycles of formalisation and experimentation efforts in Isabelle/HOL
provided significant support. Moreover, the technical inconsistency issue for
the axiom system of Freyd and Scedrov was discovered by ATPs, which further
emphasises the added value of automated theorem proving in this area.
Summary of key insights. The SSE approach is applicable also to free and in-
clusive logics, which so far were believed to be too difficult to automate and thus
of little practical relevance; quite to the contrary: as our experiments show, the
approach is indeed well suited for practical applications, e.g. for the exploration
of mathematical theories in domains such as category theory, where partiality
and undefinedness play a central role; new knowledge can be discovered this
way.
4.4. Theoretical Study IV: Universal Cut-Elimination
The development of cut-free calculi for expressive logics, such as quantified
non-classical logics, is usually a non-trivial task. However, for a wide range of
logics there exists a surprisingly elegant and uniform solution: simply utilise
the SSE approach. More precisely, by modeling and studying these logics as
semantically embedded fragments of HOL (with Henkin semantics), existing
cut-elimination results for HOL (with Henkin semantics) may be reused. In the
course of the CompMeta project, this idea has been further explored and exem-
plarily applied for proving cut-elimination for quantified conditional logics [86].
Conditional logics [87, 88], known also as logics of normality or typicality,
have many applications, including counterfactual reasoning, default reasoning,
deontic reasoning, metaphysical modeling, action planning and reasoning about
knowledge. Moreover, it is well known that they subsume normal modal logics,
since the modal box operator can be defined in terms of the more expressive
conditional operator. In contrast to the rather straightforward, Kripke-style
semantics of normal modal logics, conditional logics come e.g. with a HO selec-
tion function semantics, which makes them interesting objects of study. While
there is broad literature on propositional conditional logics, comparably few
authors have addressed FO extensions of conditional logics; those include Del-
grande [89, 90] and Friedman et al. [91].
The conditional logics studied in the course of the CompMeta project utilise
constant- and/or varying-domain FO quantifiers and they combine these with
further quantifiers for propositional variables. Such a rich combination has not
been addressed in the literature before. In particular, cut-elimination for these
logic(s) was still open (only for propositional conditional logics some related
results had been available [92, 93]; cf. also the references therein).
While earlier, practical work in the CompMeta project had already shown
that automation of quantified conditional logics is indeed feasible by utilising
the SSE approach [94], the second half of the project then switched the atten-
tion to the theoretical challenge of proving cut-elimination. It was then shown
[86] that, by utilising the SSE approach for quantified (and non-quantified) con-
ditional logics, the question whether cut-elimination holds for them can in fact
be reduced to proving the faithfulness of their semantical embedding in HOL.
The latter task, however, constitutes a much simpler problem than proving
cut-elimination directly.
The exploited reduction principle is similarly applicable to other object logics
in the SSE approach, including many logics for which cut-elimination is still
open. However, special attention has to be paid to cut-simulation [86, 95],
which may render cut-elimination a pointless criterion.
Summary of key insights. Cut-elimination of a given object logic can often be reduced to
showing the faithfulness of a shallow semantical embedding of this logic in HOL
(with Henkin semantics); the approach has been applied to prove cut-elimination
for some variants of quantified conditional logics, for which the question was still
open; it should be possible to obtain similar cut-elimination results for many
other challenging object logics by adopting the same reduction principle.
4.5. Educational Study V: Lecture Course on Computational Metaphysics
The early successes in the CompMeta project inspired the design of a new lecture
course on computational metaphysics [96, 97], the first of its kind worldwide. This lecture
course, which was set up and held in collaboration with Alexander Steen and
Max Wisniewski, was awarded the 2015/16 central teaching award of
Freie Universität Berlin.⁶ The course received substantial support from Jasmin
Blanchette (Amsterdam), Wolfgang Lenzen (Osnabrück), Bruno Woltzenlogel-
Paleo (Canberra) and Edward Zalta (Stanford), who all contributed invited
guest lectures.
Students with heterogeneous knowledge backgrounds from computer science,
mathematics, philosophy and physics attended the lecture course, and they came
from all three major universities of Berlin: Freie Universität Berlin, Technical
University Berlin and Humboldt University Berlin.

6 cf. http://www.fu-berlin.de/campusleben/lernen-und-lehren/2016/160428-lehrpreis/

The attendance in the lec-
tures usually varied between 40 and 70 students. 36 students were formally
registered for the course and were graded.
The steep learning curves of nearly all students were astonishing, in partic-
ular in the second half of the course, when small, heterogeneous student groups
were formed to work each on an encoding and formal assessment of a different
publication in philosophy or mathematics by adapting and utilising the SSE
approach within the Isabelle/HOL proof assistant. The heterogeneous group
compositions, the 24/7 feedback from the Isabelle/HOL environment, and the
motivating project topics were prime reasons, as the author believes, for the very
good overall results of the course. A selection of project results has meanwhile
been presented at conferences or published as book chapters or journal articles
[98, 66, 74, 73, 99, 100, 67, 101]. Several students picked up follow-up topics
and turned them into BSc or MSc thesis projects [102, 103, 104, 105, 106].
A key ingredient for the successful implementation of the course has been
that a single methodology and overall technique (the SSE approach) was used
throughout, enabling the students to quickly adopt a wide range of different logic
variants within a single proof assistant framework (Isabelle/HOL).
The interdisciplinary course concept appears well suited to foster a much im-
proved logic education across disciplines.
Summary of key insights. The SSE approach is well suited to support a novel
form of university level logic education to heterogeneous groups of students;
excellent learning curves are possible; new teaching methods are enabled in
interaction with ITPs and ATPs for HOL.
4.6. Further Results and Comments
A range of related application studies (contributed partly also by collabora-
tors) has not been mentioned above. Amongst others, these works include se-
mantical embeddings of multivalued logic SIXTEEN [107], nominal logics [108],
temporal logics [109], paraconsistent logics, intuitionistic modal logics, etc. Also
the (partly ongoing) work of Streit, another student recruited from the compu-
tational metaphysics lecture course, on the formalisation of Boolos’ textbook
on provability logic [110] and on the formalisation of Bostrom’s simulation ar-
gument [111] has not been addressed above.
A relevant and challenging future application direction of the semantical
embedding approach lies in the modeling of legal, ethical, social and cultural
norms in intelligent machines [112]. To enable such applications, the author is
currently, in a collaboration with Leon van der Torre and Xavier Parent from
the University of Luxembourg, adapting the semantical embedding approach to
cover recent developments in the area of deontic logics [113]. Standard deon-
tic logic, which is just a normal modal logic, is obviously already covered by
the approach. More challenging has been the semantical embedding of e.g. in-
put/output logic [114] and dyadic deontic logic [115]. First results in this ap-
plication direction are promising [116, 117, 118, 119].
4.6.1. Note on Invention and Creativity in ATPs
As reported above, the theorem prover Leo-II detected the inconsistency of
the axioms in Gödel's original variant of the ontological argument; this incon-
sistency was not known to philosophers before. The clue in the proof of falsity
from the axioms [51, 50] is the empty essence lemma: from Gödel's [53] def-
inition of essential properties (essence, cf. Ess. in Fig. 4) it follows that the
empty property, i.e., the everywhere false property (alternatively we may pick
the property of being self-different), is an essential property of every individual.
Dana Scott [54] slightly modified Gödel's definition of essence in his variant
of the ontological argument (for cosmetic reasons — the inconsistency was not
known to him at the time), with the effect that the empty essence lemma is no
longer valid.
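For illustration, the empty essence lemma can be stated in the style of the modal-logic sketch from Sect. 2.1; the names below are illustrative and the actual formalisations [50, 51] differ in details.

    (* continuation of the HOMML_Sketch theory from Sect. 2.1; μ is a type of
       individuals and r a fixed accessibility relation *)
    typedecl μ
    consts r :: α

    (* Gödel's definition of essence, without Scott's additional conjunct φ x:
       φ is an essence of x iff every property ψ that x has is necessarily
       implied by φ *)
    definition ess :: "(μ ⇒ σ) ⇒ μ ⇒ σ" where
      "ess φ x ≡ mall (λψ. mimp (ψ x) (mbox r (mall (λy. mimp (φ y) (ψ y)))))"

    (* empty essence lemma: the everywhere-false property is an essence of every
       individual, since the necessitated implication is vacuously true *)
    lemma empty_essence: "valid (mall (λx. ess (λy. λw. False) x))"
      unfolding ess_def mall_def mimp_def mbox_def valid_def by simp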
In its successful, automatic discovery of the inconsistency, the Leo-II prover
had to guess the instantiation of the empty property for a second-order variable
during proof search [51, 50]. This part in the proof is non-analytic: inspection
shows that one cannot synthesise the required instantiation by e.g. unification
with existing information (terms) in the search space. In fact, blind guessing of
this instantiation seems unavoidable here. The author considers this as a small
but nevertheless very interesting example for a true discovery (based on guessing
and checking) by an ATP; and this discovery is philosophically relevant.
4.6.2. Improved Infrastructure for HO Interactive and Automated Reasoning
In the course of the above works, and in close collaboration with the de-
velopment of Leo-III, the CompMeta project fostered the development of a
reusable theorem proving infrastructure for a range of non-classical logics
[120, 121, 122, 123, 18]. This includes various, reusable encodings in Isa-
belle/HOL syntax, Coq syntax and in TPTP syntax, which have all been made
publicly available.⁷ Moreover, this includes a flexible pre-processing module
[18, 124, 125] for the Leo-III prover (and any other TPTP THF [34] compli-
ant prover). This preprocessor turns Leo-III into a flexible reasoner for a very
wide range of propositional and quantified modal logics. In fact, no other im-
plemented system is available today which covers a wider range of modal logic
variants than Leo-III, and this approach can easily be extended for many other
non-classical logics that have been mentioned above.
5. Conclusion
The presented reasoning framework based on shallow semantical embeddings
in HOL constitutes the most widely applied universal logical reasoning approach
available to date. There is, however, a significant difference to Leibniz’ origi-
nal idea of a characteristica universalis and to various related proposals in the
literature.

7 Cf. e.g. https://github.com/FormalTheology/GoedelGod

Instead of proposing a single, universal object-level formalism, the
shallow semantical embedding approach supports many different competing ob-
ject logics from the logic zoo. No ontological commitment is enforced at the
object logic level. For example, the approach well supports both classical and
intuitionistic object logics, and can even elegantly combine them. The concrete
portfolio of embedded object logics is determined by the specific requirements of
an application at hand. Only at the meta-level is a single, unifying logic provided,
namely HOL (or any richer logic incorporating HOL). By unfolding the defini-
tions of the logic embeddings, problem encodings utilising these object logics
are uniformly mapped to meta-logic HOL. This way Leibniz’ vision of a char-
acteristica universalis is realised in an indirect way in the presented approach:
universal logical reasoning is established (only) at the meta-level in HOL.
The presented universal logical reasoning framework has many challenging
applications in artificial intelligence, computer science, philosophy, mathematics
and natural language processing. A most relevant and timely direction
concerns the application of the semantical embedding approach to the
modeling of ethical, legal, social and cultural norms in intelligent machines [116],
ideally in combination with the realisation of human-intuitive forms of rational
argumentation in machines complementing internal decision making means at
the level of statistical information and subsymbolic representations.
Acknowledgements. A big thanks (in alphabetical order) goes to all collabo-
rators and supporters of the CompMeta project, including (but not limited
to): Jasmin Blanchette, Harold Boley, Frode Bjørdal, Chad Brown, Maximilian
Claus, Ali Farjami, David Fuenmayor, Tobias Gleissner, Max Haslbeck, Daniel
Kirchner, Hanna Lachnitt, Wolfgang Lenzen, Tomer Libal, Irina Makarenko,
Paul Oppenheimer, Jens Otten, Xavier Parent, Larry Paulson, Florian Rabe,
Raul Rojas, Fabian Sch¨utz, Hans-J¨org Schurr, Dana Scott, Alexander Steen,
David Streit, Geoff Sutcliffe, Leon van der Torre, Max Wisniewski, Bruno
Woltzenlogel-Paleo, Edward Zalta, Marco Ziener.
References
[1] J. Lambek, P. Scott, Introduction to Higher Order Categorical Logic, Cambridge
University Press, 1986.
[2] B. Jacobs, Categorical Logic and Type Theory, Vol. 141 of Studies in Logic and
the Foundations of Mathematics, North Holland, Elsevier, 1999.
[3] H. Andréka, I. Németi, I. Sain, Universal Algebraic Logic, Studies in Universal
Logic, Birkhäuser Basel, 2017.
[4] L. Moss, Coalgebraic logic, Annals of Pure and Applied Logic 96 (1-3) (1999)
277–317.
[5] J. Rutten, Universal coalgebra: a theory of systems, Theoretical Computer Sci-
ence 249 (1) (2000) 3–80.
[6] W. Guttmann, G. Struth, T. Weber, Automating algebraic methods in Isabelle,
in: S. Qin, Z. Qiu (Eds.), Proc. of ICFEM 2011, Vol. 6991 of LNCS, Springer,
2011, pp. 617–632.
[7] S. Foster, G. Struth, On the fine-structure of regular algebra, Journal of Auto-
mated Reasoning 54 (2) (2015) 165–197.
[8] A. Church, A formulation of the simple theory of types, Journal of Symbolic
Logic 5 (1940) 56–68.
[9] P. Andrews, Church’s type theory, in: E. N. Zalta (Ed.), The Stanford Encyclo-
pedia of Philosophy, summer 2018 Edition, Metaphysics Research Lab, Stanford
University, 2018.
[10] C. Benzmüller, D. Miller, Automation of higher-order logic, in: D. M. Gabbay,
J. H. Siekmann, J. Woods (Eds.), Handbook of the History of Logic, Volume
9 — Computational Logic, North Holland, Elsevier, 2014, pp. 215–254. doi:
10.1016/B978-0-444-51624-4.50005-8.
[11] C. Benzmüller, L. Paulson, Quantified multimodal logics in simple type theory,
Logica Universalis 7 (1) (2013) 7–20. doi:10.1007/s11787-012-0052-y.
[12] G. Frege, Begriffsschrift. Eine der arithmetischen nachgebildete Formelsprache
des reinen Denkens, Halle, 1879.
[13] P. B. Andrews, An Introduction to Mathematical Logic and Type Theory: To
Truth Through Proof, Vol. 27 of Applied Logic Series, Springer, 2002.
[14] C. Benzmüller, C. Brown, M. Kohlhase, Higher-order semantics and extension-
ality, Journal of Symbolic Logic 69 (4) (2004) 1027–1088. doi:10.2178/jsl/
1102022211.
[15] T. Nipkow, L. C. Paulson, M. Wenzel, Isabelle/HOL: A Proof Assistant for
Higher-Order Logic, no. 2283 in LNCS, Springer, 2002.
[16] Y. Bertot, P. Casteran, Interactive Theorem Proving and Program Development,
Springer, 2004.
[17] C. Benzmüller, N. Sultana, L. C. Paulson, F. Theiß, The higher-order prover
LEO-II, Journal of Automated Reasoning 55 (4) (2015) 389–404. doi:10.1007/
s10817-015-9348-y.
[18] A. Steen, C. Benzmüller, The higher-order prover Leo-III, in: D. Galmiche,
S. Schulz, R. Sebastiani (Eds.), Automated Reasoning. IJCAR 2018, Vol.
10900 of LNCS, Springer, Cham, 2018, pp. 108–116. doi:10.1007/
978-3-319-94205-6_8.
[19] C. E. Brown, Satallax: An automatic higher-order prover, in: B. Gramlich,
D. Miller, U. Sattler (Eds.), Automated Reasoning - 6th International Joint
Conference, IJCAR 2012, Manchester, UK, June 26-29, 2012. Proceedings, Vol.
7364 of Lecture Notes in Computer Science, Springer, 2012, pp. 111–117.
[20] J. C. Blanchette, T. Nipkow, Nitpick: A counterexample generator for higher-
order logic based on a relational model finder, in: M. Kaufmann, L. C. Paulson
(Eds.), Interactive Theorem Proving, First International Conference, ITP 2010,
Edinburgh, UK, July 11-14, 2010. Proceedings, Vol. 6172 of Lecture Notes in
Computer Science, Springer, 2010, pp. 131–146.
[21] J. C. Blanchette, S. B¨ohme, L. C. Paulson, Extending Sledgehammer with SMT
solvers, Journal of Automated Reasoning 51 (1) (2013) 109–128.
[22] S. Schulz, System description: E 1.8, in: K. L. McMillan, A. Middeldorp,
A. Voronkov (Eds.), Logic for Programming, Artificial Intelligence, and Rea-
soning - 19th International Conference, LPAR-19, Stellenbosch, South Africa,
December 14-19, 2013. Proceedings, Vol. 8312 of Lecture Notes in Computer
Science, Springer, 2013, pp. 735–743. doi:10.1007/978-3-642-45221-5.
[23] M. Deters, A. Reynolds, T. King, C. W. Barrett, C. Tinelli, A tour of CVC4:
How it works, and how to use it, in: K. Claessen, V. Kuncak (Eds.), Formal
Methods in Computer-Aided Design, FMCAD 2014, Lausanne, Switzerland, Oc-
tober 21-24, 2014, IEEE, 2014, p. 7.
[24] L. M. de Moura, N. Bjørner, Z3: An Efficient SMT Solver, in: C. R. Ramakrish-
nan, J. Rehof (Eds.), Tools and Algorithms for the Construction and Analysis
of Systems, 14th International Conference, TACAS 2008, Held as Part of the
Joint European Conferences on Theory and Practice of Software, ETAPS 2008,
Budapest, Hungary, March 29-April 6, 2008. Proceedings, Vol. 4963 of Lecture
Notes in Computer Science, Springer, 2008, pp. 337–340.
[25] J. C. Blanchette, A. Popescu, D. Wand, C. Weidenbach, More SPASS with
Isabelle – Superposition with Hard Sorts and Configurable Simplification, in:
L. Beringer, A. P. Felty (Eds.), Interactive Theorem Proving - Third Interna-
tional Conference, ITP 2012, Princeton, NJ, USA, August 13-15, 2012. Pro-
ceedings, Vol. 7406 of Lecture Notes in Computer Science, Springer, 2012, pp.
345–360.
[26] L. Kovács, A. Voronkov, First-Order Theorem Proving and Vampire, in:
N. Sharygina, H. Veith (Eds.), Computer Aided Verification - 25th International
Conference, CAV 2013, Saint Petersburg, Russia, July 13-19, 2013. Proceedings,
Vol. 8044 of Lecture Notes in Computer Science, Springer, 2013, pp. 1–35.
[27] M. Baldoni, Normal multimodal logics: Automatic deduction and logic pro-
gramming extension, Ph.D. thesis, Dipartimento di Informatica, Università degli
Studi di Torino, (Revised version dated July 9, 2003) (2003).
[28] R. Fagin, J. Y. Halpern, Y. Moses, M. Vardi, Reasoning About Knowledge, The
MIT Press, 2004.
[29] H. Ohlbach, A. Nonnengart, M. de Rijke, D. Gabbay, Encoding two-valued non-
classical logics in classical logic, in: J. Robinson, A. Voronkov (Eds.), Handbook
of Automated Reasoning (in 2 volumes), Elsevier and MIT Press, 2001, pp.
1403–1486.
[30] C. Benzmüller, Combining and automating classical and non-classical logics in
classical higher-order logic, Annals of Mathematics and Artificial Intelligence
62 (1-2) (2011) 103–128. doi:10.1007/s10472-011-9249-7.
[31] M. Sergot, Epistemic logic and ‘common knowledge’, Lecture Course Notes,
Department of Computing Imperial College, London, https://www.doc.ic.ac.
uk/~mjs/teaching/ModalTemporal499/Epistemic_499_v0809_2up.pdf (2008).
[32] A. Baltag, B. Renne, Dynamic epistemic logic, in: E. N. Zalta (Ed.), The Stan-
ford Encyclopedia of Philosophy, winter 2016 Edition, Metaphysics Research
Lab, Stanford University, 2016.
[33] C. Benzmüller, Comparing approaches to resolution based higher-order theorem
proving, Synthese 133 (1-2) (2002) 203–235. doi:10.1023/A:1020840027781.
[34] G. Sutcliffe, C. Benzmüller, Automated reasoning in higher-order logic using the
TPTP THF infrastructure, Journal of Formalized Reasoning 3 (1) (2010) 1–27.
[35] J. Siekmann, C. Benzmüller, S. Autexier, Computer supported mathematics
with OMEGA, Journal of Applied Logic 4 (4) (2006) 533–559. doi:10.1016/j.
jal.2005.10.008.
[36] S. Autexier, C. Benzmüller, D. Dietrich, J. Siekmann, OMEGA: Resource-
adaptive processes in an automated reasoning system, in: M. W. Crocker,
J. Siekmann (Eds.), Resource-Adaptive Cognitive Processes, Cognitive Tech-
nologies, Springer, 2010, pp. 389–423. doi:10.1007/978-3-540-89408-7_17.
[37] J. Siekmann, C. Benzmüller, A. Fiedler, A. Meier, I. Normann, M. Pol-
let, Proof development in OMEGA: The irrationality of square root of 2,
in: F. Kamareddine (Ed.), Thirty Five Years of Automating Mathematics,
Applied Logic series (28), Kluwer Academic Publishers, 2003, pp. 271–314.
doi:10.1007/978-94-017-0253-9_11.
[38] C. Benzmüller, M. Schiller, J. Siekmann, Resource-bounded modelling and anal-
ysis of human-level interactive proofs, in: M. W. Crocker, J. Siekmann (Eds.),
Resource-Adaptive Cognitive Processes, Cognitive Technologies, Springer, 2010,
pp. 291–311. doi:10.1007/978-3-540-89408-7_13.
[39] C. Benzmüller, A. Pease, Higher-order aspects and context in SUMO, Journal
of Web Semantics 12-13 (2012) 104–117. doi:10.1016/j.websem.2011.11.008.
[40] C. Benzmüller, L. Paulson, Exploring properties of normal multimodal logics in
simple type theory with LEO-II, in: C. Benzmüller, C. Brown, J. Siekmann,
R. Statman (Eds.), Reasoning in Simple Type Theory — Festschrift in Honor of
Peter B. Andrews on His 70th Birthday, Studies in Logic, Mathematical Logic
and Foundations, College Publications, 2008, pp. 386–406, (Superseded by 2013
article in Logica Universalis).
[41] C. Benzmüller, L. Paulson, Multimodal and intuitionistic logics in simple type
theory, The Logic Journal of the IGPL 18 (6) (2010) 881–892. doi:10.1093/
jigpal/jzp080.
[42] E. N. Zalta, Principia Logico-Metaphysica (Draft/Excerpt), preprint available
at https://mally.stanford.edu/principia.pdf (2018).
[43] R. C. Stalnaker, Mere Possibilities: Metaphysical Foundations of Modal Seman-
tics, Princeton University Press, 2012.
[44] T. Williamson, Modal Logic as Metaphysics, Oxford: OUP, 2013.
[45] B. Fitelson, E. N. Zalta, Steps toward a computational metaphysics, Journal of
Philosophical Logic 36 (2) (2007) 227–247. doi:10.1007/s10992-006-9038-7.
[46] P. Oppenheimer, E. Zalta, A computationally-discovered simplification of the
ontological argument, Australasian Journal of Philosophy 89 (2) (2011) 333–
349.
[47] J. Alama, P. E. Oppenheimer, E. N. Zalta, Automating Leibniz's theory of con-
cepts, in: A. P. Felty, A. Middeldorp (Eds.), Automated Deduction - CADE-25 -
25th International Conference on Automated Deduction, Berlin, Germany, Au-
gust 1-7, 2015, Proceedings, Vol. 9195 of Lecture Notes in Computer Science,
Springer, 2015, pp. 73–97.
[48] J. Sobel, Logic and Theism: Arguments for and Against Beliefs in God, Cam-
bridge U. Press, 2004.
[49] C. Benzmüller, B. Woltzenlogel Paleo, Automating Gödel's ontological proof of
God’s existence with higher-order automated theorem provers, in: T. Schaub,
G. Friedrich, B. O’Sullivan (Eds.), ECAI 2014, Vol. 263 of Frontiers in Artificial
Intelligence and Applications, IOS Press, 2014, pp. 93–98. doi:10.3233/
978-1-61499-419-0-93.
[50] C. Benzmüller, B. Woltzenlogel Paleo, The inconsistency in Gödel's ontological
argument: A success story for AI in metaphysics, in: S. Kambhampati (Ed.),
IJCAI 2016, Vol. 1-3, AAAI Press, 2016, pp. 936–942.
URL http://www.ijcai.org/Proceedings/16/Papers/137.pdf
[51] C. Benzmüller, B. Woltzenlogel Paleo, An object-logic explanation for the in-
consistency in Gödel's ontological theory (extended abstract, sister conferences),
in: M. Helmert, F. Wotawa (Eds.), KI 2016: Advances in Artificial Intel-
ligence, Proceedings, LNCS, Springer, Berlin, Germany, 2016, pp. 244–250.
doi:10.1007/978-3-319-46073-4.
[52] C. Benzmüller, B. Woltzenlogel Paleo, Experiments in Computational Meta-
physics: Gödel's proof of God's existence, Savijnanam: scientific exploration for
a spiritual paradigm. Journal of the Bhaktivedanta Institute 9 (2017) 43–57.
[53] K. Gödel, Appx. A: Notes in Kurt Gödel's Hand, in: J. Sobel (Ed.), Logic and
Theism: Arguments for and Against Beliefs in God, Cambridge U. Press, 1970,
pp. 144–145.
[54] D. Scott, Appx. B: Notes in Dana Scott’s Hand, in: J. Sobel (Ed.), Logic and
Theism: Arguments for and Against Beliefs in God, Cambridge U. Press, 1972,
pp. 145–146.
[55] Y. Bertot, P. Casteran, Interactive Theorem Proving and Program Develop-
ment - Coq’Art: The Calculus of Inductive Constructions, Texts in Theoretical
Computer Science, Springer, 2004.
[56] J. Sobel, Gödel's ontological proof, in: On Being and Saying. Essays for Richard
Cartwright, MIT Press, 1987, pp. 241–261.
[57] C. Anderson, Some emendations of Gödel's ontological proof, Faith and Philos-
ophy 7 (3).
[58] A. Anderson, M. Gettings, Gödel's ontological proof revisited, in: Gödel'96: Logi-
cal Foundations of Mathematics, Computer Science, and Physics: Lecture Notes
in Logic 6, Springer, 1996, pp. 167–172.
[59] P. Hájek, Magari and others on Gödel's ontological proof, in: A. Ursini,
P. Agliano (Eds.), Logic and algebra, Dekker, New York etc., 1996, pp. 125–135.
[60] P. Hájek, Der Mathematiker und die Frage der Existenz Gottes, in: B. Buldt et
al. (Ed.), Kurt Gödel. Wahrheit und Beweisbarkeit, öbv & hpt, Wien, 2001, pp.
325–336, ISBN 3-209-03835-X.
[61] P. Hájek, A new small emendation of Gödel's ontological proof, Studia Logica
71 (2) (2002) 149–164.
[62] F. Bjørdal, Understanding Gödel's ontological argument, in: T. Childers (Ed.),
The Logica Yearbook 1998, Filosofia, 1999, pp. 214–217.
[63] C. Benzmüller, L. Weber, B. Woltzenlogel Paleo, Computer-assisted analysis
of the Anderson-Hájek controversy, Logica Universalis 11 (1) (2017) 139–151.
doi:10.1007/s11787-017-0160-9.
[64] C. Benzmüller, B. Woltzenlogel Paleo, Higher-order modal logics: Automation
and applications, in: A. Paschke, W. Faber (Eds.), Reasoning Web 2015, no.
9203 in LNCS, Springer, Berlin, Germany, 2015, pp. 32–74. doi:10.1007/
978-3-319-21768-0_2.
[65] C. Benzmüller, B. Woltzenlogel Paleo, The modal collapse as a collapse of
the modal square of opposition, in: J.-Y. Béziau, G. Basti (Eds.), The
Square of Opposition: A Cornerstone of Thought (Collection of papers re-
lated to the World Congress on the Square of Opposition IV, Vatican,
2014), http://www.springer.com/us/book/9783319450612, Studies in Univer-
sal Logic, Springer International Publishing Switzerland, 2016, pp. 307–313.
doi:10.1007/978-3-319-45062-9_18.
[66] D. Fuenmayor, C. Benzmüller, Types, Tableaus and Gödel's God in Isa-
belle/HOL, Archive of Formal Proofs 2017.
URL http://afp.sourceforge.net/entries/Types_Tableaus_and_Goedels_
God.shtml
[67] A. Steen, M. Wisniewski, C. Benzmüller, Agent-based HOL reasoning, in: G.-M.
Greuel, T. Koch, P. Paule, A. Sommese (Eds.), Mathematical Software – ICMS
2016, 5th International Congress, Proceedings, Vol. 9725 of LNCS, Springer,
Berlin, Germany, 2016, pp. 75–81. doi:10.1007/978-3-319-42432-3_10.
[68] M. Fitting, Types, Tableaus, and Gödel's God, Kluwer, 2002.
[69] A. N. Whitehead, B. Russell, Principia Mathematica, 3 vols, Cambridge: Cam-
bridge University Press. Second edition, 1925 (Vol. 1), 1927 (Vols 2, 3). Abridged
as Principia Mathematica to *56, Cambridge: Cambridge University Press,
1962., 1910, 1912, 1913.
[70] E. Zalta, Intensional Logic and the Metaphysics of Intentionality, A Bradford
book, MIT Press, 1988.
[71] E. Zalta, Abstract Objects: An Introduction to Axiomatic Metaphysics, Syn-
these Library, Springer, 1983.
[72] P. E. Oppenheimer, E. N. Zalta, Relations versus functions at the foundations of
logic: Type-theoretic considerations, Journal of Logic and Computation 21 (2)
(2011) 351–374. doi:10.1093/logcom/exq017.
URL http://dx.doi.org/10.1093/logcom/exq017
[73] D. Kirchner, Representation and partial automation of the Principia Logico-
Metaphysica in Isabelle/HOL, Archive of Formal Proofs 2017.
URL https://www.isa-afp.org/entries/PLM.html
[74] D. Kirchner, C. Benzmüller, E. N. Zalta, Mechanizing Principia Logico-
Metaphysica in functional type theory, Tech. rep., CoRR, preprint of submitted
article (2017).
URL https://arxiv.org/abs/1711.06542
[75] G. Link (Ed.), One Hundred Years of Russell’s Paradox, De Gruyter, 2008.
[76] G. Frege, Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache
des reinen Denkens, Halle, 1879, translated in [?].
[77] K. Lambert, The definition of e(xistence)! in free logic, in: Abstracts: The Inter-
national Congress for Logic, Methodology and Philosophy of Science, Stanford:
Stanford University Press, 1960.
[78] D. Scott, Existence and description in formal logic, in: R. Schoenman (Ed.),
Bertrand Russell: Philosopher of the Century, George Allen & Unwin, London,
1967, pp. 181–200, (Reprinted with additions in: Philosophical Application of
Free Logic, edited by K. Lambert. Oxford University Press, 1991, pp. 28–48).
[79] K. Lambert, Free Logic: Selected Essays, Cambridge: Cambridge University
Press, 2002.
[80] J. Nolt, Free logic, in: E. N. Zalta (Ed.), The Stanford Encyclopedia of Philos-
ophy, fall 2018 Edition, Metaphysics Research Lab, Stanford University, 2018.
[81] C. Benzmüller, D. S. Scott, Axiomatizing category theory in free logic, Tech.
rep., CoRR (2016).
URL http://arxiv.org/abs/1609.01493
[82] C. Benzmüller, D. S. Scott, Automating free logic in Isabelle/HOL, in: G.-M.
Greuel, T. Koch, P. Paule, A. Sommese (Eds.), Mathematical Software – ICMS
2016, 5th International Congress, Proceedings, Vol. 9725 of LNCS, Springer,
Berlin, Germany, 2016, pp. 43–50. doi:10.1007/978-3-319-42432-3_6.
[83] C. Benzmüller, D. S. Scott, Axiom systems for category theory in free logic,
Archive of Formal Proofs 2018.
URL https://www.isa-afp.org/entries/AxiomaticCategoryTheory.html
[84] P. Freyd, A. Scedrov, Categories, Allegories, North Holland, 1990.
[85] D. Scott, Identity and existence in intuitionistic logic, in: M. Fourman, C. Mul-
vey, D. Scott (Eds.), Applications of Sheaves: Proceedings of the Research
Symposium on Applications of Sheaf Theory to Logic, Algebra, and Analysis,
Durham, July 9–21, 1977, Vol. 752 of Lecture Notes in Mathematics, Springer
Berlin Heidelberg, 1979, pp. 660–696.
[86] C. Benzmüller, Cut-elimination for quantified conditional logic, Journal of Philo-
sophical Logic 46 (3) (2017) 333–353. doi:10.1007/s10992-016-9403-0.
[87] R. C. Stalnaker, A theory of conditionals, in: Studies in Logical Theory, Black-
well, 1968, pp. 98–112.
[88] B. Chellas, Basic conditional logic, Journal of Philosophical Logic 4 (2) (1975)
133–153.
[89] J. Delgrande, On first-order conditional logics, Artificial Intelligence 105 (1-2)
(1998) 105–137.
[90] J. Delgrande, A first-order conditional logic for prototypical properties, Artificial
Intelligence 33 (1) (1987) 105–130.
[91] N. Friedman, J. Halpern, D. Koller, First-order conditional logic for default
reasoning revisited, ACM Transactions on Computational Logic 1 (2) (2000)
175–207.
[92] D. Pattinson, L. Schröder, Generic modal cut elimination applied to conditional
logics, Logical Methods in Computer Science 7 (1). doi:10.2168/LMCS-7(1:
4)2011.
[93] J. Rasga, Sufficient conditions for cut elimination with complexity analysis, Ann.
Pure Appl. Logic 149 (1-3) (2007) 81–99. doi:10.1016/j.apal.2007.08.001.
[94] C. Benzmüller, Automating quantified conditional logics in HOL, in: F. Rossi
(Ed.), 23rd International Joint Conference on Artificial Intelligence (IJCAI-13),
AAAI Press, Beijing, China, 2013, pp. 746–753.
[95] C. Benzmüller, C. Brown, M. Kohlhase, Cut-simulation and impredicativity,
Logical Methods in Computer Science 5 (1:6) (2009) 1–21. doi:10.2168/
LMCS-5(1:6)2009.
[96] M. Wisniewski, A. Steen, C. Benzmüller, Einsatz von Theorembeweisern in der
Lehre, in: A. Schwill, U. Lucke (Eds.), Hochschuldidaktik der Informatik: 7.
Fachtagung des GI-Fachbereichs Informatik und Ausbildung/Didaktik der In-
formatik; 13.-14. September 2016 an der Universität Potsdam, Commentarii
informaticae didacticae (CID), Universitätsverlag Potsdam, Potsdam, Germany,
2016, pp. 81–92.
[97] C. Benzmüller, M. Wisniewski, A. Steen, Computational Metaphysics, this lec-
ture course proposal received the 2015 central teaching award of FU Berlin
(2015).
URL http://christoph-benzmueller.de/papers/R57.pdf
[98] D. Fuenmayor, C. Benzmüller, A case study on computational hermeneutics:
E. J. Lowe's modal ontological argument, Journal of Applied Logic - IfCoLoG
Journal of Logics and their Applications (special issue on Formal Approaches
to the Ontological Argument). Accepted for publication; to be published also as a
chapter in the book 'Beyond Faith and Rationality: Essays on Logic, Religion
and Philosophy' printed in the Springer book series 'Sophia Studies in Cross-
cultural Philosophy of Traditions and Cultures'.
URL http://christoph-benzmueller.de/papers/J38.pdf
[99] M. Bentert, C. Benzmüller, D. Streit, B. Woltzenlogel Paleo, Analysis of an on-
tological proof proposed by Leibniz, in: C. Tandy (Ed.), Death and Anti-Death,
Volume 14: Four Decades after Michael Polanyi, Three Centuries after G.W.
Leibniz, Ria University Press, 2016, preprint: http://christoph-benzmueller.
de/papers/B16.pdf.
[100] D. Fuenmayor, C. Benzmüller, Automating emendations of the ontological ar-
gument in intensional higher-order modal logic, in: KI 2017: Advances in Arti-
ficial Intelligence, 40th Annual German Conference on AI, Dortmund, Germany,
September 25-29, 2017, Proceedings, Vol. 10505 of LNAI, Springer, 2017, pp.
114–127. doi:10.1007/978-3-319-67190-1_9.
[101] D. Fuenmayor, C. Benzmüller, Computer-assisted Reconstruction and Assess-
ment of E. J. Lowe’s Modal Ontological Argument, Archive of Formal Proofs
2017.
URL https://www.isa-afp.org/entries/Lowe_Ontological_Argument.html
[102] I. Makarenko, Automatisierung von freier Logik in Logik höherer Stufe, Bachelor's
thesis, Department of Mathematics and Computer Science, Freie Universität
Berlin (2016).
[103] D. Fuenmayor, Computational hermeneutics: Using computers to understand
philosophical arguments, Bachelor’s thesis, Department of Philosophy, Freie Uni-
versität Berlin (2017).
[104] T. Gleissner, Converting higher-order modal logic problems into classical higher-
order logic, Bachelor’s thesis, Department of Mathematics and Computer Sci-
ence, Freie Universität Berlin (2017).
[105] D. Kirchner, Mechanization of the Principia Logico-Metaphysica, Master's thesis,
Department of Mathematics and Computer Science, Freie Universität Berlin
(2017).
[106] F. Schütz, Wahrhaftiger Toren Zorn - Repräsentation und Interpretation von Ar-
gumenten, Master's thesis, Department of Mathematics and Computer Science,
Freie Universität Berlin (2017).
[107] A. Steen, C. Benzmüller, Sweet SIXTEEN: Automation via embedding into
classical higher-order logic, Logic and Logical Philosophy 25 (2016) 535–554.
doi:10.12775/LLP.2016.021.
[108] M. Wisniewski, A. Steen, Embedding of quantified higher-order nominal modal
logic into classical higher-order logic, in: C. Benzmüller, J. Otten (Eds.), 1st
International Workshop on Automated Reasoning in Quantified Non-Classical
Logics (ARQNL 2014), Vienna, Austria, Proceedings, Vol. 33 of EasyChair Pro-
ceedings in Computing, EasyChair Publications, 2014, pp. 59–64.
[109] M. Claus, Software model checking with higher-order automated theorem
provers: A logic embedding approach, Master’s thesis, Department of Math-
ematics and Computer Science, Freie Universität Berlin (2015).
[110] G. Boolos, The Logic of Provability, Cambridge University Press, 1993.
[111] N. Bostrom, Are we living in a computer simulation?, The Philosophical Quar-
terly 53 (211) (2003) 243–255. doi:10.1111/1467-9213.00309.
[112] C. Benzmüller, Universal reasoning, rational argumentation and human-machine
interaction, Tech. rep., CoRR (2017).
URL http://arxiv.org/abs/1703.09620
[113] D. Gabbay, J. Horty, X. Parent, R. van der Meyden, L. van der Torre (Eds.),
Handbook of Deontic Logic and Normative Systems, College Publications, 2013.
[114] D. Makinson, L. W. N. van der Torre, Input/output logics, Journal of Philo-
sophical Logic 29 (4) (2000) 383–408. doi:10.1023/A:1004748624537.
[115] J. Carmo, A. J. I. Jones, Completeness and decidability results for a logic of
contrary-to-duty conditionals, Journal of Logic and Computation 23 (3) (2013)
585–626. doi:10.1093/logcom/exs009.
[116] C. Benzmüller, X. Parent, L. van der Torre, A deontic logic reasoning in-
frastructure, in: F. Manea, R. G. Miller, D. Nowotka (Eds.), 14th Confer-
ence on Computability in Europe, CiE 2018, Kiel, Germany, July 30-August,
2018, Proceedings, Vol. 10936 of LNCS, Springer, 2018, pp. 60–69. doi:
10.1007/978-3-319-94418-0_6.
[117] C. Benzmüller, X. Parent, First experiments with a flexible infrastructure for
normative reasoning, Tech. rep., CoRR (2018).
URL http://arxiv.org/abs/1804.02929
[118] C. Benzmüller, X. Parent, I/O logic in HOL — first steps, Tech. rep., CoRR
(2018).
URL http://arxiv.org/abs/1803.09681
[119] C. Benzmüller, A. Farjami, X. Parent, Faithful semantical embedding of a dyadic
deontic logic in HOL, Tech. rep., CoRR (2018).
URL https://arxiv.org/abs/1802.08454
[120] C. Benzmüller, J. Otten, T. Raths, Implementing and evaluating provers for
first-order modal logics, in: L. D. Raedt, C. Bessiere, D. Dubois, P. Doherty,
P. Frasconi, F. Heintz, P. Lucas (Eds.), ECAI 2012, Vol. 242 of Frontiers in
Artificial Intelligence and Applications, IOS Press, Montpellier, France, 2012,
pp. 163–168. doi:10.3233/978-1-61499-098-7-163.
[121] C. Benzmüller, T. Raths, HOL based first-order modal logic provers, in: K. L.
McMillan, A. Middeldorp, A. Voronkov (Eds.), Proceedings of the 19th Inter-
national Conference on Logic for Programming, Artificial Intelligence and Rea-
soning (LPAR), Vol. 8312 of LNCS, Springer, Stellenbosch, South Africa, 2013,
pp. 127–136. doi:10.1007/978-3-642-45221-5_9.
[122] M. Wisniewski, A. Steen, C. Benzm¨uller, TPTP and beyond: Representation of
quantified non-classical logics, in: C. Benzmüller, J. Otten (Eds.), ARQNL 2016.
Automated Reasoning in Quantified Non-Classical Logics, Vol. 1770, CEUR
Workshop Proceedings, http://ceur-ws.org, 2016, pp. 51–65.
[123] T. Gleißner, A. Steen, C. Benzmüller, Theorem provers for every normal modal
logic, in: T. Eiter, D. Sands (Eds.), LPAR-21. 21st International Conference on
Logic for Programming, Artificial Intelligence and Reasoning, Vol. 46 of EPiC
Series in Computing, EasyChair, Maun, Botswana, 2017, pp. 14–30. doi:10.
29007/jsb9.
[124] C. Benzmüller, A. Steen, M. Wisniewski, Leo-III version 1.1 (system descrip-
tion), in: T. Eiter, D. Sands, G. Sutcliffe, A. Voronkov (Eds.), IWIL@LPAR
2017 Workshop and LPAR-21 Short Presentations, Maun, Botswana, May 7-12,
2017, Vol. 1 of Kalpa Publications in Computing, EasyChair, Maun, Botswana,
2017, pp. 11–61.
[125] A. Steen, M. Wisniewski, C. Benzmüller, Going polymorphic - TH1 reasoning for
Leo-III, in: T. Eiter, D. Sands, G. Sutcliffe, A. Voronkov (Eds.), IWIL@LPAR
2017 Workshop and LPAR-21 Short Presentations, Maun, Botswana, May 7-12,
2017, Vol. 1 of Kalpa Publications in Computing, EasyChair, Maun, Botswana,
2017, pp. 100–112.