Marek Sergot's research while affiliated with Imperial College London and other places


Publications (114)


A Unified Logical Framework for Reasoning about Deontic Properties of Actions and States
  • Article
  • Full-text available

June 2023 · 27 Reads · 1 Citation · Logic and Logical Philosophy · Robert Trypuz · Robert Craven · Marek J. Sergot

This paper studies some normative relations that hold between actions, their preconditions and their effects, with particular attention to connecting what are often called ‘ought to be’ norms with ‘ought to do’ norms. We use a formal model based on a form of transition system called a ‘coloured labelled transition system’ (coloured LTS) introduced in a series of papers by Sergot and Craven. Those works have variously presented a formalism (an ‘action language’) nC+ for defining and computing with a (coloured) LTS, and another, separate formalism, a modal language interpreted on a (coloured) LTS used to express its properties. We consolidate these two strands. Instead of specifying the obligatory and prohibited states and transitions as part of the construction of a coloured LTS as in nC+, we represent norms in the modal language and use those to construct a coloured LTS from a given regular (uncoloured) one. We also show how connections between norms on states and norms on transitions previously treated as fixed constraints of a coloured LTS can instead be defined within the modal language used for representing norms.
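The construction described in the abstract, deriving a coloured LTS from a regular (uncoloured) one plus norms, can be illustrated with a toy sketch. All names and the colouring rule below are hypothetical simplifications, not the authors' nC+ formalism:

```python
# Minimal sketch of "colouring" a labelled transition system (LTS) from a norm.
# Hypothetical structures; a toy illustration, not the nC+ language itself.

# An LTS: a set of states and transitions of the form (state, event, state').
states = {"s0", "s1", "s2"}
transitions = {("s0", "a", "s1"), ("s0", "b", "s2"), ("s1", "a", "s2")}

# A norm on states: here, "s2 is forbidden" (a toy 'ought to be' norm).
def state_permitted(s):
    return s != "s2"

# Colour states green/red from the state norm, and colour a transition red
# if it leads from a permitted state into a forbidden one -- one simple way
# of connecting norms on states with norms on transitions.
state_colour = {s: ("green" if state_permitted(s) else "red") for s in states}
trans_colour = {
    t: ("red" if state_permitted(t[0]) and not state_permitted(t[2]) else "green")
    for t in transitions
}

print(state_colour["s2"])               # red
print(trans_colour[("s1", "a", "s2")])  # red
```

The point of the sketch is only the direction of definition: the colouring is computed from the norm, rather than being stipulated as part of the LTS.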


Making Sense of Raw Input (Extended Abstract)

July 2022 · 7 Reads · Matko Bošnjak · Lars Buesing · [...] · Marek Sergot

How should a machine intelligence perform unsupervised structure discovery over streams of sensory input? One approach to this problem is to cast it as an apperception task. Here, the task is to construct an explicit interpretable theory that both explains the sensory sequence and also satisfies a set of unity conditions, designed to ensure that the constituents of the theory are connected in a relational structure. However, the original formulation of the apperception task had one fundamental limitation: it assumed the raw sensory input had already been parsed using a set of discrete categories, so that all the system had to do was receive this already-digested symbolic input, and make sense of it. But what if we don't have access to pre-parsed input? What if our sensory sequence is raw unprocessed information? The central contribution of this paper is a neuro-symbolic framework for distilling interpretable theories out of streams of raw, unprocessed sensory experience. First, we extend the definition of the apperception task to include ambiguous (but still symbolic) input: sequences of sets of disjunctions. Next, we use a neural network to map raw sensory input to disjunctive input. Our binary neural network is encoded as a logic program, so the weights of the network and the rules of the theory can be solved jointly as a single SAT problem. This way, we are able to jointly learn how to perceive (mapping raw sensory information to concepts) and apperceive (combining concepts into declarative rules).
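The idea of solving perception and rule-learning jointly can be illustrated with a toy sketch. A brute-force search stands in for the single SAT problem described in the abstract, and the threshold, stream, and candidate rules are all hypothetical:

```python
# Toy sketch of jointly learning to perceive and apperceive.
# Brute force here stands in for the joint SAT encoding; hypothetical setup.
from itertools import product

raw = [0.1, 0.9, 0.2, 0.8, 0.1]  # raw sensory stream (e.g. pixel intensities)

# Perception: a binary threshold maps raw values to symbols 0/1
# (playing the role of the binary neural network's weights).
# Apperception: a candidate rule maps the symbol at t to the symbol at t+1.
thresholds = [0.3, 0.5, 0.7]
rules = {"id": lambda s: s, "flip": lambda s: 1 - s}

# Search jointly over (threshold, rule) pairs for a consistent explanation.
solutions = []
for th, (name, rule) in product(thresholds, rules.items()):
    symbols = [1 if x >= th else 0 for x in raw]
    if all(rule(symbols[t]) == symbols[t + 1] for t in range(len(symbols) - 1)):
        solutions.append((th, name))

print(solutions)  # every threshold works, but only with the "flip" rule
```

Note that neither component is fixed in advance: a threshold is only acceptable if some rule explains the resulting symbol sequence, which is the sense in which perceiving and apperceiving are solved together.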


Actual Cause and Chancy Causation in Stit: A Preliminary Account

January 2022 · 29 Reads · 1 Citation

The paper investigates how actual cause may be treated in ‘seeing to it that’ (‘stit’) logics, that is, determining when the actions of a particular agent, or a particular set of agents collectively, can be said to be the cause of a given outcome in given circumstances. There are two complementary problems: (1) the outcome is brought about by the actions of some set of agents and the task is to identify which of these agents are essential to that bringing about, and (2) the outcome depends partly on chance and the task is to identify which agents could have acted differently and thereby ensured a different outcome. The final part of the paper discusses briefly the need to account for causal and other dependences between the actions of agents, and how that might be done without abandoning the ‘stit’ framework altogether.

Keywords: Logic of agency · stit · Actual causation · Chancy causation · Joint action


Fig. 1 A labelled transition s = (s, e, s′)
Fig. 4 Labelled transition system for the print shop example limited to two available print shops: a purely descriptive account
Fig. 5 STIT-like model for the print shop example limited to two available print shops
Who is obliged when many are involved? Labelled transition system modelling of how obligation arises

September 2021 · 88 Reads · 2 Citations · Artificial Intelligence and Law

The paper tackles the problem of the relation between rights and obligations. Two examples of situations in which such a relation occurs are discussed. One concerns the abortion regulations in Polish law, the other a clash between freedom of expression and freedom of enterprise in the context of discrimination. The examples are analysed and formalised using labelled transition systems in the nC+ framework. Rights are introduced to the system as procedures allowing for their fulfilment. Obligations are based on the requirement of cooperation in the realisation of the goals of the agent that has a right. If the right of an agent cannot be fulfilled without an action of another agent, then that action is obligatory for that agent. If there are many potential contributors who are individually allowed to refuse, then the last of them is obliged to help when all the others have already refused. The formalisation makes this account of the relation precise and shows it to be consistent.
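The "last potential contributor is obliged" account in the abstract can be sketched as a small function. The encoding below is a hypothetical simplification, not the paper's labelled-transition-system formalisation:

```python
# Toy sketch of how an obligation arises among many potential contributors:
# each may individually refuse, but once all others have refused, the last
# remaining agent is obliged to help (hypothetical encoding).

def obliged_agents(contributors, refused):
    """Return the set of agents who are obliged to act, given who has refused."""
    remaining = [a for a in contributors if a not in refused]
    # An obligation arises only when exactly one potential contributor is left.
    return set(remaining) if len(remaining) == 1 else set()

contributors = ["a1", "a2", "a3"]
print(obliged_agents(contributors, set()))         # set()  -- each may still refuse
print(obliged_agents(contributors, {"a1", "a2"}))  # {'a3'} -- the last one is obliged
```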


Making sense of raw input

May 2021 · 95 Reads · 41 Citations · Artificial Intelligence

How should a machine intelligence perform unsupervised structure discovery over streams of sensory input? One approach to this problem is to cast it as an apperception task [1]. Here, the task is to construct an explicit interpretable theory that both explains the sensory sequence and also satisfies a set of unity conditions, designed to ensure that the constituents of the theory are connected in a relational structure. However, the original formulation of the apperception task had one fundamental limitation: it assumed the raw sensory input had already been parsed using a set of discrete categories, so that all the system had to do was receive this already-digested symbolic input, and make sense of it. But what if we don't have access to pre-parsed input? What if our sensory sequence is raw unprocessed information? The central contribution of this paper is a neuro-symbolic framework for distilling interpretable theories out of streams of raw, unprocessed sensory experience. First, we extend the definition of the apperception task to include ambiguous (but still symbolic) input: sequences of sets of disjunctions. Next, we use a neural network to map raw sensory input to disjunctive input. Our binary neural network is encoded as a logic program, so the weights of the network and the rules of the theory can be solved jointly as a single SAT problem. This way, we are able to jointly learn how to perceive (mapping raw sensory information to concepts) and apperceive (combining concepts into declarative rules).


Some Forms of Collectively Bringing About or ‘Seeing to it that’

April 2021 · 43 Reads · 1 Citation · Journal of Philosophical Logic

Among the best known approaches to the logic of agency are the ‘stit’ (‘seeing to it that’) logics. Often, it is not the actions of an individual agent that bring about a certain outcome but the joint actions of a set of agents, collectively. Collective agency has received comparatively little attention in ‘stit’. The paper maps out several different forms, several different senses in which a particular set of agents, collectively, can be said to bring about a certain outcome, and examines how these forms can be expressed in ‘stit’ and stit-like logics. The outcome that is brought about may be unintentional, and perhaps even accidental; the account deliberately ignores aspects such as joint intention, communication between agents, awareness of other agents’ intentions and capabilities, even the awareness of another agent’s existence. The aim is to investigate what can be said about collective agency when all such considerations are ignored, besides mere consequences of joint actions. The account will be related to the ‘strictly stit’ of Belnap and Perloff (Annals of Mathematics and Artificial Intelligence 9(1–2), 25–48, 1993) and their suggestions concerning ‘inessential members’ and ‘mere bystanders’. We will adjust some of those conjectures and distinguish further between ‘potentially contributing bystanders’ and ‘impotent bystanders’.


Making sense of sensory input

January 2021 · 375 Reads · 44 Citations · Artificial Intelligence

This paper attempts to answer a central question in unsupervised learning: what does it mean to “make sense” of a sensory sequence? In our formalization, making sense involves constructing a symbolic causal theory that both explains the sensory sequence and also satisfies a set of unity conditions. The unity conditions insist that the constituents of the causal theory – objects, properties, and laws – must be integrated into a coherent whole. On our account, making sense of sensory input is a type of program synthesis, but it is unsupervised program synthesis. Our second contribution is a computer implementation, the Apperception Engine, that was designed to satisfy the above requirements. Our system is able to produce interpretable human-readable causal theories from very small amounts of data, because of the strong inductive bias provided by the unity conditions. A causal theory produced by our system is able to predict future sensor readings, as well as retrodict earlier readings, and impute (fill in the blanks of) missing sensory readings, in any combination. In fact, it is able to do all three tasks simultaneously. We tested the engine in a diverse variety of domains, including cellular automata, rhythms and simple nursery tunes, multi-modal binding problems, occlusion tasks, and sequence induction intelligence tests. In each domain, we test our engine's ability to predict future sensor values, retrodict earlier sensor values, and impute missing sensory data. The Apperception Engine performs well in all these domains, significantly out-performing neural net baselines. We note in particular that in the sequence induction intelligence tests, our system achieved human-level performance. This is notable because our system is not a bespoke system designed specifically to solve intelligence tests, but a general-purpose system that was designed to make sense of any sensory sequence.
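The abstract's claim that a single induced causal theory supports prediction, retrodiction, and imputation can be illustrated with a toy sketch. The rule, sequence, and propagation scheme below are hypothetical and much simpler than the Apperception Engine itself:

```python
# Toy sketch: one causal rule used for prediction, retrodiction and imputation
# over a symbolic sequence (hypothetical example, not the Apperception Engine).

# Suppose the engine has induced the rule: next(x) = (x + 1) mod 3.
step = lambda x: (x + 1) % 3
back = lambda x: (x - 1) % 3  # the same rule run in reverse

observed = [None, 1, 2, None, 1]  # None marks a missing sensor reading

# Impute: propagate the rule forwards, then backwards, from known values.
seq = observed[:]
for i in range(1, len(seq)):
    if seq[i] is None and seq[i - 1] is not None:
        seq[i] = step(seq[i - 1])
for i in range(len(seq) - 2, -1, -1):
    if seq[i] is None and seq[i + 1] is not None:
        seq[i] = back(seq[i + 1])

print(seq)            # [0, 1, 2, 0, 1] -- gaps filled, including the first reading
print(step(seq[-1]))  # 2 -- prediction of the next reading
```

Filling the first reading here is retrodiction, filling the middle one is imputation, and extending past the end is prediction; all three use the same rule.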


Evaluating the Apperception Engine

July 2020 · 62 Reads

The Apperception Engine is an unsupervised learning system. Given a sequence of sensory inputs, it constructs a symbolic causal theory that both explains the sensory sequence and also satisfies a set of unity conditions. The unity conditions insist that the constituents of the theory - objects, properties, and laws - must be integrated into a coherent whole. Once a theory has been constructed, it can be applied to predict future sensor readings, retrodict earlier readings, or impute missing readings. In this paper, we evaluate the Apperception Engine in a diverse variety of domains, including cellular automata, rhythms and simple nursery tunes, multi-modal binding problems, occlusion tasks, and sequence induction intelligence tests. In each domain, we test our engine's ability to predict future sensor values, retrodict earlier sensor values, and impute missing sensory data. The engine performs well in all these domains, significantly outperforming neural net baselines and state of the art inductive logic programming systems. These results are significant because neural nets typically struggle to solve the binding problem (where information from different modalities must somehow be combined together into different aspects of one unified object) and fail to solve occlusion tasks (in which objects are sometimes visible and sometimes obscured from view). We note in particular that in the sequence induction intelligence tests, our system achieved human-level performance. This is notable because our system is not a bespoke system designed specifically to solve intelligence tests, but a general-purpose system that was designed to make sense of any sensory sequence.


Making sense of sensory input

October 2019 · 147 Reads

This paper attempts to answer a central question in unsupervised learning: what does it mean to "make sense" of a sensory sequence? In our formalization, making sense involves constructing a symbolic causal theory that explains the sensory sequence and satisfies a set of unity conditions. This model was inspired by Kant's discussion of the synthetic unity of apperception in the Critique of Pure Reason. On our account, making sense of sensory input is a type of program synthesis, but it is unsupervised program synthesis. Our second contribution is a computer implementation, the Apperception Engine, that was designed to satisfy the above requirements. Our system is able to produce interpretable human-readable causal theories from very small amounts of data, because of the strong inductive bias provided by the Kantian unity constraints. A causal theory produced by our system is able to predict future sensor readings, as well as retrodict earlier readings, and "impute" (fill in the blanks of) missing sensory readings, in any combination. We tested the engine in a diverse variety of domains, including cellular automata, rhythms and simple nursery tunes, multi-modal binding problems, occlusion tasks, and sequence induction IQ tests. In each domain, we test our engine's ability to predict future sensor values, retrodict earlier sensor values, and impute missing sensory data. The Apperception Engine performs well in all these domains, significantly out-performing neural net baselines. We note in particular that in the sequence induction IQ tasks, our system achieved human-level performance. This is notable because our system is not a bespoke system designed specifically to solve IQ tasks, but a general purpose apperception system that was designed to make sense of any sensory sequence.


Fig. 1 The entailment lattice for {p, q}. If there is a line between two nodes, then the lower node entails the higher node
Fig. 7 Interpreting the Table of Judgements in KL
Formalizing Kant’s Rules: A Logic of Conditional Imperatives and Permissives

July 2019 · 668 Reads · 8 Citations · Journal of Philosophical Logic

This paper formalizes part of the cognitive architecture that Kant develops in the Critique of Pure Reason. The central Kantian notion that we formalize is the rule. As we interpret Kant, a rule is not a declarative conditional stating what would be true if such and such conditions hold. Rather, a Kantian rule is a general procedure, represented by a conditional imperative or permissive, indicating which acts must or may be performed, given certain acts that are already being performed. These acts are not propositions; they do not have truth-values. Our formalization is related to the input/output logics, a family of logics designed to capture relations between elements that need not have truth-values. In this paper, we introduce KL3 as a formalization of Kant’s conception of rules as conditional imperatives and permissives. We explain how it differs from standard input/output logics, geometric logic, and first-order logic, as well as how it translates natural language sentences not well captured by first-order logic. Finally, we show how the various distinctions in Kant’s much-maligned Table of Judgements emerge as the most natural way of dividing up the various types and sub-types of rule in KL3. Our analysis sheds new light on the way in which normative notions play a fundamental role in the conception of logic at the heart of Kant’s theoretical philosophy.
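The idea of rules as conditional imperatives and permissives, related in the abstract to the input/output logics, can be sketched as pairs relating acts already performed to acts that must or may be performed. The encoding and act names below are hypothetical illustrations, not the KL3 system:

```python
# Toy sketch of rules as conditional imperatives/permissives in an
# input/output style (hypothetical encoding; acts are not propositions
# and have no truth-values, so nothing here is a material conditional).

# Each rule pairs an input (acts being performed) with an output act,
# marked as obligatory ("must") or merely permitted ("may").
rules = [
    (frozenset({"judge"}), "apply_concept", "must"),
    (frozenset({"apply_concept"}), "compare_intuitions", "may"),
]

def detach(performed, rules):
    """Collect acts that must or may be performed, given acts already performed."""
    out = {"must": set(), "may": set()}
    for inp, act, mode in rules:
        if inp <= performed:   # all the rule's input acts are being performed
            out[mode].add(act)
    return out

print(detach({"judge"}, rules))  # {'must': {'apply_concept'}, 'may': set()}
```

The detachment step fires a rule only when its whole input is among the acts being performed, which mirrors the input/output-style reading of conditional rules rather than truth-functional implication.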


Citations (85)


... In addition, the truth-condition for [i] is given by: In logics of agency in the tradition of seeing to it that, causality is not often studied directly (exceptions include Xu, 1997; Lorini et al., 2014; Baltag et al., 2021; Sergot, 2022). Although the but-for and the NESS test are rarely discussed in this literature (a notable exception is Baltag et al. (2021)), similar concerns arise in the analysis of joint agency and collective action and, in particular, in questions pertaining to whether individual members are essential (see Belnap and Perloff, 1993; Sergot, 2021). ...

Reference:

A Logical Study of Moral Responsibility
Actual Cause and Chancy Causation in Stit: A Preliminary Account
  • Citing Chapter
  • January 2022

... Here, NSAI is used to reason directly about the inner workings of a perceptual model, often for a downstream task involving an explanation of the perceptual results. One example of such work is [14], where a binarized neural network is used to produce a symbolic theory of perception used in a downstream task of apperception, providing an explanation of the perceptual results. Another application of NSAI to transparency deals with the use of concept induction [11] to map activations in a neural network to an explanation using description logic, thereby providing transparency. ...

Making sense of raw input
  • Citing Article
  • May 2021

Artificial Intelligence

... In addition, the truth-condition for [i] is given by: In logics of agency in the tradition of seeing to it that, causality is not often studied directly (exceptions include Xu, 1997; Lorini et al., 2014; Baltag et al., 2021; Sergot, 2022). Although the but-for and the NESS test are rarely discussed in this literature (a notable exception is Baltag et al. (2021)), similar concerns arise in the analysis of joint agency and collective action and, in particular, in questions pertaining to whether individual members are essential (see Belnap and Perloff, 1993; Sergot, 2021). ...

Some Forms of Collectively Bringing About or ‘Seeing to it that’

Journal of Philosophical Logic

... Rule selection. Many systems formulate the ILP problem as a rule selection problem (Corapi, Russo, and Lupu 2011; Kaminski, Eiter, and Inoue 2019; Si et al. 2019; Raghothaman et al. 2020; Evans et al. 2021; Bembenek, Greenberg, and Chong 2023). These approaches precompute every possible rule in the hypothesis space and then search (often using a constraint solver) for a subset that entails all the positive and none of the negative examples. ...

Making sense of sensory input

Artificial Intelligence

... As we know today, Kant himself made a decisive contribution to Eulerian logic (Lemanski 2023). On the other hand, a formal system of Kantian logic can be developed by using paraconsistent logics and incorporating the subject-predicate structure of natural logic (Kovac 2008; Achourioti and van Lambalgen 2011; Evans et al. 2019). And recently, the early Kantian writing The False Subtlety of the Four Syllogistic Figures has been interpreted as an anticipation of proof-theoretical semantics (de Castro Alves 2022). ...

Formalizing Kant’s Rules: A Logic of Conditional Imperatives and Permissives

Journal of Philosophical Logic

... Various formalisms have been used to formally describe distributed systems based on agents. These attempts made it possible to associate a formal semantics with the modelled systems. As noted above, the literature contains several attempts at the formal specification of multi-agent systems, which tend to describe an agent in mathematical terms, including those based on Petri nets, finite state automata, and X-machines, such as [17,18,19,20,21,22,23,24], etc. In our opinion, the formalisms used for the formalisation of multi-agent systems can be classified into five principal families: ...

The bit transmission problem revisited
  • Citing Conference Paper
  • January 2002

... Agents in an open MAS are, by definition, of different types and architectures, and each has its own goal. That is, they may be designed by different teams, whose implementations involve different methods and technologies (concepts, architectures, programming languages, etc.) [3]; sometimes a conflict situation arises when two or more agents pursue contradictory goals [4]. Although the internal workings of agents are inaccessible, communication between agents is the only means by which their behaviour can be observed, through the exchange of information [5]. ...

Specifying and Executing Open Multi-agent Systems
  • Citing Chapter
  • August 2016

... Several methods leverage this reasoning, e.g. functional class scoring (FCS) [26,27]. In FCS, an overlap is calculated between the observed proteins and each complex. ...

Erratum: Comparative network-based recovery analysis and proteomic profiling of neurological changes in valproic acid-treated mice (Journal of Proteome Research (2013) 12:5 (2116-2127) DOI: 10.1021/pr301127f)
  • Citing Article
  • October 2013

... Shet et al. [18] proposed a framework integrating computer vision algorithms with logic programming to describe and identify video surveillance activities in a parking lot. Other human activity recognition systems were proposed in [19]; these systems are based on an Event Calculus logic programming implementation [20]. ...

A Logic-Based Approach to Activity Recognition
  • Citing Article
  • January 2013