Figure 1 - uploaded by Alexander Feldman
A subtractor circuit

Source publication
Article
Full-text available
We propose a StochAstic Fault diagnosis AlgoRIthm, called Safari, which trades off guarantees of computing minimal diagnoses for computational efficiency. We empirically demonstrate, using the 74XXX and ISCAS85 suites of benchmark combinatorial circuits, that Safari achieves several orders-of-magnitude speedup over two well-known deterministic alg...

Similar publications

Conference Paper
Full-text available
Most algorithms for computing diagnoses within a model-based diagnosis framework are deterministic. Such algorithms guarantee soundness and completeness, but are Σ2P-hard. To overcome this complexity problem, which prohibits the computation of high-cardinality diagnoses for large systems, we propose a novel approximation approach for multiple-fault...
Conference Paper
Full-text available
We propose a StochAstic Fault diagnosis AlgoRIthm, called Safari, which trades off guarantees of computing minimal diagnoses for computational efficiency. We empirically demonstrate, using the 74XXX and ISCAS85 suites of benchmark combinatorial circuits, that Safari achieves several orders-of-magnitude speedup over two well-known deterministic al...

Citations

... with a health variable h ∈ {⊤, ⊥}, and a behaviour model c = a ∧ b. In this case the behaviour model is a classical Boolean AND-Gate with inputs a, b ∈ {⊤, ⊥} and output c ∈ {⊤, ⊥} (Feldman et al., 2010), where ⊤, ⊥ denote logical true and false values, respectively. The system description contains as many expressions of the form of Equation 1 as are necessary to describe a system. ...
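The component model in the snippet above can be sketched as executable code. This is an illustrative reconstruction, not the cited authors' implementation; the function name `and_gate_consistent` is hypothetical.

```python
def and_gate_consistent(h, a, b, c):
    """Weak-fault model of the AND-gate component: when the health
    variable h is True, the output must equal a AND b; when the gate
    is faulty (h is False), its behaviour is left unconstrained."""
    return (not h) or (c == (a and b))

# Observing a=True, b=True, c=False contradicts a healthy gate ...
assert not and_gate_consistent(True, True, True, False)
# ... but is consistent with the gate being faulty.
assert and_gate_consistent(False, True, True, False)
```

A system description, in the sense of Equation 1, conjoins one such expression per component.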
Conference Paper
Fault diagnosis algorithms compute faulty components by comparing actual observations against some model of known behaviour. A major challenge for fault diagnosis lies in creating such a suitable model. In the past, models were usually assumed to be given by experts. But in modern cyber-physical systems this assumption no longer holds, as experts are expensive and system architectures may be subject to change. This article presents a novel algorithm to obtain those models automatically and apply them for fault diagnosis. The evaluation was done on the Tennessee Eastman process and on two benchmarks of multiple-tank systems.
... The field of fault diagnosis was established by De Kleer [25] and Reiter [7]. Over the years several fault diagnosis algorithms have been developed [26]-[28]. Rodler [29] provides a good overview of the respective diagnosis algorithms. ...
Conference Paper
With the increasing complexity of highly automated cyber-physical systems (CPS), monitoring their behavior has become crucial. Failures in these systems can be costly, halt production, or even pose risks to human safety. Effective diagnosis depends on understanding the system's components, connections, and the influences among them, knowledge typically provided by experts. However, the shift towards self-diagnosing systems necessitates this knowledge be machine-readable and interpretable. This paper introduces a novel methodology that utilizes an ontology to encode knowledge about cyber-physical systems and systematically generate propositional logical expressions. These expressions can then be evaluated using state-of-the-art diagnostic algorithms to identify failure causes. Our methodology was validated using an established AI benchmark for diagnostics. We constructed an ontology description for the underlying cyber-physical system, deduced influences of system sensors from data, and successfully diagnosed induced failures, demonstrating the efficacy and applicability of our approach.
... An elaborate process is required to declare every fault signature from all component combinations in a complex system according to its causalities. In contrast, weak-fault models describe only the normal behavior of a system [6]. Therefore, the system description only needs to be formulated for the healthy state, without every single fault mode of the system. ...
Preprint
The increasing complexity of Cyber-Physical Systems (CPS) makes industrial automation challenging. Large amounts of data recorded by sensors need to be processed to adequately perform tasks such as diagnosis in case of fault. A promising approach to deal with this complexity is the concept of causality. However, most research on causality has focused on inferring causal relations between parts of an unknown system. Engineering uses causality in a fundamentally different way: complex systems are constructed by combining components with known, controllable behavior. As CPS are constructed by the second approach, most data-based causality models are not suited for industrial automation. To bridge this gap, a Uniform Causality Model for various application areas of industrial automation is proposed, which will allow better communication and better data usage across disciplines. The resulting model describes the behavior of CPS mathematically and, as the model is evaluated on the unique requirements of the application areas, it is shown that the Uniform Causality Model can work as a basis for the application of new approaches in industrial automation that focus on machine learning.
... The algorithm ServiceDiag is a more automated approach, but with the assumption that observations are already available from, for example, process data. Given a set of propositional logic rules, common in the model-based diagnosis community [1], [25], the algorithm automates DiagHypotheses. The algorithm first gathers all necessary observations and then checks if the observations satisfy the system model specified in propositional logic. ...
... The algorithm first gathers all necessary observations and then checks if the observations satisfy the system model specified in propositional logic. The output of the algorithm is a minimal cardinality diagnosis [25]. ...
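The minimal-cardinality output described above can be illustrated with a brute-force sketch: candidates are enumerated by increasing cardinality and checked against the observations. The helper names and the toy `consistent` oracle are hypothetical and stand in for the published ServiceDiag algorithm, which is not reproduced here.

```python
from itertools import combinations

def min_cardinality_diagnosis(components, consistent):
    """Return a smallest set of components whose assumed failure is
    consistent with the observations, enumerating candidate fault
    sets by increasing cardinality. `consistent` is an assumed
    oracle (e.g. a propositional satisfiability check)."""
    for k in range(len(components) + 1):
        for faulty in combinations(components, k):
            if consistent(set(faulty)):
                return set(faulty)
    return None  # observations contradict even the all-faulty candidate

# Hypothetical toy oracle: the observations implicate gate "g1".
def consistent(faulty):
    return "g1" in faulty

assert min_cardinality_diagnosis(["g1", "g2"], consistent) == {"g1"}
```

Because candidates are visited smallest-first, the first consistent one is guaranteed to have minimal cardinality.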
Conference Paper
Full-text available
Service technicians serving machines in small- and medium-sized enterprises face the challenge of diagnosing machines of increasing complexity in less time. To help them cope with the task of diagnosis (i.e. finding faults), this article introduces a novel fault diagnosis algorithm and a web-based implementation for industrial fault diagnosis. When a fault occurs, the diagnosis algorithm proposes observations for the service technicians and generates likely causes according to the observations taken. This helps technicians find faults faster, facilitates management of expert knowledge, and ultimately decreases system downtime. We have evaluated our approach with a Monte Carlo simulation of an industrial packaging machine and through the implementation of some prototype software. Both evaluations show that our approach is usable for real-world service technicians and operators of production machinery.
... For each structure pattern we analyse fault propagation and compare how the pattern behaves with and without the calculation of residuals. We then show how those residuals can be used to extend common definitions of consistency-based diagnosis [11,6] by means of SMT (Satisfiability Modulo Theories) logic (RQ 2). ...
... The research field of model-based fault diagnosis goes back to the seminal works of 1987 by De Kleer [7] and Reiter [33]; many ideas, especially the search for causes, go back even further, to Brown in 1974 [34]. Nowadays, many diagnosis algorithms exist [35,36,11,37]. In our article we attempt to formulate a theory which explains how to use these diagnosis algorithms in the context of physical systems and through the use of residual values. ...
... Rules have the advantage to be interpretable and editable by human experts. Additionally, in the last 30 years a significant amount of research and experiments have been carried out to develop efficient algorithms to solve large numbers of symbolic rules to perform diagnosis [7,11,6]. ...
Article
In this article we describe a novel diagnosis methodology for physical systems such as industrial production systems. The article consists of two parts: part one analyzes the differences between using sensor values and using residual values for fault diagnosis. Residual values denote the health of a component by comparing sensor values to a predefined model of normal behaviour. We further analyze how faults propagate through components of a physical system and argue for the use of residual values for diagnosing physical systems. In part two we extend the theory of established consistency-based diagnosis algorithms to use residual values. We also illustrate how users of the presented diagnosis methodology are free to substitute the residual-generating equations and the diagnosis algorithm to suit their specific needs. For diagnosis, we present the algorithm HySD, based on Satisfiability Modulo Linear Arithmetic. We present an implementation of HySD using threshold values and a symbolic diagnosis approach. However, the approach is also suitable to integrate modern machine learning methods for anomaly detection and combine them with a multitude of diagnosis approaches. Through experiments on the process-industry benchmark Tennessee Eastman Process and another benchmark consisting of multiple tank systems we show the feasibility of our approach. Overall, we show how our novel diagnosis approach offers a practical methodology that allows industry to advance from current state-of-the-art anomaly detection to automated fault diagnosis. Keywords: Diagnosis; Fault detection and isolation; Qualitative physics; Satisfiability
... The diagnostic problem occurs when there exists an inconsistency between the model of the system and the observations. Since Reiter posed the MBD problem and its algorithm, a large number of significant and improved algorithms have been proposed [2]-[11], [14]-[24]. These algorithms provide a way to analyze complex systems in various areas, including software fault localization, type error debugging, debugging of relational specifications, the automotive industry, and design debugging, among many others. ...
... For traditional algorithms, the system description for diagnosis is formulated in propositional logic; one exception is a novel Simulated Annealing (SA) model [17], which translates MBD into a polynomial minimization problem. Among the mainstream algorithms with a propositional logic formulation, SAFARI [2,18] obtains a solution through multiple runs of stochastic search. Although this method reduces the runtime, it cannot guarantee that its solution is a cardinality-minimal diagnosis. ...
Article
Full-text available
Model-based Diagnosis (MBD) with multiple observations is a computationally challenging problem with many applications, and solving it is attracting more and more attention. This paper proposes an improved algorithm, called Improved implicit Hitting Set Dualization (IHSD), which integrates gate domination from recent work for computing cardinality-minimal aggregated diagnoses in MBD problems. First, our approach works by separating components into dominated and non-dominated components according to the structure of the diagnosed system. The separated components are modelled as hard clauses and soft clauses, respectively. Additionally, two feasible approaches, called IHSDa and IHSDb, are proposed to expand one cardinality-minimal aggregated diagnosis into further diagnoses. Experimental results on the 74XXX and ISCAS85 benchmarks clearly show that the IHSD algorithm improves on HSD, DC, and DC*. Moreover, IHSDa and IHSDb outperform HSD in computing more diagnoses.
... approaches to diagnose physical systems have been presented. But only very few approaches are actually usable outside of limited use-cases (Feldman, Provan, & van Gemund, 2009; Feldman, Provan, & van Gemund, 2010; Stern, Kalech, & Elimelech, 2014; Khorasgani & Biswas, 2017). However, other domains seem to have tackled the problem (Sampath, Sengupta, Lafortune, Sinnamohideen, & Teneketzis, 1995; Leitão, Rosso, Leal, & Zoitl, 2020; M. J. Daigle et al., 2010). ...
... Consequently, we create the rules set Φ = {φ 0 , φ 1 , ...}. This rule set is the basis to use traditional diagnosis algorithms such as GDE (De Kleer & Williams, 1987), Reiter's diagnosis lattice (Reiter, 1987), or SAFARI (Feldman et al., 2010). ...
Conference Paper
Full-text available
This article presents a novel approach to diagnose faults in production machinery. A novel data-driven approach is presented to learn an approximation of dependencies between variables using Spearman correlation. It is further shown, how the approximation of the dependencies are used to create propositional logic rules for fault diagnosis. The article presents two novel algorithms: 1) to estimate dependencies from process data and 2) to create propositional logic diagnosis rules from those connections and perform consistency-based fault diagnosis. The presented approach was validated using three experiments. The first two show that the presented approach works well for injection molding machines and a simulation of a four-tank system. The limits of the presented method are shown with the third experiment containing sets of highly correlated signals.
... approaches to diagnose physical systems have been presented. But only very few approaches are actually usable outside of limited use-cases (Feldman, Provan, & van Gemund, 2009; Feldman, Provan, & van Gemund, 2010; Stern, Kalech, & Elimelech, 2014; Khorasgani & Biswas, 2017). However, other domains seem to have tackled the problem (Sampath, Sengupta, Lafortune, Sinnamohideen, & Teneketzis, 1995; Leitão, Rosso, Leal, & Zoitl, 2020; M. J. Daigle et al., 2010). ...
... Consequently, we create the rules set Φ = {φ 0 , φ 1 , ...}. This rule set is the basis to use traditional diagnosis algorithms such as GDE (De Kleer & Williams, 1987), Reiter's diagnosis lattice (Reiter, 1987), or SAFARI (Feldman et al., 2010). ...
Article
Full-text available
This article presents a novel approach to diagnose faults in injection molding machines. A novel data-driven approach is presented to learn an approximation of dependencies between variables using Spearman correlation. It is further shown, how the approximation of the dependencies are used to create propositional logic rules for fault diagnosis. The article presents two novel algorithms: 1) to estimate dependencies from process data and 2) to create propositional logic diagnosis rules from those connections and perform consistency-based fault diagnosis. The presented approach was validated using three experiments. The first two show that the presented approach works well for injection molding machines and a simulation of a four-tank system. The limits of the presented method are shown with the third experiment containing sets of highly correlated signals.
... Another incomplete, but computationally efficient approach is SAFARI, which uses stochastic local search to compute diagnoses. SAFARI guarantees neither minimal cardinality nor completeness, although it has been shown to diagnose far larger models than competing algorithms (Feldman, Provan, and van Gemund 2010). ...
... Having conflicts of small cardinality in Λ will result in a faster HS search and fewer iterations of PDDS. We perform the PDDS minimization of conflicts and diagnoses (line 9) by employing a greedy algorithm similar to the one used in SAFARI (Feldman, Provan, and van Gemund 2010). First we flip a random health variable in Lit+(λ), and use a consistency check to see if it is still a conflict. ...
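The greedy flip-and-check minimization described in the snippet above can be sketched as follows. This is an illustrative approximation of SAFARI's minimization step only; the function name and the toy `consistent` oracle (standing in for a real consistency check, e.g. a SAT call) are hypothetical.

```python
import random

def safari_style_minimize(diagnosis, consistent, tries=100, rng=random):
    """Greedy stochastic minimization in the spirit of SAFARI:
    repeatedly pick a component currently marked faulty and try
    flipping it back to healthy; keep the flip only if the smaller
    candidate is still accepted by the `consistent` oracle."""
    diag = set(diagnosis)
    for _ in range(tries):
        if not diag:
            break
        comp = rng.choice(sorted(diag))    # flip a random health variable
        if consistent(diag - {comp}):      # consistency check
            diag.discard(comp)             # keep the successful flip
    return diag

# Toy oracle: any candidate containing "g3" explains the observations.
consistent = lambda d: "g3" in d
minimized = safari_style_minimize({"g1", "g2", "g3"}, consistent)
assert "g3" in minimized                   # "g3" is never flipped away
```

Because failed flips are simply discarded, the result is subset-minimal with high probability but carries no guarantee of minimal cardinality, matching the trade-off described in the cited work.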
... On larger circuits CDA* was not able to solve any instance. We used a partial set of the observations used by (Feldman, Provan, and van Gemund 2010). Table 2 shows the percentage of instances where the minimal cardinality was found under the 30 second time-limit. ...
Article
A model-based diagnosis problem occurs when an observation is inconsistent with the assumption that the diagnosed system is not faulty. The task of a diagnosis engine is to compute diagnoses, which are assumptions on the health of components in the diagnosed system that explain the observation. In this paper, we extend Reiter's well-known theory of diagnosis by exploiting the duality of the relation between conflicts and diagnoses. This duality means that a diagnosis is a hitting set of conflicts, but a conflict is also a hitting set of diagnoses. We use this property to interleave the search for diagnoses and conflicts: a set of conflicts can guide the search for diagnoses, and the computed diagnoses can guide the search for more conflicts. We provide the formal basis for this dual conflict-diagnosis relation, and propose a novel diagnosis algorithm that exploits this duality. Experimental results show that the new algorithm is able to find a minimal cardinality diagnosis faster than the well-known Conflict-Directed A*.
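The hitting-set duality stated in the abstract can be checked on a toy example with a brute-force minimal-hitting-set routine. This is an illustrative sketch under assumed toy conflicts, not the paper's interleaved algorithm.

```python
from itertools import combinations

def minimal_hitting_sets(sets):
    """Brute-force, smallest-first enumeration of all
    inclusion-minimal hitting sets of a collection of sets."""
    universe = sorted(set().union(*sets))
    hits = []
    for k in range(1, len(universe) + 1):
        for cand in combinations(universe, k):
            c = set(cand)
            # keep c if it intersects every set and no smaller hit is inside it
            if all(c & s for s in sets) and not any(h <= c for h in hits):
                hits.append(c)
    return hits

# Diagnoses are minimal hitting sets of the conflicts ...
conflicts = [{"A", "B"}, {"B", "C"}]
diagnoses = minimal_hitting_sets(conflicts)       # [{'B'}, {'A', 'C'}]
# ... and, dually, conflicts are minimal hitting sets of the diagnoses.
assert minimal_hitting_sets(diagnoses) == conflicts
```

The round trip recovering the original conflicts from the diagnoses is exactly the duality the paper exploits to interleave the two searches.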
... The hardware limitations play an important role when solving problems naturally formulated as a Hamiltonian with k-local interactions with k > 2. There are various optimization problems in fundamental physics, computer science, and applications that are natively k-local. Examples in physics, as well as computer science, are computing the partition function of a four-dimensional pure lattice gauge theory [33,34], measuring the fault-tolerance in topological colour codes [35], and solving k-SAT problems with k > 2. Examples of practical applications are circuit fault diagnosis [36,29], molecular similarity measurement [37], molecular conformational sampling [38], and traffic light synchronization [39]. ...
Article
Full-text available
Recently, there has been considerable interest in solving optimization problems by mapping these onto a binary representation, sparked mostly by the use of quantum annealing machines. Such binary representation is reminiscent of a discrete physical two-state system, such as the Ising model. As such, physics-inspired techniques—commonly used in fundamental physics studies—are ideally suited to solve optimization problems in a binary format. While binary representations can often be found for paradigmatic optimization problems, these typically result in k-local higher-order unconstrained binary optimization cost functions. In this work, we discuss the effects of locality reduction needed for the majority of the currently available quantum and quantum-inspired solvers that can only accommodate 2-local (quadratic) cost functions. General locality reduction approaches require the introduction of ancillary variables which cause an overhead over the native problem. Using a parallel tempering Monte Carlo solver on Microsoft Azure Quantum, as well as k-local binary problems with planted solutions, we show that, after reduction to a corresponding 2-local representation, the problems become considerably harder to solve. We further quantify the increase in computational hardness introduced by the reduction algorithm by measuring the variation of number of variables, statistics of the coefficient values, and the population annealing entropic family size. Our results demonstrate the importance of avoiding locality reduction when solving optimization problems.
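The locality reduction discussed in the abstract can be illustrated with the classic Rosenberg substitution for binary variables: a product x·y is replaced by an ancilla w together with a quadratic penalty that vanishes exactly when w = x·y. This is a textbook sketch under a hypothetical cubic term, not the paper's specific reduction algorithm.

```python
from itertools import product

def rosenberg_penalty(x, y, w):
    """Quadratic penalty that is 0 iff w == x*y for binary inputs,
    and at least 1 otherwise."""
    return x*y - 2*x*w - 2*y*w + 3*w

def cubic(x, y, z):
    return x * y * z                     # native 3-local term

def quadratized(x, y, z, w, weight=2):
    """2-local substitute: x*y is replaced by the ancilla w,
    enforced by the Rosenberg penalty with a sufficient weight."""
    return w * z + weight * rosenberg_penalty(x, y, w)

# Minimizing over the ancilla reproduces the cubic term everywhere.
for x, y, z in product((0, 1), repeat=3):
    assert cubic(x, y, z) == min(quadratized(x, y, z, w) for w in (0, 1))
```

Each such substitution adds one ancillary variable and several quadratic couplers, which is precisely the overhead the paper quantifies.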