Figure 9 - uploaded by Mohammad Badiul Islam
FLT Storage Size Comparison

Source publication
Article
Full-text available
A rule-based knowledge system consists of three main components: a set of rules, facts corresponding to the data of a case that are fed to the reasoner, and an inference engine. In general, facts are stored in (relational) databases that represent knowledge in a first-order formalism. However, legal knowledge uses defeasible deontic logic for k...
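The three components named above can be sketched as a minimal forward-chaining inference engine; the rule and fact names below are illustrative only, not drawn from the source publication:

```python
# Minimal forward-chaining inference engine: rules fire on known facts
# until no new fact can be derived (a fixed point is reached).

def infer(facts, rules):
    """facts: set of atoms; rules: list of (premises, conclusion) pairs."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and premises <= derived:
                derived.add(conclusion)
                changed = True
    return derived

# Hypothetical legal rules: premises are a set, conclusion a single atom.
rules = [
    ({"signed_contract", "received_goods"}, "obligation_to_pay"),
    ({"obligation_to_pay", "missed_due_date"}, "in_default"),
]
facts = {"signed_contract", "received_goods", "missed_due_date"}
print(infer(facts, rules))
```

Note this sketch handles only strict, monotonic rules; the defeasible deontic logic the article refers to additionally requires rule priorities and defeaters, which a plain fixed-point loop like this cannot express.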

Context in source publication

Context 1
... measured the FLT storage size for the above 9 rule sets and the corresponding FLT construction time. Figure 9 shows the index sizes. Given the same numP, the larger the number of predicates (maxP) involved in a rule, the larger the index size. ...

Citations

... More recently, the authors proposed a theoretical framework of a legal rule-based system for the criminal domain, named CORBS (El Ghosh et al., 2017). The system is founded on a homogeneous integration of a criminal domain ontology with a set of logic rules (Liu et al., 2021). Thus, CORBS stands as a unified framework that supports efficient legal reasoning. ...
Article
Full-text available
Decisions made by legal adjudicators and administrative decision-makers are often founded upon a reservoir of stored experiences, from which a tacit body of expert knowledge is drawn. Such expertise may be implicit and opaque, even to the decision-makers themselves, and generates obstacles when implementing AI for automated decision-making tasks within the legal field, since, to the extent that AI-powered decision-making tools must be founded upon a stock of domain expertise, opacities may proliferate. This raises particular issues within the legal domain, which requires a high level of accountability, and thus transparency. This requires enhanced explainability, which entails that a heterogeneous body of stakeholders understand the mechanism underlying the algorithm to the extent that an explanation can be furnished. However, the “black-box” nature of some AI variants, such as deep learning, remains unresolved, and many machine decisions therefore remain poorly understood. This survey paper, based upon a unique interdisciplinary collaboration between legal and AI experts, provides a review of the explainability spectrum, as informed by a systematic survey of relevant research papers, and categorises the results. The article establishes a novel taxonomy, linking the differing forms of legal inference at play within particular legal sub-domains to specific forms of algorithmic decision-making. The diverse categories demonstrate different dimensions in explainable AI (XAI) research. Thus, the survey departs from the preceding monolithic approach to legal reasoning and decision-making by incorporating heterogeneity in legal logics: a feature which requires elaboration, and should be accounted for when designing AI-driven decision-making systems for the legal field.
It is thereby hoped that administrative decision-makers, court adjudicators, researchers, and practitioners can gain unique insights into explainability, and utilise the survey as the basis for further research within the field.
... Several works on knowledge graph reasoning do not involve input text for inferring new relation edges. These tasks have been solved using different approaches, including logic rules [2,33,41,63], Bayesian models [47,58,70], distributed representations [36,55,79], neural networks [38,45,60,74], and reinforcement learning [19,37,64]. Relation prediction, as a special class of knowledge graph reasoning, predicts missing relation edges between two entities [12,13,44,50,65]. ...
Article
Full-text available
Contextual Path Generation (CPG) refers to the task of generating knowledge path(s) between a pair of entities mentioned in an input textual context to determine the semantic connection between them. Such knowledge paths, also called contextual paths, can be very useful in many advanced information retrieval applications. Nevertheless, CPG involves several technical challenges, namely, sparse and noisy input context, missing relations in knowledge graphs, and generation of ill-formed and irrelevant knowledge paths. In this paper, we propose a transformer-based model architecture. In this approach, we leverage a mixture of pre-trained word and knowledge graph embeddings to encode the semantics of input context, a transformer decoder to perform path generation controlled by the encoded input context and head entity to stay relevant to the context, and scaling methods to sample a well-formed path. We evaluate our proposed CPG models derived using the above architecture on two real datasets, both consisting of Wikinews articles as input context documents and ground truth contextual paths, as well as a large synthetic dataset to conduct larger-scale experiments. Our experiments show that our proposed models outperform the baseline models, and the scaling methods contribute to better quality contextual paths. We further analyze how CPG accuracy can be affected by different amounts of context data and missing relations in the knowledge graph. Finally, we demonstrate that an answer model for knowledge graph questions adapted for CPG could not perform well due to the lack of an effective path generation module.
... The development of mobile hybrid system technology in this new framework system model allows transactions and medicine-distribution supervision to be carried out online and in real time [14]. The purpose of this research is to create a new framework system model by combining a supply chain [15] and an expert system [16] for medicine distribution using the rule-based reasoning method [17]. The rule-based reasoning method is well suited to this research because it can encode the regulations and the knowledge of pharmaceutical experts into a system in the form of an algorithm; it even allows experts to be directly involved in the research [18]. ...
... Likewise, the output of the system itself can be the input for the supply chain information system. A combined framework of a supply chain and an expert system using the rule-based reasoning method can improve overall system performance [17], [41]. The combined architecture in this research can be seen in Figure 5. ...
Article
Full-text available
The medicine distribution supply chain is important, especially during the COVID-19 pandemic, because delays in medicine distribution can increase risks for patients. So far, medicines have been distributed exclusively, and some are distributed on a limited basis because they require strict supervision from the Medicine Supervisory Agency in each department. However, this distribution scheme has a weakness: if one public health center runs short of certain types of medicines, it cannot request them directly from another public health center, so the availability of medicines may not be ensured. An integrated process is needed that can accommodate regulations and leadership policies and can be used for the logistics management of medicine distribution. This study creates a new model by combining supply chains with information systems and expert systems, using the rule-based reasoning method as an inference engine, which can be developed for medicine distribution based on a mobile hybrid system at the Demak District Health Office, Indonesia. The resulting framework model, based on a mobile hybrid system, can facilitate the distribution of medicines effectively and efficiently.
... Still, there exist only comparatively few systems that, in fact, automate reasoning processes based on normative knowledge. Notable examples are provided by Liu et al., who interpret legal norms in a defeasible deontic logic and provide automation for it [26], and the SPINdle prover [24] for propositional (modal) defeasible reasoning, which has been used in multiple works in the normative application domain. ...
Preprint
LegalRuleML is a comprehensive XML-based representation framework for modeling and exchanging normative rules. The TPTP input and output formats, on the other hand, are general-purpose standards for the interaction with automated reasoning systems. In this paper we provide a bridge between the two communities by (i) defining a logic-pluralistic normative reasoning language based on the TPTP format, (ii) providing a translation scheme between relevant fragments of LegalRuleML and this language, and (iii) proposing a flexible architecture for automated normative reasoning based on this translation. We exemplarily instantiate and demonstrate the approach with three different normative logics.
... Due to its simplicity in codifying the knowledge of human experts, rule-based reasoning systems have been widely used in various knowledge-intensive expert systems. For example, a rule-based system has been used for legal reasoning (Liu et al., 2021), safety assessment (Tang et al., 2020), emergency management (Jain et al., 2021), and online communication (Akbar et al., 2014). Specifically, in the biodiversity research area, rule-based systems are also widely used, for example, for predicting the impact of land-use changes on biodiversity (Scolozzi & Geneletti, 2011), molecular biodiversity database management (Pannarale et al., 2012), or for generating linked biodiversity data (Akbar et al., 2020). ...
Article
Full-text available
Aim/Purpose: Although the significance of data provenance has been recognized in a variety of sectors, there is currently no standardized technique or approach for gathering data provenance. The present automated techniques mostly employ workflow-based strategies. Unfortunately, the majority of current information systems do not embrace this strategy, particularly biodiversity information systems, in which data is acquired by a variety of persons using a wide range of equipment, tools, and protocols. Background: This article presents an automated technique for producing temporal data provenance that is independent of biodiversity information systems. The approach depends on changes in the contextual information of data items. By mapping the modifications to a schema, a standardized representation of data provenance may be created. Consequently, temporal information may be automatically inferred. Methodology: The research methodology consists of three main activities: database event detection, event-schema mapping, and temporal information inference. First, a list of events is detected from databases. The detected events are then mapped to an ontology, so that a common representation of data provenance is obtained. Based on the derived data provenance, rule-based reasoning is automatically used to infer temporal information. Consequently, a temporal provenance is produced. Contribution: This paper provides a new method for generating data provenance automatically without interfering with the existing biodiversity information system. In addition, it does not mandate that any information system adhere to any particular form. Ontology and the rule-based system, as the core components of the solution, have been confirmed to be highly valuable in biodiversity science. Findings: Detaching the solution from any biodiversity information system provides scalability in the implementation.
Based on the evaluation of a typical biodiversity information system for species traits of plants, a large amount of temporal information can be generated. Using rules to encode different types of knowledge provides high flexibility in generating temporal information, enabling different temporal-based analyses and reasoning. Recommendations for Practitioners: The strategy is based on the contextual information of data items, yet most information systems simply save the most recent values. As a result, for the solution to function properly, database snapshots must be stored on a frequent basis. Furthermore, a more practical technique for recording changes in contextual information would be preferable. Recommendations for Researchers: The capability to uniformly represent events using a schema has paved the way for automatic inference of temporal information. Therefore, a richer representation of temporal information should be investigated further. This work also demonstrates that rule-based inference provides the flexibility to encode different types of knowledge from experts, so a variety of temporal-based data analyses and reasoning can be performed. It would therefore be worthwhile to investigate multiple domain-oriented bodies of knowledge using the solution. Impact on Society: Using a typical information system to store and manage biodiversity data has not prevented us from generating data provenance. Since there is no restriction on the type of information system, our solution has high potential to be widely adopted. Future Research: The data analysis in this work was limited to species-traits data. However, there are other types of biodiversity data, including genetic composition, species population, and community composition. In the future, this work will be expanded to cover all those types of biodiversity data.
The ultimate goal is to have a standard methodology or strategy for collecting provenance from any biodiversity data, regardless of how the data was stored or managed.
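The three-step methodology (detecting events from database snapshots, mapping them to a common representation, and inferring temporal information with rules) could be sketched as follows; the table schema, field names, and the single inference rule are assumptions for illustration, not the paper's actual ontology:

```python
# Sketch with a hypothetical species-trait record: diff two database
# snapshots to detect update events, then apply a simple rule to derive
# temporal provenance statements.
from datetime import date

snapshot_t1 = {"leaf_width": "4 cm", "habitat": "wetland"}
snapshot_t2 = {"leaf_width": "5 cm", "habitat": "wetland"}

def detect_events(old, new, observed_on):
    """Emit one 'update' event per attribute whose value changed."""
    events = []
    for key in old.keys() | new.keys():
        if old.get(key) != new.get(key):
            events.append({"attribute": key, "type": "update",
                           "old": old.get(key), "new": new.get(key),
                           "valid_from": observed_on})
    return events

def infer_temporal(events):
    # Rule: an update event closes the validity of the old value and
    # opens the validity interval of the new one.
    return [f"{e['attribute']} changed to {e['new']!r} "
            f"(valid from {e['valid_from']})" for e in events]

events = detect_events(snapshot_t1, snapshot_t2, date(2022, 3, 1))
print(infer_temporal(events))
```

In the article's actual design the events are mapped to an ontology and the inference step is a rule engine rather than a hard-coded function; the snapshot diff above only illustrates why frequent snapshots are a prerequisite for the approach.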
... Deontic logic is a formal means for expressing norms (statements of moral obligation), and it is a useful tool in philosophy. Several authors, including Dimishkovska (2017), Governatori (2018), Navarro and Rodriguez (2014), and Liu et al. (2021), have applied deontic approaches to the law. Azzopardi et al. (2016) extend deontic techniques specifically to contracts. ...
Article
Full-text available
We show that the fundamental legal structure of a well-written financial contract follows a state-transition logic that can be formalized mathematically as a finite-state machine (specifically, a deterministic finite automaton or DFA). The automaton defines the states that a financial relationship can be in, such as “default,” “delinquency,” “performing,” etc., and it defines an “alphabet” of events that can trigger state transitions, such as “payment arrives,” “due date passes,” etc. The core of a contract describes the rules by which different sequences of events trigger particular sequences of state transitions in the relationship between the counterparties. By conceptualizing and representing the legal structure of a contract in this way, we expose it to a range of powerful tools and results from the theory of computation. These allow, for example, automated reasoning to determine whether a contract is internally coherent and whether it is complete relative to a particular event alphabet. We illustrate the process by representing a simple loan agreement as an automaton.
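The contract-as-automaton idea can be illustrated with a small sketch; the states and event alphabet below are illustrative, loosely following the states and events the abstract names (“performing”, “delinquency”, “default”; “payment arrives”, “due date passes”):

```python
# A loan agreement as a deterministic finite automaton: states of the
# financial relationship, and events that trigger state transitions.

TRANSITIONS = {
    ("performing", "payment_arrives"): "performing",
    ("performing", "due_date_passes"): "delinquent",
    ("delinquent", "payment_arrives"): "performing",
    ("delinquent", "grace_period_expires"): "default",
}

def run(events, state="performing"):
    """Replay a sequence of events and return the final state."""
    for event in events:
        # Undefined (state, event) pairs leave the state unchanged here;
        # a stricter DFA would reject them as incomplete.
        state = TRANSITIONS.get((state, event), state)
    return state

print(run(["due_date_passes", "payment_arrives"]))
print(run(["due_date_passes", "grace_period_expires"]))
```

The completeness check the abstract mentions corresponds to verifying that every (state, event) pair over the alphabet has a defined transition; coherence checks amount to standard reachability analysis on the transition table.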
... Given that the readily available FAERS data is already 10 times larger than the data processed in [48], batch processing would require processing times on the order of days when records are processed one by one sequentially, unless there is a significant increase in available computational power. However, standard database optimisation techniques, such as query grouping, query optimisation, the use of cursors, parallelising queries and reasoning, and custom indexing, can reduce this time significantly [53]. ...
Article
Full-text available
Traditionally, computational knowledge representation and reasoning focused its attention on rich domains such as the law. The main underlying assumption of traditional legal knowledge representation and reasoning is that knowledge and data are both available in main memory. However, in the era of big data, where large amounts of data are generated daily, an increasing range of scientific disciplines, as well as business and human activities, are becoming data-driven. This chapter summarises existing research on legal representation and reasoning in order to uncover technical challenges associated both with the integration of rules and databases and with the main concepts of the big data landscape. We expect these challenges to lead naturally to future research directions towards achieving large-scale legal reasoning with rules and databases.
Article
Full-text available
This study aims to build a system for evaluating a person's level of legal understanding using a fuzzy logic algorithm implemented in Python. An input component receives data on legal-understanding groups, and an output component produces a legal-understanding score. The fuzzy logic algorithm is used to process respondents' answer data by data group, and system tests showed accurate results. These findings confirm the system's ability to provide an accurate baseline assessment of a person's level of legal understanding. The system shows potential as an effective evaluation tool for measuring legal understanding across sample groups at different comprehension levels, particularly in the legal field. By accounting for the diversity of participants, this research provides a solid foundation for developing similar systems in various future comprehension-evaluation contexts.
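As a hedged sketch of how such a fuzzy evaluation might look in Python (the membership functions, labels, and thresholds are assumptions for illustration, not taken from the study):

```python
# Triangular fuzzy membership over a questionnaire score in [0, 100],
# defuzzified to a single comprehension label.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def evaluate(score):
    memberships = {
        "low": tri(score, -1, 0, 50),
        "medium": tri(score, 25, 50, 75),
        "high": tri(score, 50, 100, 101),
    }
    # Defuzzify by picking the label with the highest membership degree.
    return max(memberships, key=memberships.get)

print(evaluate(80))  # falls mostly under the "high" membership
```

A production system would typically aggregate per-question memberships through a fuzzy rule base before defuzzifying (e.g. with the centroid method), rather than classifying a single aggregate score as here.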