Fig 1 - uploaded by Kathrin Dentler
Classification performance for SNOMED CT over time.  


Source publication
Article
Full-text available
This paper provides a survey of and a comparison between state-of-the-art Semantic Web reasoners that succeed in classifying large ontologies expressed in the tractable OWL 2 EL profile. Reasoners are characterized along several dimensions: The first dimension comprises underlying reasoning characteristics, such as the employed reasoning method and its...

Contexts in source publication

Context 1
... classified SNOMED CT in under a minute and FaCT++ in around 20 minutes. Figure 1 shows how the classification performance for SNOMED CT has been improving over recent years. Only successful outcomes (i.e. ...
Context 2
... a result of this study we found that the Snorocket Protégé plugin gave correct results when run with Java 1.5 but produced some incorrect results when run with Java 1.6. The root cause of this problem has been fixed in the subsequent release (1.3.3). Furthermore, a new version (0.4.1) of CEL was released, which does not output the singletons for equivalent properties and concepts. ...

Similar publications

Conference Paper
Full-text available
This paper presents an analysis of state-of-the-art solutions for mapping between a relational database and an ontology, adding reasoning capabilities and offering the possibility to query the inferred information. We analyzed four approaches: Jena with D2RQ, Jena with R2RML, KAON2 and OWL API. In order to highlight the differences between the four a...
Article
Full-text available
Human studies are the most important source of evidence for advancing our understanding of health and disease. Yet there is no standard method for investigators to query for studies that are relevant to their scientific hypothesis. Querying data and meta-data across clinical trials and observational studies is difficult because of the lack of seman...
Conference Paper
Full-text available
UEL is a system that computes unifiers for unification problems formulated in the description logic EL. EL is a description logic with restricted expressivity, but which is still expressive enough for the formal representation of biomedical ontologies, such as the large medical ontology SNOMED CT. We propose to use UEL as a tool to detect redunda...

Citations

... The former assumes that further knowledge, data, and relations can be introduced, whereas the latter assumes that all knowledge, and hence all data, is available. These assumptions are processed using different inference engines, taking into account not only the performance of description logic inference but also the type of description logic and the acceptance of mixed description logic [16]. ...
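The open-world versus closed-world distinction mentioned above can be made concrete in a short sketch (all names here are illustrative, not from the cited work): under the closed-world assumption (CWA) any fact not asserted is false, while under the open-world assumption (OWA) it is merely unknown.

```python
# Minimal sketch of CWA vs OWA query answering over a set of
# asserted facts. Names and facts are hypothetical examples.

facts = {("teaches", "alice", "logic")}

def holds(fact, assumption):
    """Answer a ground query under the given world assumption."""
    if fact in facts:
        return "true"
    # Not asserted: CWA concludes falsity, OWA leaves it open.
    return "false" if assumption == "CWA" else "unknown"

query = ("teaches", "alice", "algebra")
cwa_answer = holds(query, "CWA")  # absence of the fact implies falsity
owa_answer = holds(query, "OWA")  # absence of the fact implies nothing
```

This is why OWL reasoners, which adopt the open-world assumption, will not infer that a relation fails to hold simply because it was never stated.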
... Semantic artifacts, ranging from XML [17] to RDF [18] and OWL syntax, are interconnected and can be combined with related artifacts. For instance, axioms in an ontology can be written using SWRL rules, necessitating the use of an SWRL-compatible reasoner for inference [16]. It's essential to note that ontologies inferred with different reasoners may not be compatible, especially when one reasoner lacks features present in another. ...
... Given that the modeling of reactions within this ontology is already axiom-intensive, incorporating additional conditions could potentially outweigh the benefits. Additionally, these mathematical expressions [28] would currently require the addition of other logic syntaxes such as SWRL or more sophisticated reasoning engines, and would thereby interfere with some reasoning engines [16]. While reasoners exist that can infer this compounded description logic, ways of modeling mathematical relations, which are quite important for limiting reactions and catalysts to certain reaction conditions, are not listed here. ...
Article
Full-text available
Maximizing the use of digitally captured data is a key requirement for many of the late adopters of digital infrastructure. One of the newcomers is the chemical industry in the area of digitized laboratories. Here, tools and services that satisfy individual needs still need to be developed and distributed within the community. This work explores the potential of using graph databases — specifically those modeled via ontological knowledge graphs — to describe complex data linkages and draw logical conclusions. While knowledge graphs are not widely utilized in catalysis research, this study introduces a methodology to highlight their usability for semantic description and integration into diverse value chains in contact with the domain of (bio)chemistry and catalysis. A demonstration is given of how ontologies and their knowledge graphs can be applied to perform essential functions of semantic annotation for chemical reactions, which are difficult to model relationally. Using description logic, traditional data description methods can be set aside, showing how logical inferences at the machine level can enrich data. This work also illustrates the seamless integration of this enhanced data into process simulations, connecting semantic description with practical applications. The immediate benefits for catalysis research are emphasized and the development of new tools and services envisioned. By clarifying how these graphs can be integrated into existing workflows, researchers are empowered to make the most of digitally acquired data in catalytic processes. This practical methodology lays the foundation for improved decision-making and innovation, fostering advancements in the field of catalysis research.
... A reasoner is a tool that enables reasoning tasks, based primarily on RDF and OWL, that support the identification and repair of inconsistencies and incoherence in the ontology [88]. In addition, this reasoner performs well when assessing complex and expressive ontologies relative to other alternatives, and is compatible with the Protégé ontology editor [89], used in this research. ...
Article
Full-text available
Project-based organizations (PBOs) rely on project management as their core business process. These organizations are characterized by their disaggregated structure and diverse authority distribution, leading to complications in synchronizing projects, governance, and functional divisions. In this regard, there is a need for a tool to manage this complexity. This paper introduces a novel approach, the IModel, for modeling, designing and analyzing PBOs, following the Design Science Research Method, which seeks to create an artifact that solves a real-world problem in a specific context. This artifact is developed through a conceptual integration between project management (PM) and enterprise architecture (EA) sources based on ontologies. Ontologies are powerful, machine-readable, graph-structured models which enable sophisticated reasoning and querying over models. IModel fuses the organizational domain from the ArchiMate EA language with PM insights from PMBOK 7th edition. The resulting IModel has been submitted to expert judgement to assess its intended purposes, as well as its potential to be applied in the real world to support PBOs.
... While the introduction of highly optimised implementations of Tableau-based algorithms [21] has enabled the use of higher expressivities in practical applications, care nevertheless has to be exercised to limit the expressivity as far as possible, especially for ontologies involving tens of thousands of axioms. Reasoners have different strengths in relation to different types of ontology structures, but it is not immediately clear which structure is more amenable to a given reasoner [22][23][24][25][26][27][28][29]. ...
Article
Full-text available
Simple Summary Ontology-based AI promises a more flexible and cost-effective approach to the validation of cancer registry data over traditional approaches and may allow a more straightforward means of federating centralised data-harmonisation processes. The advantage of using ontologies is that they unite the conceptual and logical aspects of a given domain and, therefore, provide an inherent means for expressing data validation rule sets. The idiosyncrasies of data validation, however, impose certain constraints on the ontology structure of AI-based reasoning and require design patterns that are not immediately obvious. The design patterns presented in this work are generic to other domains beyond cancer registries and serve to point out the types of issues that validation ontologies have to confront, with proposed solutions for tackling them. Abstract Data validation in cancer registration is a critical operation but is resource-intensive and has traditionally depended on proprietary software. Ontology-based AI is a novel approach utilising machine reasoning based on axioms formally described in description logic. This is a different approach from deep learning AI techniques but not exclusive of them. The advantage of the ontology approach lies in its ability to address a number of challenges concurrently. The disadvantages relate to computational costs, which increase with language expressivity and the size of data sets, and class containment restrictions imposed by description logics. Both these aspects would benefit from the availability of design patterns, which is the motivation behind this study. We modelled the European cancer registry data validation rules in description logic using a number of design patterns and showed the viability of the approach. 
Reasoning speeds are a limiting factor for large cancer registry data sets comprising many hundreds of thousands of records, but these can be offset to a certain extent by developing the ontology in a modular way. Data validation is also a highly parallelisable process. Important potential future work in this domain would be to identify and optimise reusable design patterns, paying particular attention to avoiding any unintended reasoning efficiency hotspots.
... There is also the possibility of exploiting the strengths of the various DL reasoners, and future work will seek to understand the reason behind the performance differences observed in Table 2 in order to improve performance on the basis of the types of axioms. Whereas others have addressed comparisons of reasoners (16)(17)(18)(19)(20)(21)(22), work has generally focused on their accuracy, the types of operations and platforms they support, and overall performance rather than the strengths of the reasoners given a particular ontology structure. OWL2Bench, however, provides a promising approach (23). ...
Article
Full-text available
Ontologies can provide a valuable role in the work of cancer registration, particularly as a tool for managing and navigating the various classification systems and coding rules. Further advantages accrue from the ability to formalise the coding rule base using description logics and thereby benefit from the associated automatic reasoning functionality. Drawing from earlier work that showed the viability of applying ontologies in the data validation tasks of cancer registries, an ontology was created using a modular approach to handle the specific checks for childhood cancers. The ontology was able to handle successfully the various inter-variable checks using the axiomatic constructs of the web ontology language. Application of an ontological approach for data validation can greatly simplify the maintenance of the coding rules and facilitate the federation of any centralised validation process to the local level. It also provides an improved means of visualising the rule interdependencies from different perspectives. Performance of the automatic reasoning process can be a limiting issue for very large datasets and will be a focus for future work. Results are provided showing how the ontology is able to validate cancer case records typical for childhood tumours.
... The experimental results in scenario 1 show that the Debugger does not scale well for SPL architectures that include more than 225 elements. The expressiveness obtained by description logics is achieved at the expense of higher computational complexity that is present in current ontology reasoning engines [46]. This issue is exacerbated by the fact that the Debugger employs multiple queries and consistency checks in the debugging process. ...
Article
Full-text available
There has been increasing interest in modeling software product lines (SPLs) using architecture description languages (ADLs). However, it is sometimes required to reverse engineer an SPL architecture from a set of product architectures. This procedure needs to be performed manually, as no tool support currently exists to automate this task. In this case, verifying consistency between the product architectures and the reverse-engineered SPL architecture is still a challenge; particularly, verifying component interconnection aspects of product architectures with respect to the commonality and variability of an SPL architecture represented in an ADL. Current approaches are unable to detect whether the component interconnections in a product architecture have inconsistencies with the component interconnections defined by the SPL architecture. To tackle these shortcomings, we developed the Ontology-based Product Architecture Verification (OntoPAV) framework. OntoPAV relies on the ontology formalism to capture the commonality and variability of SPL architectures. Reasoning engines are employed to automatically identify component interconnection inconsistencies among SPL and product architectures. Our evaluation results show that our verifier has a high accuracy for detecting consistency errors and that it scales linearly for architectures from 1000 to 5000 architecture elements.
... Several reasoners exist to achieve this task. These include Pellet, FaCT++, HermiT and Racer [38]. They enable several kinds of inferencing support, such as ontology consistency checking, subsumption inference and class equivalence. ...
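Subsumption inference, one of the reasoning services named above, can be sketched in miniature (this is an illustrative toy, not the algorithm of any particular reasoner, and the class names are made up): a reasoner derives implied subclass relationships by closing the asserted hierarchy under transitivity.

```python
# Naive fixed-point sketch of subsumption inference: compute the
# transitive closure of asserted subclass axioms (sub, sup).
# Real DL reasoners such as Pellet or HermiT do far more, but this
# captures the basic entailment A ⊑ B, B ⊑ C ⟹ A ⊑ C.

def infer_subsumptions(asserted):
    """Return all entailed (sub, sup) pairs from asserted axioms."""
    inferred = set(asserted)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(inferred):
            for (c, d) in list(inferred):
                if b == c and (a, d) not in inferred:
                    inferred.add((a, d))  # derive a ⊑ d
                    changed = True
    return inferred

# Hypothetical toy hierarchy in the style of a medical ontology.
axioms = {("Viral_pneumonia", "Pneumonia"),
          ("Pneumonia", "Lung_disease"),
          ("Lung_disease", "Disease")}
closure = infer_subsumptions(axioms)
# closure now also contains e.g. ("Viral_pneumonia", "Disease")
```

Consistency checking and class equivalence build on the same machinery: equivalence is mutual subsumption, and an unsatisfiable class is one subsumed by the bottom concept.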
... This requires the perception system to be equipped with the minimum basic capabilities in order to sense, in whatever way, the 81 elements involved in the ERE training. Moreover, the ADSB perception system should also be able to sense the 44 pre-crash scenes typologized by NHTSA in [38]. Possible sensing channels are listed in Table 8. ...
Preprint
Although a typical autopilot system far surpasses humans in terms of sensing accuracy, performance stability and response agility, such a system is still far behind humans in the wisdom of understanding an unfamiliar environment with creativity, adaptivity and resiliency. Current AD brains are basically expert systems featuring logical computations, which resemble the thinking flow of a left brain working at the tactical level. A right brain is needed to upgrade the safety of automated driving vehicles to the next generation by making intuitive strategical judgements that can supervise the tactical action planning. In this work, we present the concept of an Automated Driving Strategical Brain (ADSB): a framework of a scene perception and scene safety evaluation system that works at a higher abstraction level, incorporating experience referencing, common-sense inferring and goal-and-value judging capabilities, to provide a contextual perspective for decision making within automated driving planning. The ADSB brain architecture is made up of the Experience Referencing Engine (ERE), the Common-sense Referencing Engine (CIE) and the Goal and Value Keeper (GVK). 1,614,748 cases from the FARS/CRSS database of NHTSA in the period 1975 to 2018 are used for training the ERE model. The kernel of CIE is a trained model, COMET-BART by ATOMIC, which can be used to provide directional advice when tactical-level environmental perception conclusions are ambiguous; it can also use future scenario models to remind tactical-level decision systems to plan ahead of a perceived hazard scene. GVK can take in any additional expert-hand-written rules that are of a qualitative nature. Moreover, we believe that with good scalability, the ADSB approach provides a potential solution to the problem of long-tail corner cases encountered in the validation of a rule-based planning algorithm.
... In ontologies, relationships are not defined for every pair of terms for which the relationship holds, but only for their most specific ancestors. If a connection is not explicitly stated, it can be inferred using chain rules, as provided by a specific kind of theorem prover called a reasoner [66]. ...
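The chain-rule inference described above can be sketched as follows (an illustrative toy with made-up relation and term names, in the spirit of OWL property chains such as located_in ∘ part_of ⟹ located_in, not any specific reasoner's implementation):

```python
# Fixed-point sketch of a property chain rule: from
# located_in(x, y) and part_of(y, z), derive located_in(x, z).
# Only the most specific facts are asserted; the rest are inferred.

def apply_chain_rule(located_in, part_of):
    """Close located_in under composition with part_of."""
    derived = set(located_in)
    changed = True
    while changed:
        changed = False
        for (x, y) in list(derived):
            for (y2, z) in part_of:
                if y == y2 and (x, z) not in derived:
                    derived.add((x, z))  # derive located_in(x, z)
                    changed = True
    return derived

# Hypothetical example: only the most specific links are stated.
located_in = {("nucleus", "cell")}
part_of = {("cell", "tissue"), ("tissue", "organ")}
derived = apply_chain_rule(located_in, part_of)
# derived now also contains ("nucleus", "tissue") and ("nucleus", "organ")
```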
Chapter
Artificial Intelligence (AI) methods, and in particular machine learning (ML), deep learning (DL), Bayesian nets (BNs) and probabilistic reasoning, are offering new tools for computer-assisted drug design (CADD). AI methods are accompanying, extending or even changing existing practices in generating and searching chemical libraries in de novo drug design, in checking for properties, and in optimizing drug candidates. After briefly considering the computing practices in the drug development process, open challenges are identified. The AI methods considered include the learning methods loosely inspired by the brain structure, such as neural networks (NN) and the derived deep learning, the application of mathematical logic to express hierarchical knowledge, and the integration of symbols and probability to reason on data. How those AI methods work in various CADD and clinical studies is finally presented through the analysis of some recent literature. Warnings about the limitations of AI algorithms are also reported.
... hasPhaseSwitching: This case represents terminating the current phase and switching to another phase that has an emergency situation. It is inferred according to the following rule: This work was conducted using the Protégé resource [14], the Pellet reasoner [15] was used to check the consistency of the ontology, and SPARQL queries were used in the testing stage. ...
Conference Paper
This article proposes a novel ontology design for intelligent controlling of traffic signals, considering the investigated factors, crowded factors, road factors, visibility conditions, and emergency situations. Essentially, the proposed method uses video-based knowledge and key feature from a monocular video camera only, capturing footage from either a traffic signals perspective or the top of the road lane. The key factors and entities in the traffic scene are formed into an ontology, which has been evaluated using synthetic datasets to interpret challenging cases. Semantic features related to the key factors in the scene are obtained and fed to the ontology. The experimental results indicate that the proposed method is capable of controlling traffic signals more efficiently than the fixed intervals protocol.
... Division of the classes by static definition types, production of new knowledge, and consistency checking. The work [36] analyzes different OWL reasoners, whereas the work [37] compares their applications for large ontologies. ...
Article
Full-text available
The advancement of collaborative robotics increases process efficiency. However, humans are still part of the loop in many deployment scenarios; they are unpredictable factors that may be put at risk. This work proposes a safety system for Human-Robot Interaction (HRI), called HOSA, and discusses the decisions made from its modular architectural design phase to a real-scenario implementation. HOSA is an end-to-end system that considers the information provided by sensors available in the environment, the communication network to transport the information, the reasoning over the information, and the interface to present the risks. It applies deep learning algorithms to detect HRI collision risk and the use of Personal Protective Equipment (PPE) based on surveillance camera images. It also incorporates knowledge representation based on an ontology, Software-Defined Wireless Networking (SDWN), and a user interface based on augmented reality. The benefits of the proposed design are evaluated through a use case of an HRI scenario for radio base station maintenance. The architecture scales with the number of devices due to semantic descriptions and an adequately provisioned communication network. It demonstrates the system’s efficiency in detecting risk during HRI tasks and alerting people in the scenario. The conducted experiment shows that the system takes 1.052 seconds to react to a risky situation.
... Inference on the Semantic Web has heavily relied on deductive and probabilistic reasoning. Deductive OWL reasoners work by inferring logical consequences from a set of explicitly asserted facts [536]. Constrained by first-order predicate logic, description logic reasoners (e.g., ELK [365], FaCT++ [537]) are unable to account for uncertainty or incomplete knowledge [538]. ...
Thesis
Full-text available
Traditional computational phenotypes (CPs) identify patient cohorts without consideration of underlying pathophysiological mechanisms. Deeper patient-level characterizations are necessary for personalized medicine and while advanced methods exist, their application in clinical settings remains largely unrealized. This thesis advances deep CPs through several experiments designed to address four requirements. Stability was examined through three experiments. First, a multiphase study was performed and identified resources and remediation plans as barriers preventing data quality (DQ) assessment. Then, through two experiments, the Harmonized DQ Framework was used to characterize DQ checks from six clinical organizations and 12 biomedical ontologies finding Atemporal Plausibility and Completeness and Value Conformance as the most common clinical checks and Value and Relation Conformance as the most common biomedical ontology checks. Scalability was examined through three experiments. First, a novel composite patient similarity algorithm was developed that demonstrated that information from clinical terminology hierarchies improved patient representations when applied to small populations. Then, ablation studies were performed and showed that the combination of data type, sampling window, and clinical domain used to characterize rare disease patients differed by disease. Finally, an algorithm that losslessly transforms complex knowledge graphs (KGs) into representations more suitable for inductive inference was developed and validated through the generation of expert-verified plausible novel drug candidates. Interoperability was examined through two experiments. First, 36 strategies to align five eMERGE CPs to standard clinical terminologies were examined and revealed lower false negative and positive counts in adults than in pediatric patient populations. 
Then, hospital-scale mappings between clinical terminologies and biomedical ontologies were developed and found to be accurate, generalizable, and logically consistent. Multimodality was examined through two experiments. A novel ecosystem for constructing ontologically-grounded KGs under alternative knowledge models using different relation strategies and abstraction strategies was created. The resulting KGs were validated through successfully enriching portions of the preeclampsia molecular signature with no previously known literature associations. These experiments were used to develop a joint learning framework for inferring molecular characterizations of patients from clinical data. The utility of this framework was demonstrated through the accurate inference of EHR-derived rare disease patient genotypes/phenotypes from publicly available molecular data.