
Digital Patient: Personalized and Translational Data Management through the MyHealthAvatar EU Project

Authors: Haridimos Kondylakis, Emmanouil G. Spanakis, Stelios Sfakianakis, Vangelis Sakkalis, Manolis Tsiknakis, Kostas Marias, Xia Zhao, Hong Qing Yu, Feng Dong

Abstract The advancements in healthcare practice have
brought to the fore the need for flexible access to health-related
information and created an ever-growing demand for the
design and the development of data management
infrastructures for translational and personalized medicine. In
this paper, we present the data management solution
implemented for the MyHealthAvatar EU research project, a
project that attempts to create a digital representation of a
patient’s health status. The platform is capable of aggregating
several knowledge sources relevant for the provision of
individualized personal services. To this end, state of the art
technologies are exploited, such as ontologies to model all
available information, semantic integration to enable data and
query translation and a variety of linking services to allow
connecting to external sources. All original information is
stored in a NoSQL database for reasons of efficiency and fault
tolerance. Then it is semantically uplifted through a semantic
warehouse which enables efficient access to it. All different
technologies are combined to create a novel web-based
platform allowing seamless user interaction through APIs that
support personalized, granular and secure access to the
relevant information.
I. INTRODUCTION
A recent report by the eHealth Task Force entitled
“Redesigning health in Europe for 2020” [1] focuses on how
to achieve a vision of affordable, less intrusive and more
personalized care, ultimately, increasing the quality of life as
well as lowering mortality. Such a vision depends on the
application of ICT and the use of data and requires a radical
redesign of health to meet these challenges. A main driver for
change is currently taking place under the term liberate the
data”. The secondary use of care data for research, quality
assurance and patient safety is still rarely supported and the
main barriers to this are the lack of interoperability, common
standards and terminologies [2]. Large amounts of data
currently sit in silos within health and social care systems. If
these data are integrated and used effectively they could
transform the way that care is provided.
The MyHealthAvatar (MHA) EU project [3] is an attempt
at the digital representation of patient health status. The goal
is to create a digital avatar, i.e. a graphical
representation/manifestation of the user, acting as a mediator
between the end-users and health-related data collections. It is
designed as a lifetime companion for individual citizens that
will facilitate the collection, the access and the sustainability
of health status information over the long term. Among
others, the key questions that should be answered in this context
are how to develop optimal frameworks for large-scale data-sharing,
how to exploit and curate data from various
Electronic and Patient Health Records, assembling them into
ontological descriptions relevant to the practice of systems
medicine, and how to manage the problems of large-scale
medical data.
*This work was partially supported by the iManageCancer (H2020-643529) and the MyHealthAvatar (FP7-600929) EU projects.
Haridimos Kondylakis, Emmanouil G. Spanakis, Stelios Sfakianakis, Vangelis Sakkalis and Kostas Marias are with the Institute of Computer Science (ICS), Foundation for Research and Technology - Hellas (FORTH) (e-mail: {kondylak, spanakis, ssfak, sakkalis, kmarias}@ics.forth.gr).
Manolis Tsiknakis is with the Institute of Computer Science (ICS), Foundation for Research and Technology - Hellas (FORTH) and the Technical University of Crete (e-mail: tsiknaki@ics.forth.gr).
Xia Zhao, Hong Qing Yu and Feng Dong are with the Department of Computer Science and Technology, University of Bedfordshire, Luton, UK (e-mail: {xia.zhao, hongqing.yu, feng.dong}@beds.ac.uk).
In this paper, we attempt to provide answers to the
aforementioned questions by presenting a novel data
management infrastructure. This infrastructure is capable of
combining large-scale and multidimensional data that are
semantically enriched and integrated to be further used for a
variety of diverse use-cases. More specifically, our
contributions are the following:
A modular ontology named MHA Semantic Core Ontology
capable of modeling all health-related information.
A scalable NoSQL Data Repository for storing all original
information received from external sources.
A novel Data Translation Module that uses mappings to
semantically uplift and translate the data to be stored in a
central Semantic Warehouse. These data can come either
from the NoSQL data repository or from other external
sources that are linked through mappings.
A variety of Linking Services to external sources to enable
interaction with them. Although these sources do not
support direct linking through mappings, they provide
standard interfaces for exporting data.
A wide range of programmatic interfaces (APIs) allowing
the granular and secure access to all relevant information
in the platform.
The remainder of this paper is structured as follows: In
Section 2 we briefly present the use-cases that the platform
should cover and the different types of data that should be
used. Then in Section 3 we demonstrate the building blocks
of our data management solution. Section 4 reviews other
related projects with similar goals and Section 5 summarizes
and presents an outlook for further work.
II. USE-CASE REQUIREMENTS
In MHA two general categories of scenarios are
investigated: a) system use cases, describing the
functionalities of the MHA system from the perspectives of
both clinicians and citizens/patients and b) clinical use cases
describing how to use the data from the MHA system in real
clinical scenarios. After performing an extended requirement
analysis, four distinct and diverse clinical use cases were
selected for further implementation, demonstration and
evaluation: Diabetes, Nephroblastoma Simulation Model and
Clinical Trial, Personalized Congestive Heart Failure (CHF)
risk analysis, and Osteoarthritis. From the above, we present
important functional requirements of the last two use-cases:
Personalized CHF risk analysis:
Assist individualized self-monitoring of the patient’s own
health status through a “CHF Real-time Patient
Monitoring” and a “CHF Risk Assessment” service.
Provide risk analysis for personal monitoring of the risk
of developing a cardiovascular-related episode in the
future.
Provide comorbidity and drug interaction information
to both the treating physicians and the patients
themselves regarding negative drug interactions.
Create a monitoring tool for personalized CHF risk
assessment using medical sensors together with a
mobile application and MHA’s semantics layer.
Link, through MHA, with external clinical
information systems to acquire specific EHR patient
related data.
Incorporate verified risk assessment models for
CHF.
Create individualized mobile apps for easy access to
the service and MHA platform.
Osteoarthritis:
Visual analytics should be used to display aggregated
lifestyle data, aiming at easy interpretation by both
citizens (patients and healthy) and medical
professionals.
Data collection methods to easily upload health data.
Personal Diary managing patients/citizens’ health
status and behaviors, including diet, movement,
environment, mood, smoking, symptoms etc.
Guided interventions for patients/citizens.
Provide the means to be able to also incorporate
genomic predisposition evaluation for estimating the
risk of developing osteoarthritis.
To support all the aforementioned requirements, an advanced
data management infrastructure is required to enable real-
time analysis of big data and the interconnection of all
heterogeneous available information.
III. CONCEPTUAL AND TECHNICAL ARCHITECTURE
The conceptual architecture of the data management
platform is shown in Figure 1. In the bottom layer, external
sources are pushing data to the original data repository by
using a variety of linking services. In addition, there are
sources that allow access to the available information (such
as the Linked Life Data (http://linkedlifedata.com/) or the
DrugBank (http://www.drugbank.ca/)) directly from the
semantic integration module. The data are semantically
linked and integrated using the aforementioned module and
stored as triples at the Semantic Data Warehouse to be
served. On top of these repositories various APIs allow the
granular and secure access to the available data either directly
from the original data repository or from the semantically
integrated data warehouse. In the following sections we
present in detail each one of the aforementioned components.
Figure 1. The architecture of the Data Management Approach
A. MHA Semantic Core Ontology
The MHA Semantic Core Ontology [4] is used as the
virtual schema of all data stored within MHA. It is able to
semantically describe the different types of data required and
processed by the platform.
Figure 2. The modules of MHA Semantic Core Ontology (ACGT: ACGT
Master Ontology, BFO: Basic Formal Ontology, CHEBI: Chemical Entities
of Biological Interest, CIDOC-CRM: CIDOC Conceptual Reference Model,
CTO: Clinical Trial Ontology, DO: Human Disease Ontology, DTO: Disease
Treatment Ontology, FHHO: Family Health History Ontology, FMA:
Foundational Model of Anatomy, FOAF: Friend of a Friend Ontology,
GALEN: Galen Ontology, GO: Gene Ontology, GRO: Gene Regulation
Ontology, IAO: Information Artifact Ontology, ICD: International
Classification of Diseases, ICO: Informed Consent Ontology, LOINC:
Logical Observation Identifier Names and Codes, MESH: Medical Subject
Headings, NCI-T: NCI Thesaurus, NIFSTD: Neuroscience Information
Framework Standardized Ontology, NNEW: New Weather Ontology, OBI:
Ontology for Biomedical Investigations, OCRE: Ontology for Clinical
Research, OMRSE: Ontology of Medically Related Social Entities, PATO:
Phenotypic Quality Ontology, PLACE: Place Ontology, PRO: Protein
Ontology, RO: Relation Ontology, SBO: Systems Biology Ontology,
SNOMED-CT: SNOMED Clinical Terms, SO: Sequence Ontology, SYMP:
Symptom Ontology, TIME: Time Ontology, UMLS: Unified Medical
Language System)
The development of the MHA Semantic Core Ontology
was based on the following principles: a) Reuse: Exploit
already established high quality ontologies; b) Granularity: A
single ontological resource is not adequate to model the
multi-faceted ecosystem of eHealth so multiple ontologies
should be used; c) Modularity: Create a framework where
different ontologies would be able to integrate many modules
through mappings and equivalences between ontology terms.
In our case, after an initial evaluation, 34 sub-ontologies
were selected and integrated through an extension of the
Translational Medicine Ontology [5] (eTMO). The result is
shown in Figure 2. The integration is achieved by introducing
terms from these sub-ontologies to the eTMO ontology and
via relations of equivalence (using owl:equivalentClass) and
subsumption (rdfs:subClassOf) from eTMO to the various
ontology modules. These relations (~300) were manually
identified and verified using the NCBO BioPortal
(http://bioportal.bioontology.org/).
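The effect of these equivalence and subsumption links can be sketched as a tiny triple store with a transitive reachability check. This is an illustration only, not project code: the term identifiers below are hypothetical placeholders, and a real deployment would use an RDF store with OWL reasoning rather than this toy closure.

```python
# Illustrative sketch: eTMO integration links as (subject, predicate, object)
# triples, with a naive transitive check. Term names are hypothetical.
EQUIV = "owl:equivalentClass"
SUBCLASS = "rdfs:subClassOf"

triples = {
    ("etmo:Patient", EQUIV, "acgt:Patient"),          # equivalence to a sub-ontology term
    ("doid:DiabetesMellitus", SUBCLASS, "etmo:Disease"),
    ("snomed:HeartFailure", SUBCLASS, "etmo:Disease"),
}

def is_subclass_of(term, ancestor, triples):
    """Follow subClassOf and (symmetric) equivalentClass links transitively."""
    seen, frontier = set(), {term}
    while frontier:
        current = frontier.pop()
        if current == ancestor:
            return True
        seen.add(current)
        for s, p, o in triples:
            if s == current and o not in seen:
                frontier.add(o)
            if p == EQUIV and o == current and s not in seen:
                frontier.add(s)  # equivalence works in both directions
    return False

print(is_subclass_of("doid:DiabetesMellitus", "etmo:Disease", triples))  # True
```

With such links in place, a query phrased against an eTMO term can reach data annotated with terms from any of the 34 integrated sub-ontologies.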
B. NoSQL Data Repository
The lifelong patients’ data to be stored is complex, with
hundreds of attributes per patient record that will continually
evolve as new types of calculations and analysis/assessment
results are added to the record over time. Apache Cassandra
(http://cassandra.apache.org/) is used to store these
large-scale original data. Cassandra is an open-source,
peer-to-peer, key-value-based store, where data are stored in
key spaces. Cassandra also has built-in support for the Hadoop
implementation of MapReduce [6], currently considered state of
the art for real-time data analysis, and has advanced
replication functions. As such, Cassandra is
used to store all original data and to provide input to
applications that require big-data real-time analysis. In the
MHA platform the Cassandra repository is an instantiation of
a “data lake” concept (http://martinfowler.com/bliki/DataLake.html)
that stores the raw data supplied by the
different information sources.
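The appeal of this schema-less, wide-row write pattern can be shown with a minimal in-memory stand-in: new attributes and analysis results are attached to a patient's row as they arrive, with no schema migration. This is purely illustrative of the pattern; the platform itself uses Apache Cassandra, and the column names below are invented.

```python
# In-memory sketch of the data-lake write pattern (illustrative only; the
# real store is Cassandra). Rows are keyed by patient id; each "column"
# carries a timestamp, mirroring a wide-row, schema-less layout.
from collections import defaultdict
import time

lake = defaultdict(dict)  # patient id -> column name -> (timestamp, value)

def put(patient_id, column, value, ts=None):
    """Schema-less insert: new attribute types need no schema migration."""
    lake[patient_id][column] = (ts if ts is not None else time.time(), value)

put("patient-42", "heart_rate", 71)
put("patient-42", "chf_risk_score", 0.12)  # analysis result added later
put("patient-42", "diary:mood", "good")    # an entirely new attribute type

print(sorted(lake["patient-42"]))
```

The trade-off, discussed next, is that such a store offers very limited querying (no joins), which motivates the separate semantic warehouse.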
C. Semantic Integration & Data Warehouse
Although Cassandra is an excellent choice for storing and
processing large amounts of data, the restrictions imposed on
querying (e.g. lack of joins) prohibit the interconnection and
the real integration of the data. However, the integrated
information of the whole or parts of the patient profile is
required for the provision of specific health care services. To
achieve the semantic integration of the available data we use
an extension of the exelixis [7] [8] platform, a novel data
integration engine that has two main functionalities: a) It
achieves query answering by accepting SPARQL queries that
are rewritten to the data sources; b) it allows the
transformation of data from original models to RDF/S data
according to the MHA Ontology Suite. The platform allows
the integration of a variety of data sources such as relational
and NoSQL databases, XML, RDF/S and CSV documents,
web services etc. Using the aforementioned platform we
select which of the available data should be semantically
linked and integrated by establishing the appropriate
mappings. Then these data are queried, transformed into
triples and loaded to the Semantic Warehouse where they are
available for further reasoning and querying. A benefit of this
approach is that we can recreate the resulting triples from
scratch at any time. However, for reasons of efficiency,
exelixis periodically transforms only the newly inserted
information by checking the timestamps of the data.
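The timestamp-based incremental step can be sketched as follows. This is a simplified illustration under assumed record and predicate names, not the exelixis implementation: only records newer than the last run's watermark are translated into triples, and the watermark is advanced for the next run.

```python
# Sketch of incremental semantic uplift: translate only records inserted
# since the last run (illustrative; field and predicate names are invented).
records = [
    {"id": "r1", "ts": 100, "patient": "p1", "weight_kg": 80},
    {"id": "r2", "ts": 200, "patient": "p1", "weight_kg": 79},
]

def uplift_new(records, last_run_ts):
    """Transform newly inserted records into (subject, predicate, object) triples."""
    triples, max_ts = [], last_run_ts
    for rec in records:
        if rec["ts"] > last_run_ts:  # skip already-transformed data
            subject = f"mha:{rec['patient']}"
            triples.append((subject, "mha:hasWeightKg", rec["weight_kg"]))
            max_ts = max(max_ts, rec["ts"])
    return triples, max_ts  # max_ts becomes the watermark for the next run

triples, watermark = uplift_new(records, last_run_ts=150)
print(triples, watermark)  # only r2 is uplifted
```

Because the raw data are kept intact in the data lake, a full rebuild of the warehouse is always possible by rerunning the same transformation with a watermark of zero.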
As already described, in order to select the information
that is integrated, the proper mappings are established
between parts of the source schemata and the MHA Semantic
Core Ontology. However, the definition of those mappings is
a time-consuming, labor-intensive and error-prone activity.
To assist humans in this difficult task, we created an
innovative mapping workflow that manages the core
processes needed to create, maintain and manage mapping
relationships between different data sources over the long
term, with a high level of quality control. This novel workflow
is named X3ML [4] and is composed of two main steps,
shown in Figure 3: a) Schema Matching: The domain experts
define a matching between the individual schemata and the
ontology with the help of a graphical tool which is
documented in a schema matching definition file. This file is
human- and machine-readable and is the ultimate means of
communication on the semantic correctness of the
mapping; b) Mapping definition: In this step the actual
mappings are generated based on the input of the previous
step. Only IT experts are involved in this step; domain
experts need no knowledge of it.
Figure 3. X3ML Mapping workflow
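The division of labor in this two-step workflow can be sketched as a declarative matching definition (the domain experts' artifact) executed by a generic engine (the IT step). The field and ontology term names below are hypothetical; the real X3ML artifacts are richer XML documents, not Python dictionaries.

```python
# Sketch of the two-step mapping idea (illustrative; not X3ML itself).
# Step (a): a human- and machine-readable schema-matching definition,
# produced by domain experts with a graphical tool.
matching = {
    "dob":       "etmo:hasBirthDate",
    "diagnosis": "etmo:hasDiagnosis",
}

# Step (b): generic execution of the matching over source records,
# the part handled by IT experts.
def apply_mapping(record, matching, subject_field="patient_id"):
    subject = f"mha:{record[subject_field]}"
    return [(subject, predicate, record[field])
            for field, predicate in matching.items()
            if field in record]

row = {"patient_id": "p7", "dob": "1970-01-01", "diagnosis": "osteoarthritis"}
print(apply_mapping(row, matching))
```

Keeping the matching declarative is what allows it to be reviewed for semantic correctness independently of the code that executes it.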
D. Linking Services to External Sources
Besides sources allowing the direct integration through
mappings, the data management infrastructure supports the
incorporation of parts of the patient’s clinical and social
history that are already stored and managed by third party
systems.
Figure 4. Linking MHA with external systems through well-defined
interfaces
For this reason the proper mechanisms are in place for
retrieving relevant user information from these external data
sources. This “linking” mechanism is based on well-known
and established standard interfaces since they allow the
building of generic ports and interfaces and the reuse of
existing code bases. Figure 4 shows some notable examples
for the realization of these links to external resources:
Clinical data are retrieved from Hospital Information
Systems (HIS) through the Clinical Document Architecture
(CDA, http://www.hl7.org/Special/committees/structure/index.cfm)
guidelines and set of specifications; clinical-trial-specific
patient data are acquired using the Operational Data
Model (ODM) of the Clinical Data Interchange Standards
Consortium (CDISC, http://www.cdisc.org/); whereas
cross-border healthcare provisioning is supported by the
adoption of the epSOS (http://www.epsos.eu/) Patient
Summary (again, based on CDA) interfaces. All these
interfaces enable pushing data that are stored within the
NoSQL data repository.
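A linking service of this kind essentially parses a standard document and pushes the extracted values into the raw repository. The sketch below uses a heavily simplified CDA-like fragment with invented element names; real CDA documents carry full HL7 structures and coded vocabularies.

```python
# Sketch of a linking service: parse a (much simplified) CDA-like XML
# fragment and push its observations into the raw data repository.
# Element and attribute names are illustrative, not real CDA.
import xml.etree.ElementTree as ET

cda_fragment = """
<ClinicalDocument>
  <patient id="p7"/>
  <observation code="8867-4" value="71" unit="bpm"/>
</ClinicalDocument>
"""

def ingest(xml_text, repository):
    """Extract patient observations and append them to the raw store."""
    root = ET.fromstring(xml_text)
    patient_id = root.find("patient").get("id")
    for obs in root.findall("observation"):
        repository.setdefault(patient_id, []).append(
            {"code": obs.get("code"),
             "value": obs.get("value"),
             "unit": obs.get("unit")}
        )
    return patient_id

repo = {}
pid = ingest(cda_fragment, repo)
print(pid, repo[pid])
```

Because each standard (CDA, ODM, epSOS Patient Summary) has a stable structure, one such parser per interface suffices regardless of which external system supplies the documents.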
E. APIs for Third-Party Access
The APIs focus on providing functions to potential
applications for accessing and managing data stored in the
original data repository and the semantic data warehouse.
The APIs are implemented in RESTful style and JSON is
used as the communication format. In addition, OAuth
(http://oauth.net/) provides secure delegated access to the
available resources.
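The access pattern can be sketched as a pure request handler: a bearer token is validated, its granted scopes are checked, and only then is a granular JSON resource released. Paths, scopes and the token table below are simplified placeholders, not the MHA API.

```python
# Sketch of granular, token-protected REST access (illustrative only;
# resource paths, scopes and tokens are invented placeholders).
import json

VALID_TOKENS = {"token-abc": {"user": "p7", "scopes": {"weight:read"}}}

def handle_get(path, auth_header):
    """Return (status_code, json_body) for a GET on a protected resource."""
    scheme, _, token = auth_header.partition(" ")
    grant = VALID_TOKENS.get(token) if scheme == "Bearer" else None
    if grant is None:
        return 401, json.dumps({"error": "invalid_token"})
    if path == f"/avatar/{grant['user']}/weight" and "weight:read" in grant["scopes"]:
        return 200, json.dumps({"patient": grant["user"], "weight_kg": 79})
    return 403, json.dumps({"error": "insufficient_scope"})

print(handle_get("/avatar/p7/weight", "Bearer token-abc"))
```

Scoped tokens of this kind are what make the access "granular": an application delegated only lifestyle scopes, for instance, can never retrieve clinical observations.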
IV. RELATED WORK
Projects with similar goals for collecting, storing and
accessing eHealth data were the eHealthMonitor
(http://ehealthmonitor.eu/) [9] and the INTEGRATE
(http://www.fp7-integrate.eu/) projects, whereas currently
running projects in the area include the p-Medicine
(http://www.p-medicine.eu/) and the EURECA
(http://eurecaproject.eu/) projects. In all those projects, the
need for integrating disparate data sources has led to the
adoption of multiple ontological resources as well. However,
it is the first time that a separation is achieved between the
original data collected from a variety of sources and a
semantic repository supporting the needed data integration
and interconnection.
Besides European projects, there also exists a set of
initiatives and personal health record systems concerned with
the management of patient data. Some of them are Microsoft
HealthVault, PatientsLikeMe, Indivo-X, Tolven, Dossia etc.
(see [10] for a comparison). However, as opposed to the work
presented here, they act as static knowledge spaces
providing only storage and role-based access to the
knowledge resources.
To the best of our knowledge, the proposed data
management solution is the only one allowing the separation
of the original and the semantically enhanced information,
combining ontologies, semantics and NoSQL databases, and
allowing a wide range of methods for pushing and retrieving
information. The added value of our approach is that real-
time analysis can be performed directly on the original data,
whereas semantic queries on integrated data can be
efficiently answered using the Semantic Data Warehouse,
allowing a clear separation of concerns between Cassandra
and the Semantic Repository.
V. DISCUSSION & CONCLUSION
Our architecture adopts a variation of the command-query
responsibility segregation principle
(http://en.wikipedia.org/wiki/Command%E2%80%93query_separation),
where one uses a different model to update information than
the model one uses to read. Although the mainstream approach people use
for interacting with an information system is to treat it as a
create, read, update and delete data-store, as the needs
become more sophisticated state of the art approaches
steadily move away from that model. In our case we rely on
NoSQL technologies to store the original data due to their
ability to handle enormous data sets and the “schema-less”
nature, which makes, to a large extent, the import of new
information to be frictionless. But their limitations in the
flexibility of query mechanisms are a real barrier for any
application that has not predetermined access use cases. The
Semantic Warehouse component in the MHA platform fills
these gaps by effectively providing a semantically enriched
and search optimized index to the unstructured contents of
the Cassandra repository. Therefore, our approach tries to
offer best of both worlds: efficient persistence and
availability of heterogeneous data, and semantic integration
and searching of the “essence” of the ingested information.
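The command-query split described above can be condensed into a few lines: writes append to a raw store, while reads hit a separate index that is refreshed from it. This is purely an illustration of the principle; in the platform, the write side is the Cassandra data lake and the read side is the Semantic Warehouse.

```python
# Minimal sketch of the CQRS-style split (illustrative of the principle only).
raw_store = []      # write model: append-only raw ingestion (cf. data lake)
search_index = {}   # read model: query-optimized view (cf. Semantic Warehouse)

def write(record):
    raw_store.append(record)  # frictionless, schema-less ingestion

def refresh_index():
    """Rebuild the read model from the raw store (semantic uplift would go here)."""
    search_index.clear()
    for rec in raw_store:
        search_index.setdefault(rec["patient"], []).append(rec["kind"])

def query(patient):
    return search_index.get(patient, [])

write({"patient": "p1", "kind": "weight"})
write({"patient": "p1", "kind": "diagnosis"})
refresh_index()
print(query("p1"))
```

The read model can always be rebuilt from the raw store, which is exactly the property the paper relies on when recreating the warehouse triples from scratch.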
A key next step is to evaluate the whole platform in a
real-world context in the four clinical scenarios in three
European countries (United Kingdom, Greece and Germany).
The preliminary evaluation performed provided initial
evidence of the added value and the usability of our approach,
which will be extensively reported in a follow-up paper.
Without a doubt, data management is an important area for
healthcare that will only become more critical as healthcare
delivery continues to grapple with current challenges.
REFERENCES
[1] eHealth Task Force Report, “Redesigning health in Europe for 2020”,
2012.
[2] E.G. Spanakis, P. Lelis, F. Chiarugi, C. Chronaki, “R&D challenges in
developing an ambient intelligence eHealth platform”, EMBEC, pp.
1727-1983, 2006.
[3] E.G. Spanakis, D. Kafetzopoulos, P. Yang, K. Marias, Z. Deng, M.
Tsiknakis, V. Sakkalis, F. Dong, "MyHealthAvatar: Personalized and
empowerment health services through Internet of Things
technologies", Mobihealth, pp. 331-334, 2014.
[4] MyHealthAvatar Consortium: D4.2 Extension of the Semantic Core
Ontology, February 2015.
[5] J.S. Luciano et al., “The Translational Medicine
Ontology and Knowledge Base: Driving Personalized Medicine by
Bridging the Gap between Bench and Bedside”, Journal of
Biomedical Semantics, vol. 2, suppl. 2, S1, 2011.
[6] J. Dean, S. Ghemawat, “MapReduce: simplified data processing on
large clusters”, Communications of the ACM, vol. 51, no. 1, pp. 107-
113, 2008.
[7] H. Kondylakis, D. Plexousakis, “Exelixis: Evolving Ontology-Based
Data Integration System”, ACM SIGMOD, pp. 1283-1286, 2011.
[8] H. Kondylakis, D. Plexousakis, “Ontology Evolution without Tears”,
Journal of Web Semantics, 19, pp. 42-58, 2013.
[9] H. Kondylakis, D. Plexousakis, V. Hrgovcic, R. Woitsch, M. Premm,
M. Schuele, “Agents, Models and Semantic Integration in support of
Personal eHealth Knowledge Spaces”, WISE, pp. 496-511, 2014.
[10] I. Genitsaridi, H. Kondylakis, L. Koumakis, K. Marias, M. Tsiknakis,
“Towards Intelligent Personal Health Record Systems: Review,
Criteria and Extensions”, Procedia Computer Science, vol. 21, pp.
327-334, 2013.
... But the human being is of an immense degree of complexity, so currently practically impossible to be wholly represented. Actually, only partial human subsystems have been digitally modelled [77], such as the cardio twin [78], the carotid arteries twin [79], the possibility of assessing type 2 diabetes using digital twin [80], but surely not the whole human body. ...
Article
Full-text available
Current technologies allow acquiring whatever amount of data (even big data), from whatever system (object, component, mechanism, network, implant, machinery, structure, asset, etc.), during whatever time lapse (secs, hours, weeks, years). Therefore, potentially it is possible to fully characterize any system for any time we need, with the possible consequence of creating a virtual copy, namely the digital twin (DT) of the system. When technology of DT meets an augmented reality scenario, the augmented digital twin (ADT) arises, when DT meets an artificial intelligence environment, the intelligent digital twin (IDT) arises. DTs, ADTs and IDTs are successfully adopted in electronics, mechanics, chemistry, manufacturing, science, sport, and more, but when adopted for the human body it comes out the human digital twin (HDT) or alternatively named virtual human simulator (VHS). When the VHS incorporates information from surroundings (other VHSs and environment), taking a cue from the particle-wave duality (the mix of matter and energy), we can name this super-VHS as the human digi-real duality (HDRD). This work is focused on defining the aforementioned acronyms, on evidencing their differences, advantages and successful case adoptions, but highlighting technology limits too, and on foreseeing new and intriguing possibilities.
... Data sharing and accessibility have evolved into essential components of biological and clinical research. Nonetheless, private information regarding one's medical history, diagnosis, and prescriptions is frequently included in patient data [18]. Several rules and laws have been enacted worldwide to preserve people's privacy. ...
Chapter
Full-text available
Despite several disputes and the overall perceived lack of privacy on social media platforms among the general population, they are now widely used around the globe and have become commonplace. Vast volumes of data in various formats are being posted on these platforms. Over the past decade, the widespread adoption and proliferation of online medical forums and social platforms illustrates the variety of information being shared. Usually, when dealing with sensitive medical information, to ensure that all ethical and legal criteria are met, processing and maintaining calls for a high quality of security and privacy safeguards. We were curious if medical information on such socially available platforms goes through the same quality of privacy safeguards. Our literature study identified a significant lapse in the privacy attitude of users who post medical information online. In this paper, we report on the responses from our survey about how medical information and its privacy on social media is perceived. We look at the drivers and impediments to personal health information being shared online, focusing on privacy-related issues. We then use our findings to aid in developing a workflow for a tool that provides access to a general social media user with state-of-the-art anonymizing and privacy-protecting techniques.
... Till today, a prototype application, for personal health supporting systems (Spanakis et al., 2016;Kondylakis et al., 2015), was used to conduct an evaluation survey and preliminary results from 18 volunteers (including 9 doctors) in respect to the acceptability of a biometric platform as the one proposed from SpeechXRays (Spanakis et al., 2016). These evaluation results shown that the platform is a functional, efficient and user-friendly environment (for both medical specialists and patients) since it can respond to all tasks utilizing all necessary resources. ...
Chapter
Humans have various features that differentiates one person from another which can be used to identify an individual for security purposes. These biometrics can authenticate or verify a person's identity and can be sorted in two classes, physiological and behavioural. In this article, the authors present their results of experimentation on publicly available facial images and the efficiency of a prototype version of SpeechXRays, a multi-modal biometric system that uses audio-visual characteristics for user authentication in eHealth platforms. Using the privacy and security mechanism provided, based on audio and video biometrics, medical personnel are able to be verified and subsequently identified for two different eHealth applications. These verified persons are then able to access control, identification, workforce management or patient record storage. In this work, the authors argue how a biometric identification system can greatly benefit healthcare, due to the increased accuracy of identification procedures.
... Obviously, the amount of information available, the heterogeneity of the information, and the wide range of proposed ontologies dictate the identification of a solution able to handle all this information, especially in the context of the present Elder Care platform with its multiple devices, sensors, and self-assessment and data collection tools. As such, and based on experiences from projects like MyHealthAvatar [26], a modular ontology is used in the Elder Care platform as a global scheme to integrate all internal and external data, enabling a homogeneous view of all available and heterogeneous data. This enables uninterrupted access to all relevant information by the modules on the top as if all data were completely normalized, cleaned, and transformed to a single relational database [27]. ...
Article
Full-text available
Informal care is considered to be important for the wellbeing and resilience of the elderly. However, solutions for the effective collaboration of healthcare professionals, patients, and informal caregivers are not yet widely available. The purpose of this paper is to present the development of a digital platform that uses innovative tools and artificial intelligence technologies to support care coordination and shared care planning for elder care, with a particular focus on frailty. The challenges of shared care planning in the coordination of frailty care are demonstrated, followed by presentation of the design and technical architecture of an integrated platform. The platform incorporates all elements essential for the support of daily activities, coordinated care, and timely interventions in case of emergency and need. This paper describes the challenges involved in implementing the platform and concludes by reporting the necessary steps required in order to establish effective smart care for the elderly.
... This avatar plays a role similar to a personal digital health-related collection bag, carried by individual citizens throughout their lifetime and capable of sustaining all collected information in a meaningful manner. This information is related to multi-level personal health data that is collected from heterogeneous data sources such as clinical data, genetic data, and medical sensor data (Kondylakis et al., 2015). (Barricelli et al., 2020) propose an extension to SmartFit, which is a computational framework utilizing wearable sensors and Internet applications. ...
Thesis
Hospitals are demanding workplaces that have real-time services and require extensive human interaction at the resource level (doctors, nurses, etc.), the location level (exam rooms, operating rooms, etc.), the process level (pathways), and the user level (patients). Designing and managing such a health care facility can be inherently challenging due to the critical nature of the services and their wide variety and variability, as well as the difficult adjustment to a partially unforeseeable demand of care. To increase the efficiency and the quality of care while curbing healthcare costs, a hospital requires decision-making support tools. These can be based on organizational engineering methods and tools for monitoring the current state of the organization in real time, for predicting its behavior in the near-future, and for improving its processes. Among the tools, discrete event simulation (DES) is able to model the operational process behavior and to assess performance by simulating a chain of events that occur over time. However, despite important features provided by DES software tools, they are often limited to building “offline” simulation models that are not connected to the real world in real time. These simulation models may not be suitable for retrieving the current state of the organization, and they cannot be considered a "Digital Twin". Furthermore, these simulations start with an “empty” and “idle” state, which can be different from the real-world state and imply a bias in the statistics reports at the end of the simulation run. This research work deals with a DES-based Digital Twin (DT) approach. It is based on DES models which are used (1) for real-time and online monitoring of patient pathways, and (2) for near-future offline prediction when facing unexpected behavior or unpredictable situations. 
The major goal of this research is to provide a framework for building a Digital Twin of patient pathways that health care practitioners and decision makers can use as a decision support tool. Some specific issues are also addressed: initialization of the DES models, real-time synchronization with the real world, and the connection between monitoring and prediction models. As a proof of concept, experiments are carried out using an emulator of a hospital service that is connected to a Digital Twin that follows our approach.
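The initialization issue raised above (DES runs normally starting from an empty, idle state) can be illustrated with a minimal sketch. The event loop below is seeded from a snapshot of patients already mid-pathway, rather than an empty queue; the pathway steps and service times are purely illustrative assumptions, not the thesis's actual models.

```python
import heapq

def simulate(events, horizon):
    """Minimal discrete-event loop over (time, patient, step) events.
    Seeding the queue with a snapshot of the current ward state (instead
    of starting 'empty' and 'idle') is the warm-start idea described above."""
    queue = list(events)
    heapq.heapify(queue)
    log = []
    while queue:
        t, patient, step = heapq.heappop(queue)
        if t > horizon:
            break
        log.append((t, patient, step))
        # Hypothetical pathway with fixed service times (minutes).
        if step == "triage":
            heapq.heappush(queue, (t + 15, patient, "exam"))
        elif step == "exam":
            heapq.heappush(queue, (t + 30, patient, "discharge"))
    return log

# Warm start: two patients already mid-pathway at t = 0.
snapshot = [(0, "P1", "exam"), (5, "P2", "triage")]
trace = simulate(snapshot, horizon=60)
```

Starting from `snapshot` rather than `[]` is what lets the simulated statistics reflect the organization's actual current state from the first simulated minute.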
... Relevant ontologies, such as the MHA semantic core ontology (28), and other approaches can be used for semantically uplifting available data through mapping and/or annotating using ontology terms. This allows the rapid re-use of the available data, enabling a common understanding and offering a rich set of terms for documenting and adding metadata to the data provided by the platform. ...
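The semantic uplift described above can be sketched as a mapping from flat record fields to ontology-term triples. The namespace and property names below are placeholders for illustration only; they are not the actual MHA vocabulary.

```python
# Placeholder namespace and terms; NOT the real MHA semantic core ontology.
MHA = "http://example.org/mha#"

FIELD_TO_TERM = {               # source field -> ontology property
    "hr":  MHA + "heartRate",
    "dob": MHA + "dateOfBirth",
}

def uplift(subject_uri, record):
    """Translate each mapped field of a flat record into an RDF-style
    (subject, predicate, object) triple; unmapped fields are skipped."""
    return [(subject_uri, FIELD_TO_TERM[k], v)
            for k, v in record.items() if k in FIELD_TO_TERM]

triples = uplift("http://example.org/patient/42",
                 {"hr": 72, "dob": "1980-01-01", "note": "..."})
```

Once data carries ontology terms like these, heterogeneous sources can be queried through a common vocabulary, which is the point of the uplift step.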
Article
Full-text available
The lives of millions of people have been affected during the coronavirus pandemic that spread throughout the world in 2020. Society is changing, establishing new norms for healthcare education, social life, and business. Digital health has seen an accelerated implementation throughout the world in response to the pandemic challenges. In this perspective paper, the authors highlight the features that digital platforms should have in order to support integrated care during a pandemic. The features of the digital platform Safe in COVID-19 are used as an example. Integrated care can only be supported when healthcare data is available and can be shared and reused. Healthcare data is essential to support effective prevention, prediction, and disease management. Data available in personal health apps can be shared and reused when apps follow interoperability guidelines for semantics and data management. The authors also highlight that not only technical but also political and social barriers need to be addressed in order to achieve integrated care in practice.
Article
Objective We summarized a decade of new research focusing on semantic data integration (SDI) since 2009, and we aimed to: (1) summarize the state-of-the-art approaches on integrating health data and information; and (2) identify the main gaps and challenges of integrating health data and information from multiple levels and domains. Materials and Methods We used PubMed, as our focus is applications of SDI in biomedical domains, and followed the Preferred Reporting Items for Systematic Review and Meta-Analyses (PRISMA) to search for and report relevant studies published between January 1, 2009 and December 31, 2021. We used Covidence—a systematic review management system—to carry out this scoping review. Results The initial search from PubMed resulted in 5,326 articles using the two sets of keywords. We then removed 44 duplicates, and 5,282 articles were retained for abstract screening. After abstract screening, we included 246 articles for full-text screening, among which 87 articles were deemed eligible for full-text extraction. We summarized the 87 articles from four aspects: (1) methods for the global schema; (2) data integration strategies (i.e., federated system vs. data warehousing); (3) the sources of the data; and (4) downstream applications. Conclusion The SDI approach can effectively resolve the semantic heterogeneities across different data sources. We identified two key gaps and challenges in existing SDI studies: (1) many of the existing SDI studies used data from only single-level data sources (e.g., integrating individual-level patient records from different hospital systems), and (2) documentation of the data integration processes is sparse, threatening the reproducibility of SDI studies.
Book
Full-text available
This edited collection attempted to explore the new post-pandemic health realities with a focus on various policy and response strategies spearheaded by countries across the world to make public healthcare services more accessible to citizens in terms of cost, security and data privacy issues. The diverse studies published under the research topic collection looked into the new structural and institutional shifts taking place to make healthcare services safe and convenient with the help of empowered frontline care leveraging technologies. To make healthcare services more inclusive and accessible, healthcare organizations and institutions, both public and private, need to orchestrate the myriad interconnected changes required to design, implement and sustain digitally-enabled healthcare delivery platforms. The edited volume addressed various policy and response strategies adopted to deal with restricted physical access to socio-economic infrastructure, facilities and services amid the pandemic, with a focus on cutting-edge health technologies at its core. The topic collection strongly advocates that health technologies and innovations are going to be one of the significant sectors for investment and innovation over the next 30 years. This will not only transform the global health care sector in terms of diagnosis, disease management, treatment and prevention but also help to better prepare for future emergencies.
Conference Paper
Full-text available
The industrial paradigm of a Digital Twin (DT), a virtual representation of a physical object, promises an impactful opportunity to advance digital healthcare. Especially in telemedicine, the application of DTs could yield various benefits for patients, providers, and payers. However, the development of DTs for healthcare, and specifically telemedicine, is not yet mature. Therefore, this research-in-progress paper attempts to structure the research field and classify DTs for digital health and, in future, for telemedicine. Based on a systematic literature review (SLR) and grounded theory analysis techniques, we derive 12 dimensions and 35 characteristics that support researchers and practitioners in developing, applying, refining and evaluating various DT applications. The taxonomy serves as a promising starting point for further research on implementing or adopting DTs in healthcare and telemedicine. An application to a real-world object already demonstrates the feasibility of our taxonomy.
Conference Paper
Full-text available
During the burst of the coronavirus pandemic, in early-to-mid 2020, public health authorities worldwide considered prompt identification, isolation and contact tracing the most appropriate strategy for infection containment. This work presents an outbreak response tool, designed for public health authorities to effectively track suspect, probable and confirmed incidence cases in a pandemic by means of a mobile app used by citizens to provide immediate feedback. It is developed based on an already existing personal health record app, which has been extended to properly accommodate specific needs that emerged during the crisis. The tool aims to better support human tracers and should not be confused with proximity-tracking apps. It respects safety and security regulations, while at the same time it conforms to international standards and widely accepted medical protocols. Issues relevant to privacy concerns, and interoperability with available patient registries and data analytics tools, are also examined to better support public healthcare delivery and contain the spread of the infection.
Conference Paper
Full-text available
The interconnection of heterogeneous data sources could provide a comprehensive picture of health parameters, thereby triggering an intervention by the medical staff upon detection of conditions that may lead to health deterioration, thus realizing preventive care. Internet of Things technologies can be used to allow health-related information to be locally aggregated and transmitted for remote monitoring and response. We present MyHealthAvatar (MHA), a personal digital health-related collection bag, carried by individual citizens throughout their lifetime and able to sustain all collected information in a meaningful manner. MHA acts as a unique companion, continually following and empowering citizens and patients through a number of health-related services. We describe the efforts on creating MHA patient-centered healthcare services for accessing, collecting and sharing long-term multilevel personal health data through an integrated environment including: clinical data, genetic data, medical sensor data and devices, human behavior data and activity data for clinical data analysis, prediction and prevention for the individual citizen.
Article
Full-text available
The advancements in healthcare practice have brought to the fore the need for flexible access to health-related information and created an ever-growing demand for the design, development and management of personalized knowledge spaces. In this paper, we present a web-based platform that generates a Personal eHealth Knowledge Space as an aggregation of several knowledge sources relevant for the provision of individualized personal services. To this end, novel technologies are exploited, such as knowledge on demand to lower the information overload for the end-users, agent-based communication and reasoning to support cooperation and decision making, and semantic integration to provide uniform access to heterogeneous information. All three different technologies are combined to create a novel web-based platform allowing seamless user interaction through a portal that supports personalized, granular and secure access to relevant information.
Article
Full-text available
Personal health record (PHR) systems are a constantly evolving area in the field of health information technology, which motivates ongoing research towards their evaluation in several different aspects. In this direction, we present an evaluation study on PHR systems that provides an insight into their current status with regard to functional and technical capabilities, and we present our extensions to a specific PHR system. Essentially, we provide a requirement analysis that formulates our composite evaluation model, which we use to perform a systems review on numerous available solutions. Then, we present our development efforts towards an intelligent PHR system.
Article
Full-text available
The evolution of ontologies is an undisputed necessity in ontology-based data integration. Yet, few research efforts have focused on addressing the need to reflect the evolution of ontologies used as global schemata onto the underlying data integration systems. In most of these approaches, when ontologies change their relations with the data sources, i.e., the mappings, are recreated manually, a process which is known to be error-prone and time-consuming. In this paper, we provide a solution that allows query answering in data integration systems under evolving ontologies without mapping redefinition. This is achieved by rewriting queries among ontology versions and then forwarding them to the underlying data integration systems to be answered. To this purpose, initially, we automatically detect and describe the changes among ontology versions using a high level language of changes. Those changes are interpreted as sound global-as-view (GAV) mappings, and they are used in order to produce equivalent rewritings among ontology versions. Whenever equivalent rewritings cannot be produced we a) guide query redefinition or b) provide the best “over-approximations”, i.e., the minimally-containing and minimally-generalized rewritings. We prove that our approach imposes only a small overhead over traditional query rewriting algorithms and it is modular and scalable. Finally, we show that it can greatly reduce human effort spent since continuous mapping redefinition is no longer necessary.
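The unfolding step described above (interpreting detected changes as sound GAV mappings and substituting them into the query) can be sketched minimally. The version prefixes and mapping table below are hypothetical examples, not the paper's actual change language.

```python
def unfold(query_terms, gav_mappings):
    """Rewrite a query posed over ontology version v2 into v1 terms by
    substituting each v2 term with its GAV definition. Terms with no
    mapping are assumed unchanged across versions; when no equivalent
    rewriting exists, the paper falls back to over-approximations."""
    return [gav_mappings.get(t, t) for t in query_terms]

# Hypothetical change log: v2 renamed Person -> Agent, locality -> address.
mappings = {"v2:Agent": "v1:Person", "v2:locality": "v1:address"}
rewritten = unfold(["v2:Agent", "v2:locality", "v2:name"], mappings)
# rewritten == ["v1:Person", "v1:address", "v2:name"]
```

The point of the approach is that only this mapping table must track ontology evolution; the hand-written source mappings never need to be redefined.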
Conference Paper
Full-text available
The evolution of ontologies is an undisputed necessity in ontology-based data integration. Yet, few research efforts have focused on addressing the need to reflect ontology evolution onto the underlying data integration systems. We present Exelixis, a web platform that enables query answering over evolving ontologies without mapping redefinition. This is achieved by rewriting queries among ontology versions. First, changes between ontologies are automatically detected and described using a high level language of changes. Those changes are interpreted as sound global-as-view (GAV) mappings. Then query expansion is applied in order to consider constraints from the ontology and unfolding to apply the GAV mappings. Whenever equivalent rewritings cannot be produced we a) guide query redefinition and/or b) provide the best "over-approximations", i.e. the minimally-containing and minimally-generalized rewritings. For the demonstration we will use four versions of the CIDOC-CRM ontology and real user queries to show the functionality of the system. Then we will allow conference participants to directly interact with the system to test its capabilities.
Article
Full-text available
Translational medicine requires the integration of knowledge using heterogeneous data from health care to the life sciences. Here, we describe a collaborative effort to produce a prototype Translational Medicine Knowledge Base (TMKB) capable of answering questions relating to clinical practice and pharmaceutical drug discovery. We developed the Translational Medicine Ontology (TMO) as a unifying ontology to integrate chemical, genomic and proteomic data with disease, treatment, and electronic health records. We demonstrate the use of Semantic Web technologies in the integration of patient and biomedical data, and reveal how such a knowledge base can aid physicians in providing tailored patient care and facilitate the recruitment of patients into active clinical trials. Thus, patients, physicians and researchers may explore the knowledge base to better understand therapeutic options, efficacy, and mechanisms of action. This work takes an important step in using Semantic Web technologies to facilitate integration of relevant, distributed, external sources and progress towards a computational platform to support personalized medicine. TMO can be downloaded from http://code.google.com/p/translationalmedicineontology and TMKB can be accessed at http://tm.semanticscience.org/sparql.
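A query of the kind described above (matching patients to active clinical trials) could be posed to the TMKB SPARQL endpoint roughly as follows. The `tmo:` property names are illustrative assumptions, not the actual TMO vocabulary; the request is only built, not sent.

```python
from urllib.parse import urlencode

# Endpoint listed in the abstract above.
ENDPOINT = "http://tm.semanticscience.org/sparql"

# Illustrative query: property names are placeholders, not real TMO terms.
QUERY = """
PREFIX tmo: <http://example.org/tmo#>
SELECT ?patient ?trial WHERE {
  ?patient tmo:hasDiagnosis ?disease .
  ?trial   tmo:recruitsFor  ?disease .
}
"""

def build_request(endpoint, query):
    """Form the GET URL for a SPARQL-protocol query request (not executed)."""
    return endpoint + "?" + urlencode({"query": query, "format": "json"})

url = build_request(ENDPOINT, QUERY)
```

Joining patient records and trial descriptions on a shared disease URI is exactly the kind of cross-source question the integrated knowledge base is built to answer.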
Conference Paper
Several R&D issues need to be resolved in tomorrow's eHealth care platforms to support personal health management services as well as mobility of patients and health professionals, by seamlessly integrating these services with smart surroundings that use self-configuring devices, intelligent agent technology and tools for ambient awareness and decision support. Such services, when interoperating with a life-long electronic health record of a citizen, can become the cornerstone for supporting continuity of care and cost effective health care for all. This paper discusses the architectural considerations and the R&D issues addressed in the design, development and implementation of the Ambient Intelligent platform for Cardiology (AmICa), a modular and flexible ambient intelligent eHealth platform for remote multiparametric monitoring of patients, able to be adapted to different care provision modes and daily life situations.
Conference Paper
MapReduce is a programming model and an associated implementation for processing and generating large datasets that is amenable to a broad variety of real-world tasks. Users specify the computation in terms of a map and a reduce function, and the underlying runtime system automatically parallelizes the computation across large-scale clusters of machines, handles machine failures, and schedules inter-machine communication to make efficient use of the network and disks. Programmers find the system easy to use: more than ten thousand distinct MapReduce programs have been implemented internally at Google over the past four years, and an average of one hundred thousand MapReduce jobs are executed on Google's clusters every day, processing a total of more than twenty petabytes of data per day.
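The map/reduce contract described above can be shown with the canonical word-count example. This is a sequential, single-machine stand-in for the runtime; the real system parallelizes the map phase, shuffles by key across machines, and handles failures.

```python
from collections import defaultdict
from itertools import chain

def map_fn(doc):
    """Map: emit an intermediate (word, 1) pair for every word."""
    return [(word, 1) for word in doc.split()]

def reduce_fn(word, counts):
    """Reduce: sum all partial counts emitted for one word."""
    return (word, sum(counts))

def mapreduce(docs, map_fn, reduce_fn):
    """Sequential stand-in for the runtime: run the mapper over every
    document, shuffle (group intermediate pairs by key), then reduce
    each group independently."""
    groups = defaultdict(list)
    for key, value in chain.from_iterable(map(map_fn, docs)):
        groups[key].append(value)
    return dict(reduce_fn(k, v) for k, v in groups.items())

result = mapreduce(["to be or not to be"], map_fn, reduce_fn)
# result == {"to": 2, "be": 2, "or": 1, "not": 1}
```

Because each reduce group depends only on its own key, the reduce phase parallelizes trivially, which is what makes the model amenable to large clusters.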