Cloud Semantic-based Dynamic Multimodal Platform
for Building mHealth Context-aware Services
Adel ALTI
LRSD, University of Setif-1,
19000, Setif – Algeria
altiadel2002@yahoo.fr
Sébastien LABORIE
LIUPPA/IUT Bayonne,
64600, Anglet–France
Sebastien.Laborie@iutbayonne.univ-pau.fr
Philippe ROOSE
LIUPPA, University of Pau,
64600, Anglet–France
Philippe.Roose@iutbayonne.univ-pau.fr
Abstract—Currently, everybody wishes to access applications from a wide variety of devices (PC, tablet, smartphone, set-top box, etc.) in situations involving various interactions and modalities (mouse, touch screen, voice, gesture detection, etc.). At home, users interact with many devices and access many multimedia-oriented documents (hosted on local drives, on cloud storage, via online streaming, etc.) in various situations, sometimes with several devices at the same time. The diversity and heterogeneity of user profiles and service sources can be a barrier to discovering the available service sources, which may come from anywhere in the home or the city. The objective of this paper is to propose a meta-level architecture that raises the level of abstraction of context concepts for heterogeneous profiles and service sources via a top-level ontology. We particularly focus on context-aware mHealth applications and propose an ontology-based architecture, OntoSmart (a top-ONTOlogy SMART), which provides adapted services that help users broadcast multimedia documents and use them with interactive services, in order to help maintain elderly people at home and satisfy their preferences. To validate our proposal, we combine Semantic Web technologies, the Cloud and middleware by specifying and matching OWL profiles and experimenting with their usage on several platforms.
Keywords—Cloud; Profile; Modality; Health; Context-aware; Location-Based Smart Service.
I. INTRODUCTION
Nowadays, everyone can access a variety of services at any time, anywhere (from the home or the city) and anyhow (using a laptop, smartphone, set-top box, etc.). These mobile devices include a variety of sensors and are becoming smarter. Meanwhile, hosted services are becoming richer in contextual information and more complex, especially in the field of smart-* environments (Home, City, etc.). The diversity of context information is generated by platform heterogeneity, user preferences and context variations. For instance, at home, the variety of situations and contexts may rely on the set of home sensors and the media contents of home services, manipulated from heterogeneous mobile terminals (smartphones, tablets, etc.) and/or remotely on the cloud.
In order to efficiently manage the explosion of user profiles and service sources in smart-* environments (Home, Health, etc.) and the relations among them, in this article we define a top-level ontology called OntoSmart (a top-ONTOlogy SMART). Our goal is to describe the heterogeneous service sources and user profiles of smart-* environments (Home, City, etc.) at a semantics-independent level, and to present a new cloud management tool that provides great flexibility and enables automatic semantic adaptation and customization of mobile client services. All resources (smart and interactive services, user context information and multimedia documents) are managed and classified in a dynamic way. We also discuss the usefulness and importance of ontologies to clearly specify the semantics of smart health.
Our proposed cloud ontology captures a shared conceptual schema common to smart-* environments and maintains semantic information about smart and interactive services in heterogeneous service sources for the service model. It includes reasoning properties at the higher level of the system, providing a set of generic and common concepts of smart-* domains through the identification of general concepts and their relations. This copes with the context of ubiquitous environments, where a top-level ontology is presented as "a convenient way for isolating concerns of a system" [4].
Our approach is based on a three-layer architecture consisting of a top level dedicated to managing heterogeneous user profiles and context constraints (media and modality preferences) across multiple smart domains. The middle layer consists of the smart-domain-specific ontologies (City, Home, Vehicle, etc.), including digital health concerns. The basic layer consists of Kalimucho [1][4], a middleware platform for the local or cloud (re-)deployment of services, service discovery and the adaptation service components. The separation of smart domains reduces the scale of context knowledge, and the separation of adaptation from contexts makes these concerns reusable. Thus, the user can access services from anywhere in the home or the city. For instance, when the doctor leaves his office for the city, our platform dynamically redeploys services using a high-level set of rules over the upper ontology, and a new set of context-aware services in the city can now be accessed. Thanks to ontologies, heterogeneous profiles and situations can be exploited to create useful knowledge for context-aware service reasoning.
In the next section, we summarize related work. In Section 3, we detail the top-level ontology OntoSmart and high-level inference rules for context-aware mHealth applications. We give the general architecture of the platform in Section 4 and evaluation results in Section 5. Finally, Section 6 concludes the article.
e-Health Pervasive Wireless Applications and Services (eHPWAS'15)
978-1-4673-7701-0/15/$31.00 ©2015 IEEE 357
II. OBJECTIVES AND RELATED WORKS
The first related area of research covers works on the adaptation of component-based applications to the evolving needs of users and the execution context by exploiting a descriptive view of context information (e.g., CC/PP, SGP, CSCP). CC/PP (Composite Capability/Preference Profiles) [2] is a W3C recommendation for specifying device capabilities and user preferences. This profile language is based on RDF and was maintained by the W3C Ubiquitous Web Applications working group. It is descriptive, since it lists sets of values corresponding to the screen size, the browser version, the memory capacity, etc. However, the CC/PP structure lacks functionality; for instance, it limits the description of complex structures by forcing a strict two-level hierarchy. Furthermore, it does not consider the description of relationships and constraints between pieces of context information. CSCP (Comprehensive Structured Context Profiles) [3] uses RDF and is also based on CC/PP. In contrast to CC/PP, CSCP has a multi-level structure and models alternative values according to predefined situations. Even if CSCP provides a description of the context that is not limited to two hierarchical levels, this proposal does not allow the specification of complex user constraints (e.g., avoiding playing videos while the mobile phone battery level is lower than 15%). Moreover, the authors stated that this proposal was developed as a proprietary model for specific domains. SGP (Semantic Generic Profile) [5] organizes profile information into three kinds of facets: the device characteristics, the user context information and the document composition. SGP links profile information with the specification of high-level explicit constraints. These constraints make it possible to model different types of actions under rich conditions. Complex explicit constraints can be specified, and context information about the device and the user is provided by services, thus guiding the adaptation process towards an adapted document that complies with rich user constraints. Hence, profiles may migrate between platforms without modifying many values. However, SGP uses manual constraints (if condition then action). Furthermore, SGP is specified in RDF/XML, which lacks expressiveness for semantic relationships. Mohamed Bahaj et al. [6][7] applied a semantic approach to multimedia documents and their adaptation through an ontology that allows users to control how to receive their multimedia documents under given conditions. They extended UPOS (User-Profile Ontology with Situation-Dependent Preferences Support) into UPOMA for multimedia document adaptation. They defined a sub-profile as a subset of the profile; in their case, such a sub-profile contains a set of multimedia document preferences. This work did not address a cloud model for document adaptation giving full control over how documents should be received on different devices (laptop, tablet, smartphone, etc.) in order to be played, nor multi-part profiles.
The second related area of research covers smart health projects proposed for monitoring elderly people at home [8][10][11]. However, these works have only focused on the elderly person's welfare. Roberto et al. [8] proposed a framework for assisting elderly people in heterogeneous and dynamic contexts related to the user's location. The framework describes the parameters for customizing service information and targets the use of ontologies and semantic models to share knowledge among mobile devices. This work provides mechanisms allowing end-users to play a central role in the selection of services for their contexts. However, it does not consider the use of automatic knowledge-sharing techniques or explicit multimodal user preferences in the semantic representation and composition selection process. Primal Pappachan et al. [10] developed a framework called Rafiki for guiding the health community in order to provide an adapted diagnostic process that complies with user requirements (health professionals, patients, elderly and dependent persons). The framework provides facilities for collaboration among patients and health-care providers and targets the use of patient contexts (e.g., age, gender, location, profession) to identify possible diseases. This work exploits correlations among context parameters relating diseases, symptoms and patients. Internet-based and P2P-based approaches are the two interaction mechanisms used in community health care. The first approach disseminates knowledge between health-care providers and Community Health Workers (CHW). However, in remote areas, Internet access is not guaranteed and can be unreliable. Therefore, Rafiki leverages P2P ad hoc networks using other wireless communication mechanisms (e.g., Bluetooth and WiFi) to update its knowledge base and share interesting facts. The authors presented an interesting approach for describing services with semantic data; however, neither various multimodal services nor heterogeneous profiles are discussed. Lemlouma et al. [12] proposed a framework for the automatic dependency evaluation of elderly people and the management of the heterogeneity of elderly profiles and service sources that can come from anywhere, from the home or the city. The proposed home profile service is extended to take a context service into account. This architecture is context-aware but lacks a semantics-based multimodal evaluation for achieving an efficient and flexible framework. To improve the performance trade-offs of smart mobile healthcare applications in intelligent environments, Joyce Chai et al. [11] use mobile cloud computing technologies to access remotely running health applications, especially those requiring contextual intelligence. The idea is that the cloud environment hosts a Service Adaptation Module that optimizes the runtime deployment of the required components and acts as an entry point for the adapter on the mobile client. The authors presented an interesting approach for annotating service descriptions with QoS data; however, neither various multimodal interactions nor cloud profiles are exploited.
The methods and contributions of this paper differ from the above related works. Firstly, the paper exploits cloud resources (smart and interactive services, hardware resources and multimedia documents) to maximize user QoS under user constraints in mobile health applications. Secondly, the paper proposes an intelligent service management system based on an upper-level ontology model, as a way to infer users' preferences according to different sensitive situations. This approach provides its members with remote access (e.g., on the cloud) to external interactive services (e.g., modality changes) and sensors according to the user context (e.g., preference, activity and resource changes).
Heterogeneous context profiles, as well as their various service sources (home, city, bus, etc.), can be specified more easily using a semantic representation, in order to better support context-awareness. The main objectives are: (1) to develop a top-level ontology model in order to describe heterogeneous user profiles; (2) to identify critical situations, inferring user constraints and determining the necessary adaptation so that the user can fully exploit digital health documents; (3) to maximize the sharing and reuse of resources in different contexts; (4) to group specific profiles into generic, optimized profiles by activity, preferences, location and time.
Our framework defines a semantic profile matcher, which compares the user profile constraints (implicit or explicit) with the constraints of other profiles. It produces an adaptation guide that contains adaptation directives for resolving the detected matches. Our approach motivates mobile users to become actively involved in groups that already have shared experiences. We prefer a hybrid approach that gives more flexibility to freely select the adaptation location. Consequently, we choose to execute services locally at home (where data can be easily updated) and/or remotely on the cloud, depending on the capabilities of the client and the application context.
III. ONTOSMART: A TOP-ONTOLOGY SMART
The top-level ontology OntoSmart conceptualizes knowledge that addresses several smart-* domains within the discipline of context-aware service modeling. The motivation for creating the ontology is the ability to reason, to re-use existing profile models and to share previous user experiences and preferences. OntoSmart defines common concepts, as well as relationships among those concepts, to represent context-aware services that support all activities related to a user and his current context. It can be used to publish metadata about services together with their context constraints in the UDDI registry. An overview of the top-level ontology structure, showing the most important concepts, is given in Fig. 1. The most important concepts in the ontology are the Context class and its specializations: user profile, sub-context profile and user preferences. We divided the context sub-ontology into seven main sub-contexts:
• The RessourceContext describes the current usage of processing power, memory, etc., which is a prerequisite to guarantee a minimum quality of service.
• The HardContext covers mobile devices such as tablets or smartphones, which are constrained in their resources (memory size, CPU speed, battery energy, etc.) and act as the execution environment for smart and interactive services.
• The UserContext contains personal, location and multimedia information. The user profile describes the user's personal information such as name, role, ID, phone, address, email and multimedia_content. The user preferences include preferred languages and preferred modalities (voice, gesture, pen click, mouse click, etc.). It is very useful for a user to make his constraints between pieces of contextual information explicit. For instance, a user can specify the following constraint in his profile: "If it is lunch time, remind me to take some medicines using audio contents". We also find a description of the user's health situation, as the user can be healthy or handicapped.
• The PlaceContext describes information related to the user's location {longitude, latitude and altitude} and the available shared resources.
• The ActivityContext: according to a schedule, a user can engage in a scheduled activity.
• The DocumentContext describes the nature of the documents (text, video, audio). The document context specifies a set of properties related to a specific media type: Text, Image, Video or Sound.
• The BiomedicalContext gathers all contextual information related to smart health. The BiomedicalContext is divided into two sub-contexts: BioContext and MedicalContext.
Fig. 1. An overview of the structure of the common Smart Ontology.
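As a rough illustration, the sub-contexts listed above could be encoded as a small class hierarchy. The sketch below is our own Python rendering for illustration only; the class names follow the ontology, but the fields and defaults are assumptions, not part of OntoSmart itself:

```python
# Minimal sketch of a few OntoSmart sub-contexts as plain Python classes.
# Class names mirror the sub-contexts above; fields are illustrative.
from dataclasses import dataclass, field

@dataclass
class Context:
    owner: str = ""

@dataclass
class RessourceContext(Context):
    cpu_load: float = 0.0          # current processing-power usage (0..1)
    memory_used_mb: int = 0

@dataclass
class HardContext(Context):
    device_type: str = "Smartphone"
    battery_level: str = "High"    # qualitative: High / Medium / Low

@dataclass
class UserContext(Context):
    name: str = ""
    preferred_modalities: list = field(default_factory=list)
    health_status: str = "Healthy"

@dataclass
class PlaceContext(Context):
    longitude: float = 0.0
    latitude: float = 0.0
    altitude: float = 0.0

# A profile aggregates sub-contexts, as in the ontology.
@dataclass
class Profile:
    user: UserContext
    device: HardContext
    place: PlaceContext

p = Profile(UserContext(name="Ahmed", preferred_modalities=["Voice"],
                        health_status="VisualHandicap"),
            HardContext(battery_level="Low"),
            PlaceContext(longitude=5.41, latitude=36.19))
```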
Users express new media/modality preferences and have new requirements. Media and modality preferences and context variations require document adaptation according to execution constraints; e.g., audio contents and the voice modality may not be played while a user is participating in a meeting.
User interactions are now potentially spread over multiple devices with different interaction modalities and should be interpreted according to the user's situation and context variations. Modality preferences in mobile and ubiquitous environments evolve dynamically, and handling them is one of the important tasks that can provide high flexibility to mobile clients. An explicit modality constraint is a set of conditions and a set of actions:
• Include modality: according to the user's situation, this action includes a modality for him. For example, if the user has a visual handicap, the voice modality is included.
• Exclude modality: according to the user's situation, this action excludes a modality for him. For example, if the user has a visual handicap, the touch and gesture modalities are excluded.
• Equivalent modality: an equivalent modality is required under the same circumstances as the previously used modality.
• Complement modality: an extra modality is added when needed to fulfill a specific task.
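The four actions above can be sketched as operations on a set of active modalities. This is a simplified illustration under our own data representation; the actual constraints are encoded in OWL in OntoSmart:

```python
# Sketch: applying explicit modality constraints (Include / Exclude /
# Equivalent / Complement) to a user's set of active modalities.
def apply_modality_constraint(active, operator, modality, equivalent_of=None):
    """Return a new set of modalities after applying one constraint action."""
    active = set(active)
    if operator == "Include":        # e.g. visual handicap -> add voice
        active.add(modality)
    elif operator == "Exclude":      # e.g. visual handicap -> drop touch
        active.discard(modality)
    elif operator == "Equivalent":   # replace a modality by an equivalent one
        if equivalent_of in active:
            active.discard(equivalent_of)
            active.add(modality)
    elif operator == "Complement":   # extra modality for a specific task
        active.add(modality)
    return active

# Example: a user with a visual handicap.
mods = {"Touch", "Gesture", "Voice"}
mods = apply_modality_constraint(mods, "Exclude", "Touch")
mods = apply_modality_constraint(mods, "Exclude", "Gesture")
mods = apply_modality_constraint(mods, "Include", "Voice")
print(sorted(mods))   # ['Voice']
```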
A. Cloud-based Service Ontology
Nowadays, environments are getting smarter in order to answer user requests anytime and anywhere according to the user's location, and interactions between users aim at obtaining quality services from providers. A service can be either smart or interactive, and any service can be used locally or on the cloud (see Fig. 2). A smart service handles the data storage that users need to run their applications. An interactive service allows unimodal and multimodal interactions.
Fig. 2. Cloud-based Service Ontology.
B. Health-Domain Ontology
User profiles in the mHealth domain are semantically poor, not generic and not dynamic enough to take into account the evolving situation of a patient. To model the mHealth domain, specialized classes with their corresponding attributes and relations have been defined in the following subclasses:
• mHealth users' profiles: The core of OntoSmart is the conceptualization of the User Context Profile concept. It should be noted that the goal is not only to provide a useful categorization of the user profiles, but also to provide each category with its own specific attributes. There are several subclasses (Doctor, Nurse, Pharmacist and Patient). The Patient has symptoms and a document, which is attached to sensors.
• mHealth services: the mHealth domain includes a variety of services that collect data about patient activity and analyze these data to extract clinical knowledge.
• mHealth documents: mHealth documents consist of three subclasses (Report, Monitoring and Treatment). Our goal is to provide the most appropriate treatments in critical situations (e.g., hypoglycemic diabetic coma).
• mHealth hardware and sensors: mHealth situations are observed through a wide variety of embedded sensors. The heterogeneity of such sensors and the diversity of users' needs require management, quality of service and adaptation to critical situations.
• mHealth symptoms: a symptom is used during diagnosis to detect possible diseases. Symptoms may be multiple for a given disease. Our study includes all symptoms (fever, weakness, nausea, hypoglycemic diabetic coma).
• Signs of disease: signs are indicators of the patient's health: glucose level, weight, temperature, etc.
• mHealth institutions: defined as hospitals, clinics, health professional polyclinics and academic health centers.
• mHealth preferences: in our health-domain ontology, we specify explicit semantic constraints with qualitative and quantitative information. For instance, a user can specify the following constraint in his profile: "If it is lunch time, remind me to take some medicines using audio contents".
C. Context-aware services and ontology reasoning
Developer rules: The developer defines a set of rules in order to ensure the continuity of services and to provide context-aware services according to execution constraints and user preferences.
Rule 1: Looking for modality preferences in the cloud: the user's modality type should be equal to the other users' modality types, the modality operator is "Include" and the target interactive services are health services. This rule can be described in SWRL [15] as follows:
e-Health Pervasive Wireless Applications and Services (eHPWAS'15)
360
Fig. 3. Health-domain ontology extended from Top-level OntoSmart
Profil(?p) ^ ComposedOf(?p, ?sp) ^
DependsOn(?sp, ?act) ^ HasPreferences(?sp, ?pref) ^
ModalityPreferences(?pref) ^
HasModalitypreferences(?pref, ?mdpref) ^
HasModalityOperatortype(?pref, ?mop) ^
swrlb:equal(?mop, "Include") ^
InteractiveService(?s) ^ ServiceCategory(?s, "Health") ^
IsDeployed(?s, ?host) ^ swrlb:equal(?host, "Cloud") ^
DependsOn(?sp, ?s) ^ ServiceModality(?s, ?mp) ^
ModalityType(?mp, ?t) ^ swrlb:equal(?mdpref, ?t)
-> sqwrl:select(?s)
Rule 2: The service is migrated to the cloud when the battery level is low, so data can be stored separately, which helps minimize energy use. This rule can be described as follows:
Profile(?p) ^ ComposedOf(?p, ?sp) ^ DependsOn(?sp, ?d) ^
Device(?d) ^ Battery_Level(?d, ?batterylevel) ^
swrlb:equal(?batterylevel, "Low") ^ HealthService(?s) ^
DeployedOn(?s, ?host) ^ DefinedAs(?d, ?host) ^ Cloud(?cloud)
-> Migrated(?s, ?cloud)
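The same migration decision can be approximated outside a SWRL engine. The sketch below hand-codes the logic of Rule 2 over a dictionary-based profile; the attribute names echo the rule's predicates, but the representation is our own, not the platform's actual data model:

```python
# Sketch of Rule 2: migrate health services to the cloud when the
# hosting device's battery level is "Low".
def migrate_low_battery_services(profile):
    """Return the names of services migrated to the cloud."""
    migrated = []
    device = profile["device"]
    for service in profile["services"]:
        if (service["category"] == "Health"
                and service["deployed_on"] == device["name"]
                and device["battery_level"] == "Low"):
            service["deployed_on"] = "Cloud"   # Migrated(?s, ?cloud)
            migrated.append(service["name"])
    return migrated

profile = {
    "device": {"name": "doctor-phone", "battery_level": "Low"},
    "services": [
        {"name": "HealthMonitor", "category": "Health",
         "deployed_on": "doctor-phone"},
        {"name": "MediaPlayer", "category": "Multimedia",
         "deployed_on": "doctor-phone"},
    ],
}
print(migrate_low_battery_services(profile))   # ['HealthMonitor']
```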
Health-domain rules: The expert defines a set of rules in order to deliver relevant information to the clinician, to include/exclude different interaction modalities and to detect diseases.
Rule 3: Exclude undesirable preferences for visually handicapped users. This rule can be described in SWRL as:
Profil(?p) ^ ComposedOf(?p, ?sp) ^
DependsOn(?sp, ?user) ^ User(?user) ^
HealthStatus(?user, "VisualHandicap") ^
HasPreferences(?sp, ?mp) ^ ModalityPreferences(?mp) ^
HasPreferences(?sp, ?medp) ^ MediaPreferences(?medp) ^
DependsOn(?sp, ?service) ^ InteractiveService(?service) ^
ServiceModality(?service, ?mt) ^
DependsOn(?sp, ?doc) ^ Document(?doc)
-> HasMediapreferences(?medp, "Audio") ^
documentType(?doc, "Audio") ^ ModalityType(?mt, "Voice") ^
HasModalitypreferences(?mp, "Voice")
Rule 4: The detection of possible diseases based on the patient context (age, vital signs and symptoms). This rule can be described as follows:
Profile(?p) ^ ComposedOf(?p, ?sp) ^ DependsOn(?sp, ?d) ^
Device(?d) ^ DefinedAs(?d, ?host) ^ HealthService(?s) ^
DeployedOn(?s, ?host) ^ HasCategory(?s, "Digestive") ^
Inputs(?s, ?i1) ^ Fever(?i1) ^ isExistence(?i1, "YES") ^
Inputs(?s, ?i2) ^ Diar(?i2) ^ isExistence(?i2, "YES") ^
Inputs(?s, ?i3) ^ Abdominal_pain(?i3) ^ isExistence(?i3, "YES") ^
Inputs(?s, ?i4) ^ Nausee(?i4) ^ isExistence(?i4, "YES") ^
Inputs(?s, ?i6) ^ Vomissement(?i6) ^ isExistence(?i6, "YES") ^
Inputs(?s, ?i5) ^ Bio_Sensor_Temperature(?i5) ^
Qualitative_value(?i5, "High") ^ Disease(?m)
-> Flu(?m)
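Rule 4's antecedent is essentially a conjunction of symptom tests. A hand-coded equivalent might look as follows; this is a simplified sketch in which the symptom list merely mirrors the rule above, whereas real diagnosis would go through the full ontology and reasoner:

```python
# Sketch of Rule 4: flag a possible disease when all listed symptoms are
# present and body temperature is qualitatively "High".
REQUIRED_SYMPTOMS = {"fever", "diarrhea", "abdominal_pain",
                     "nausea", "vomiting"}

def detect_disease(patient):
    """Return a disease label if the rule's antecedent holds, else None."""
    symptoms_ok = REQUIRED_SYMPTOMS <= set(patient["symptoms"])
    temp_high = patient["temperature"] == "High"
    return "Flu" if symptoms_ok and temp_high else None

patient = {"symptoms": ["fever", "diarrhea", "abdominal_pain",
                        "nausea", "vomiting"],
           "temperature": "High"}
print(detect_disease(patient))   # Flu
```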
IV. CLOUD SEMANTIC-BASED DYNAMIC ARCHITECTURE
The proposed platform is built according to a layered architecture, as shown in Fig. 4. The architecture consists of three layers: a top-level layer dedicated to semantic queries; a middle layer consisting of the generic cloud-based semantic multimodal adaptation core; and a basic layer, the Kalimucho [1] layer, which offers service-level functions: (re)deployment and reconfiguration strategies according to the system nodes (Android, laptop), with QoS requirements, dynamic supervision of adaptation components and communication protocols between mobiles.
A. General Architecture
The cornerstone of our framework for dynamic interactive service selection is the generic cloud-based semantic multimodal adaptation core layer. It provides abstractions that hide the complexity and heterogeneity of the underlying service-based P2P principle and implements a cloud-based semantic social service strategy for each user according to his context, which is inferred automatically from user profiles and inference rules based on OntoSmart. All resources (smart services, user information and multimedia documents) are managed and classified in a dynamic platform, which relies on the following components: (1) the Cloud Management (CM) component; (2) the Profiles and Resource Management (PRM) component; and (3) the Dynamic Multimodal Adaptation Management (DMAM) component.
The CM component allows setting up and modeling a cloud-based profile by specifying the characteristics of its members, as well as modeling and managing the adaptation process. It allows: (1) enriching user profiles from different multimodal techniques (gesture, voice, pen click and mouse click) with various QoS attributes (media quality, execution time, security, etc.); (2) sharing health services in the online cloud registry; and (3) dynamically managing shared context-aware adaptation services, shared interactive health services with various modalities, and shared multimedia documents.
The PRM component allows the user to request multimedia documents from relevant on-line adaptation services in the shared cloud registry. Furthermore, this component supports the association of shared adaptation services by category (health, transport, multimedia, etc.), classified by QoS, and the publication of shared services as well as user experiments in the cloud registry as OWL files.
• Semantic profile matcher module: compares the user profile constraints (implicit and explicit) with other profiles' properties (context and modality reasoning). It produces an adaptation guide that contains adaptation directives for resolving the detected constraints. For instance:
- User A wants to get health services only with text and sound.
- User B has the required health text services and other extra health documents.
- User C has the required health sound services and other extra health documents.
Fig. 4. Layers and component of Cloud Dynamic Semantic-based Platform.
The matcher starts by comparing user A's profile constraints with user B's profile. In our case, user B satisfies only one required request (text), so user A then looks for the required sound services in user C's profile; there, the required sound service is found, so the search is accomplished.
- If user A gets all the required services from user B, the search is accomplished without looking for documents in user C's profile.
- If user A does not get any required service from the users near him (B and C in our case), he looks for it in the cloud.
- If an action corresponds to a modality inclusion or exclusion, then the adaptation directive contains at least one complement operator.
- If an action corresponds to a modality duplication, then the adaptation directive contains at least one redundancy operator.
- If an action corresponds to a modality update, then the adaptation directive contains at least one substitution operator.
- If an action corresponds to migrating a component from one device to another, then the adaptation directive contains at least one substitution operator.
Other adaptation directives related to implicit profile constraints are added, such as the supported modalities, the supported media types, the battery level, etc. For example, given a profile and a current situation, if we confirm that the voice modality must be excluded (because of the current noisy environment), the adaptation process should start by executing a substitution and replace voice with an OK button.
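The walkthrough above (users A, B and C) can be sketched as a simple staged lookup: nearby profiles first, the cloud as a fallback. This is a minimal sketch in which profile contents are represented as plain sets rather than OWL profiles:

```python
# Sketch of the semantic profile matcher: find where each required
# preference (media or modality) can be satisfied.
def match_preferences(required, nearby_profiles, cloud_services):
    """Map each required preference to the profile/source providing it."""
    resolution = {}
    for pref in required:
        source = None
        for name, offered in nearby_profiles.items():   # users near A
            if pref in offered:
                source = name
                break
        if source is None and pref in cloud_services:   # fall back to cloud
            source = "Cloud"
        resolution[pref] = source   # None = unresolved
    return resolution

# User A wants health services with text and sound.
required = ["text", "sound"]
nearby = {"UserB": {"text"}, "UserC": {"sound"}}
cloud = {"text", "sound", "video"}
print(match_preferences(required, nearby, cloud))
# {'text': 'UserB', 'sound': 'UserC'}
```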
• Semantic interactive services discovery module. This module discovers interactive services that match the semantic constraints of the user: action type (complement, substitution or equivalent operator), inputs/outputs, modality types, media type (image, video, sound or text) and specific user context information (user health context, user location, user language, user age, screen resolution, battery level, memory size, etc.). After the discovery module finds the matching interactive services, it sends them to the automatic adaptation decision component.
• Decision module. This module is responsible for selecting the best interaction service after calculating the score of each discovered service. Comparing the scores of the interactive services allows us to select the best one. In this case, the values of the quality formula [13] are used for ranking the relevant interactive services with a potential benefit.
• Reconfiguration and redeployment module: This module is responsible for the total or partial (re)deployment of the selected services.
B. Search Algorithm
Our semantic profile matching strategy tackles different aspects (location, time, service category, media and modality preferences). It provides users with an improved service search, as it gives the user the opportunity to look for missing preferences in other nearby or cloud profiles. It takes the user's current context (battery level, CPU load, bandwidth, user location) and all the surrounding resources (local, remote) as input, groups services into clusters based on (location, time, activity) and filters the services available in the cloud.
V. POSSIBLES SCENARIOS AND VALIDATION
A. Real-life Scenarios of Health and Social Care
Within the scope of our work, let us suggest the following scenario: Ahmed is an old man with a visual handicap. Today, when the start time of one of his scheduled health activities begins, his Smart TV reminds him to take his medicines using a voice modality. Then, when a serious health condition is detected by the camera sensor, an urgent event is triggered and we identify the categories of people able to intervene (neighbor, family member, doctor) with an order of preference. If the contacted person is the doctor, he takes an ambulance to Ahmed's home and uses his smartphone to follow Ahmed's health status using a gesture modality. When a low battery level is detected, the doctor wants to continue following Ahmed's health status; the data flow can then be stored in the cloud (see Fig. 6).
We have applied our approach to semantic user profiles and interactive health service exchanges spread over the local or cloud environment. An identical semantic model is created for every user profile, and it is connected to other profiles using the same context (location, time, service category, user preferences). These scenarios involve sending continuous sensor data (GPS location, speed, video stream, WiFi and 3G, etc.).
Input:  User profile constraints
Output: Interactive services matching results

Found_1 = false; /* required media preference is found */
Found_2 = false; /* required modality preference is found */
IF (media preference is matched with the local user profile)
   Found_1 = true;
IF (modality preference is matched with the local user profile)
   Found_2 = true;
IF (Found_1 == false || Found_2 == false) {
   /* Search interactive services at nearby users */
   IF (Found_1 == false && required media preference is found)
      Found_1 = true;
   IF (Found_2 == false && required modality preference is found)
      Found_2 = true;
}
IF (Found_1 == false || Found_2 == false) {
   /* Search interactive and multimedia services in the cloud */
   IF (Found_1 == false && required media preference is found)
      Found_1 = true;
   IF (Found_2 == false && required modality preference is found)
      Found_2 = true;
}
IF (Found_1 && Found_2)   /* both preferences were found */
IF (Found_1 && !Found_2)  /* only the media preference was found */
IF (!Found_1 && Found_2)  /* only the modality preference was found */
IF (!Found_1 && !Found_2) /* none of the preferences was found */
Fig. 5. Alignment Algorithm.
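The three-tier search of Fig. 5 can be sketched in Java as follows. The `Tier` enum and the matching predicates are illustrative stand-ins for the platform's actual ontology matching calls, which the paper does not detail.

```java
import java.util.function.Predicate;

// Sketch of the Fig. 5 alignment algorithm: search the local profile first,
// then nearby users, then the Cloud, until both the media and the modality
// preference are satisfied. Names are hypothetical, not the platform's API.
public class AlignmentAlgorithm {
    enum Tier { LOCAL, NEARBY, CLOUD }

    static String align(Predicate<Tier> mediaMatch, Predicate<Tier> modalityMatch) {
        boolean foundMedia = false, foundModality = false;
        for (Tier tier : Tier.values()) {                    // LOCAL -> NEARBY -> CLOUD
            if (!foundMedia && mediaMatch.test(tier))        foundMedia = true;
            if (!foundModality && modalityMatch.test(tier))  foundModality = true;
            if (foundMedia && foundModality) break;          // stop once both are found
        }
        if (foundMedia && foundModality) return "both preferences found";
        if (foundMedia)                  return "only the media preference found";
        if (foundModality)               return "only the modality preference found";
        return "no preference found";
    }

    public static void main(String[] args) {
        // Media matched at a nearby user, modality only available in the Cloud.
        String result = align(t -> t == Tier.NEARBY, t -> t == Tier.CLOUD);
        System.out.println(result); // prints "both preferences found"
    }
}
```

Escalating through the tiers in this fixed order keeps cheap local matches ahead of the more expensive Cloud lookup.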
First scenario (Health status): This scenario occurs when
a user has a visual handicap: service modalities and media
contents should change automatically according to his health
status. We therefore specify that only audio media is authorized;
video, text and images are excluded. All service modalities are
excluded except the voice modality, so the user can command his
mobile device using only his voice. If the user presses a button
in this situation, an exception is inferred telling him that he
is not authorized to do it.
Second scenario (Schedule interactive health services): A
user has a daily schedule; when the start time of one of his
scheduled activities arrives, the required service should be
inferred and started on the user's device. The user is also able
to look for services at nearby users, who must be located in the
same area and offer the service he wants to obtain. The resulting
service should be the best one for the user according to his
media and modality preferences. Since the cloud can handle big
data storage, users are able to put their services in the cloud,
where other users can look them up.
Third scenario (Low battery level): This scenario occurs
when a user has a low battery level and wants to move his service
to the cloud in order to reduce the load on his device.
Exceptions are defined so that devices with medium or high
battery levels cannot put their services in the cloud under this
scenario.
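The low-battery rule of the third scenario can be sketched as a simple guard. The threshold value and method names are assumptions for illustration only; the paper does not specify them.

```java
// Illustrative sketch of the third scenario's rule: only devices with a low
// battery level may move their service to the cloud; medium or high levels
// raise an exception. Threshold and names are hypothetical.
public class BatteryOffloadRule {
    static final int LOW_BATTERY_THRESHOLD = 20; // percent; assumed value

    static String requestCloudOffload(int batteryPercent) {
        if (batteryPercent >= LOW_BATTERY_THRESHOLD) {
            throw new IllegalStateException("offload refused: battery not low");
        }
        return "service moved to cloud";
    }

    public static void main(String[] args) {
        System.out.println(requestCloudOffload(15)); // prints "service moved to cloud"
        try {
            requestCloudOffload(80);                 // medium/high battery: refused
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());      // prints "offload refused: battery not low"
        }
    }
}
```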
e-Health Pervasive Wireless Applications and Services (eHPWAS'15)
Fig. 6. Possible scenarios.
B. Performance Comparison
Originally, the context searching service used SWRL rules
(available through the standard OWL Java API) to access mHealth
services from the cloud ontology. We performed these experiments
on a laptop running Windows 7 (x64) with 6 GB of RAM and an
i7-2630QM quad-core processor (2 GHz). After some performance
tests, we realized that SWRL query execution becomes alarmingly
slow as the ontology grows in user profile instances, as can be
seen in Table 1. Once this problem was detected, we decided to
use our Java API algorithm, which makes the query time
considerably faster by applying each rule separately. Moreover,
its execution time remains constant regardless of the domain
ontology size.
TABLE I. PERFORMANCE EVALUATIONS
Case 1: Looking up and finding preferences on local devices
    SWRL rules elapsed time: 9975 ms
    Java rules elapsed time: 9487 ms
Case 2: Looking up and finding preferences at nearby users
    SWRL rules elapsed time: 10289 ms
    Java rules elapsed time: 427 ms
Case 3: Looking up and finding preferences in the Cloud
    SWRL rules elapsed time: 12613 ms
    Java rules elapsed time: 12613 ms
VI. CONCLUSION
This paper presents a cloud semantic-based dynamic multimodal
adaptation platform. Our platform identifies situations, infers
constraints and determines the necessary adaptations. The novelty
and originality of our approach lie in the combination of the
following points: (1) minimizing the execution time of SWRL
queries, which facilitates alignment and inference in the
ontology; (2) experimenting with an ontology containing more than
one hundred user profiles. In future work, we would like to
improve the efficiency of our Java algorithms. Moreover, we plan
to test the alignment using more than one ontology.