Volume 6, Issue 3
2016
ISSN 2225-658X (ONLINE)
ISSN 2412-6551 (PRINT)
Editors-in-Chief
Prof. Hocine Cherifi, Universite de Bourgogne, France
Dr. Rohaya Latip, University Putra Malaysia, Malaysia
Editorial Board
Ali Sher, American University of Ras Al Khaimah, UAE
Altaf Mukati, Bahria University, Pakistan
Andre Leon S. Gradvohl, State University of Campinas, Brazil
Azizah Abd Manaf, Universiti Teknologi Malaysia, Malaysia
Bestoun Ahmed, University Sains Malaysia, Malaysia
Carl Latino, Oklahoma State University, USA
Dariusz Jacek Jakóbczak, Technical University of Koszalin, Poland
Duc T. Pham, University of Birmingham, UK
E.George Dharma Prakash Raj, Bharathidasan University, India
Elboukhari Mohamed, University Mohamed First, Morocco
Eric Atwell, University of Leeds, United Kingdom
Eyas El-Qawasmeh, King Saud University, Saudi Arabia
Ezendu Ariwa, London Metropolitan University, United Kingdom
Fouzi Harrag, UFAS University, Algeria
Genge Bela, University of Targu Mures, Romania
Guo Bin, Institute Telecom & Management SudParis, France
Hadj Hamma Tadjine, Technical university of Clausthal, Germany
Hassan Moradi, Qualcomm Inc., USA
Hocine Cherifi, Universite de Bourgogne, France
Isamu Shioya, Hosei University, Japan
Jacek Stando, Technical University of Lodz, Poland
Jan Platos, VSB-Technical University of Ostrava, Czech Republic
Jose Filho, University of Grenoble, France
Juan Martinez, Gran Mariscal de Ayacucho University, Venezuela
Kaikai Xu, University of Electronic Science and Technology of China, China
Khaled A. Mahdi, Kuwait University, Kuwait
Ladislav Burita, University of Defence, Czech Republic
Maitham Safar, Kuwait University, Kuwait
Majid Haghparast, Islamic Azad University, Shahre-Rey Branch, Iran
Martin J. Dudziak, Stratford University, USA
Mirel Cosulschi, University of Craiova, Romania
Mohamed Amine Ferrag, Guelma University, Algeria
Monica Vladoiu, PG University of Ploiesti, Romania
Nan Zhang, George Washington University, USA
Noraziah Ahmad, Universiti Malaysia Pahang, Malaysia
Pasquale De Meo, University of Applied Sciences of Porto, Italy
Paulino Leite da Silva, ISCAP-IPP University, Portugal
Piet Kommers, University of Twente, The Netherlands
Radhamani Govindaraju, Damodaran College of Science, India
Ramadan Elaiess, University of Benghazi, Libya
Rasheed Al-Zharni, King Saud University, Saudi Arabia
Su Wu-Chen, Kaohsiung Chang Gung Memorial Hospital, Taiwan
Talib Mohammad, University of Botswana, Botswana
Tutut Herawan, University Malaysia Pahang, Malaysia
Velayutham Pavanasam, Adhiparasakthi Engineering College, India
Viacheslav Wolfengagen, JurInfoR-MSU Institute, Russia
Waralak V. Siricharoen, University of the Thai Chamber of Commerce, Thailand
Wen-Tsai Sung, National Chin-Yi University of Technology, Taiwan
Wojciech Zabierowski, Technical University of Lodz, Poland
Yasin Kabalci, Nigde University, Turkey
Yoshiro Imai, Kagawa University, Japan
Zanifa Omary, Dublin Institute of Technology, Ireland
Zuqing Zhu, University of Science and Technology of China, China
Publisher
The Society of Digital Information and Wireless Communications
Miramar Tower, 132 Nathan Road, Tsim Sha Tsui, Kowloon, Hong Kong
Further Information
Website: http://sdiwc.net/ijdiwc, Email: ijdiwc@sdiwc.net,
Tel.: (202)-657-4603 - Inside USA; 001(202)-657-4603 - Outside USA.
Overview
The SDIWC International Journal of Digital Information and Wireless
Communications is a refereed online journal designed to address the
networking community from both academia and industry, to discuss
recent advances in the broad and quickly-evolving fields of computer and
communication networks, technology futures, national policies and
standards and to highlight key issues, identify trends, and develop visions
for the digital information domain.
In the field of wireless communications, the topics include: Antenna
Systems and Design, Channel Modeling and Propagation, Coding for
Wireless Systems, Multiuser and Multiple Access Schemes, Optical
Wireless Communications, Resource Allocation over Wireless Networks,
Security, Authentication and Cryptography for Wireless Networks, Signal
Processing Techniques and Tools, Software and Cognitive Radio, Wireless
Traffic and Routing in Ad-hoc Networks, and Wireless System Architectures
and Applications. Because one of the most important aims of this journal is
to increase the usage and impact of knowledge as well as the visibility and
ease of use of scientific materials, IJDIWC does NOT CHARGE authors any
fee for online publication of their materials in the journal and does NOT
CHARGE readers or their institutions for access to the published materials.
Permissions
International Journal of Digital Information and Wireless Communications
(IJDIWC) is an open access journal which means that all content is freely
available without charge to the user or his/her institution. Users are
allowed to read, download, copy, distribute, print, search, or link to the
full texts of the articles in this journal without asking prior permission
from the publisher or the author. This is in accordance with the BOAI
definition of open access.
Disclaimer
Statements of fact and opinion in the articles in the International Journal
of Digital Information and Wireless Communications (IJDIWC) are those
of the respective authors and contributors and not of the International
Journal of Digital Information and Wireless Communications (IJDIWC) or
The Society of Digital Information and Wireless Communications (SDIWC).
Neither The Society of Digital Information and Wireless Communications
nor the International Journal of Digital Information and Wireless
Communications (IJDIWC) makes any representation, express or implied,
with respect to the accuracy of the material in this journal, and neither can
accept any legal responsibility or liability for errors or omissions that may
be made. The reader should make his/her own evaluation of the
appropriateness or otherwise of any experimental technique described.
Copyright © 2016 sdiwc.net, All Rights Reserved
The issue date is July 2016.
International Journal of
DIGITAL INFORMATION AND WIRELESS COMMUNICATIONS
IJDIWC, Volume 6, Issue No. 3, 2016
ISSN 2225-658X (Online); ISSN 2412-6551 (Print)
TABLE OF CONTENTS
Original Articles
PAPER TITLE | AUTHORS | PAGES
DEVELOPMENT OF AN AUTOMATIC HEALTH SCREENING SYSTEM FOR RELIABLE AND SPEEDUP MEASUREMENT OF PERSONAL HEALTH DATA | Eiichi Miyazaki, Hiroshi Kamano, Kazuaki Ando, Yoshiro Imai | 153
AN E-HEALTHCARE SYSTEM FOR UBIQUITOUS AND LIFE-LONG HEALTH MANAGEMENT | Eiichi Miyazaki, Hiroshi Kamano, Kazuaki Ando, Yoshiro Imai | 163
ENCAPSULATION OF REAL TIME COMMUNICATIONS OVER RESTRICTIVE ACCESS NETWORKS | Rolando Herrero | 173
DEVELOPMENT OF A WEB-BASED PROXIMITY BASED MEDIA SHARING APPLICATION | Erol Ozan | 184
SIMULATION OF KNOWLEDGE SHARING IN BUSINESS INTELLIGENCE | Pornpit Wongthongtham, Behrang Zadjabbari, Hassan Marzooq Naqvi | 192
KIWILOD: A FRAMEWORK TO TRANSFORM NEW ZEALAND OPEN DATA TO LINKED OPEN DATA | Rivindu Perera, Parma Nand | 206
AN EFFICIENT ALGORITHM FOR SHORTEST COMPUTER NETWORK PATH | Muhammad Asif Mansoor, Taimoor Karamat | 213
Development of an Automatic Health Screening System for Reliable and Speedup
Measurement of Personal Health Data
Eiichi Miyazaki*1, Hiroshi Kamano*2, Kazuaki Ando*3 and Yoshiro Imai*3
{*1 Faculty of Education, *2 Health Center and *3 Faculty of Engineering} Kagawa University
{*1 1-1 Saiwai-cho, *2 2-1 Saiwai-cho, *3 2217-20 Hayashi-cho} Takamatsu, Japan
{miya@ed, kamano@cc, {ando, imai}@eng }.kagawa-u.ac.jp
ABSTRACT
This paper describes in detail an Automatic Health
Screening System, which has been designed and
implemented with a PC connected to the campus
network, an IC card reader, and special interfaces for
physical measuring devices such as a height meter,
weight meter, blood pressure monitor, and so on. The
major characteristics of our screening system are the
use of IC cards for user identification, the interfacing
of several kinds of physical measuring devices, and the
transfer of measured data, labeled with the examinee's
ID, into the health management database of the Health
Center of our university. Human errors and incorrect
identification can be reduced by means of automatic
identification with the IC card, and health screening
can be sped up through the special interfaces between
the physical measuring devices and computers. After
students' health data (a kind of Personal Health Record)
have been accumulated, the measured data can be
retrieved efficiently with a Web-DB system over our
distributed campus network environment. With our
automatic health screening system, almost all students
can easily participate in university-level health
screening during the Routine Physical Examination,
and reliable measurement of physical/medical data can
be performed in a relatively short period.
KEYWORDS
Health screening, Personal Health Record (PHR), IC
card, Interfacing physical measuring devices, Web-DB
system for health retrieval.
1 INTRODUCTION
It is very important for every university student to
receive sufficient health education from his or her
university. Of course, almost all universities want to
provide effective health education for their students
and, if possible, their staff. Some universities have
offered trial health education services, such as
healthcare consultancy by dedicated doctor(s) and/or
Web-based health consulting services[1]. Although
such healthcare support and health education are very
useful and necessary for students as well as staff, they
may not be as effective as expected unless students
and/or staff actually undergo their Routine Physical
Examinations.
Medium-sized and larger universities sometimes lack
an environment in which physical screening can be
provided in a short period, so their students have very
few opportunities to receive Routine Physical
Examinations at their universities. For this reason,
universities need to equip an effective environment for
the Routine Physical Examination of their students
and/or staff. Facing this same situation at Kagawa
University, we have designed and implemented an e-
Healthcare Management Scheme[7] in order to provide
effective health education for our students.
In particular, it is important and necessary to provide
some kind of rapid physical screening with
automatically controlled measuring devices that can be
interfaced with computers.
With such smart measuring devices, it is possible to
realize automatic health screening in a short period and
to reduce human errors and misjudgments in the
physical screening of a large number of university
students.
This paper introduces the research background for the
development of our Automatic Health Screening
System in the next (second) section. The third section
illustrates the system configuration in detail. The
fourth section demonstrates the current state of our
system during the previous Routine Physical
Examination at Kagawa University together with its
qualitative evaluation. The fifth section finally
summarizes the concluding remarks of this work.
2 BACKGROUND OF SYSTEM
DEVELOPMENT
This section introduces the research background of our
system development, whose aim is to perform effective
health screening at the Routine Physical Examination.
The first half of the section deals with related work;
the second half discusses the problems of past physical
examinations, particularly in our university.
2.1 Related Work
Omar et al. discussed an experimental scenario for an
e-health monitoring system (EHMS) that uses a
service-oriented architecture (SOA) as a model for
deploying, discovering, integrating, implementing,
managing, and invoking e-health services[2]. They
said, ''Such a model can help the healthcare industry to
develop cost efficient and dependable healthcare
services.''
Caceres et al. said, ''E-health is one of the fastest-
growing application areas for intelligent mobile
services. The ever-growing number and variety of
health-related devices and tasks calls for mechanisms
to automatically discover, invoke, and coordinate the
corresponding services. This, in turn, requires
languages and tools that support a semantically rich
description of e-health services.'' Their paper[3]
focused on service discovery for medical-emergency
management and presented a new mechanism for
semantic service discovery that complements existing
approaches by considering relevant parts of the
organizational context in which e-health services are
used, to improve system usability in emergencies.
Toninelli et al. said, ''Mobile e-health has great
potential to extend enterprise hospital services
beyond traditional boundaries, but faces many
organizational and technological challenges. In
pervasive healthcare environments, characterized
by user/service mobility, device heterogeneity,
and wide deployment scale, a crucial issue is to
discover available healthcare services taking into
account the dynamic operational and
environmental context of patient-healthcare
operator interactions. In particular, novel
discovery solutions should support interoperability
in healthcare service descriptions and ensure
security during the discovery process by making
services discoverable by authorized users only.''
Their paper[4] proposed a semantic-based secure
discovery framework for mobile healthcare
enterprise networks that exploits semantic
metadata (profiles and policies) to allow flexible
and secure service search/retrieval.
Phunchongharn et al. said, ''Wireless communications
technologies can support efficient healthcare services
in medical and patient-care environments.'' They
pointed out, however, that using wireless
communications in a healthcare environment raises
two crucial issues:
RF transmission can cause electromagnetic
interference (EMI) to biomedical devices, which could
then critically malfunction.
Different types of electronic health applications
require different qualities of service.
Their paper[5] introduced an innovative wireless
access scheme, called EMI-aware prioritized wireless
access, to address these issues: the system architecture
for the proposed scheme was introduced, and then an
EMI-aware handshaking protocol was proposed for
e-Health applications in a hospital environment. Their
protocol protected the biomedical devices from
harmful interference by adapting the transmit power of
wireless devices based on the EMI constraints.
The paper of De Meo et al.[6] presented a multiagent
system to support patients searching for healthcare
services in an e-health scenario. Their proposed system
was HL7-aware in that it represented both patient and
service information according to the directives of HL7,
the information management standard adopted in the
medical context. Their system built a profile for each
patient and used it to detect Healthcare Service
Providers delivering e-health services potentially
capable of satisfying the patient's needs. To handle this
search it could exploit three different algorithms: the
first, called PPB, used only information stored in the
patient profile; the second, called DS-PPB, considered
both information stored in the patient profile and
similarities among the e-health services delivered by
the involved providers; the third, called AB, relied on
A*, a popular search algorithm in Artificial
Intelligence.
2.2 Problems of the Past Physical Examination
in University
First, we should explain the problems we had with the
Routine Physical Examination over the past several
years. The Health Center of Kagawa University is
responsible for providing and managing Routine
Physical Examinations at the beginning of April, the
start of our first semester (Japanese universities begin
their first semester in April and their second in
October). Every year, all new members of the
university have to receive their Routine Physical
Examinations on the specified day before lectures
begin.
The staff of the Health Center always finish their
preparations so that the physical examinations can be
provided in a short period with few accidents and
mistakes, but they are sometimes still frustrated by
several kinds of human errors and unpreventable
troubles.
The flow of the Routine Physical Examination is
illustrated in Figure 1. This sample flow consists of the
following three major steps:
1. Using physical measuring devices: An
examinee receives measured data recorded on
paper. For example, a blood pressure
monitoring device shows him/her the maximal
and minimal values of blood pressure and the
heart pulses per minute, such as 110 mmHg,
71 mmHg, and 81 pulses/min, respectively.
2. Handwriting of data onto formatted paper:
Staff of the Health Center have to record the
measured physical data and the examinee's
profile on the formatted paper. At the same
time, the staff are required to check whether
the measured data are plausible and to confirm
the identity of the relevant examinee. This
used to be a time-consuming and error-prone
task.
3. Entering the data on paper into computers:
Thousands of sheets of data must be input into
computers as fast as possible, because some
examinees need health certificates or medical
examination reports, for example for the job-
hunting process. The Health Center therefore
had to pay outsourcing costs to have such data
entered into computers in order to obtain
computer-readable data rapidly. At the same
time, very close attention must be paid to such
data handling, because the relevant data fall
within the privacy and security domain.
Figure 1. Flow of Physical Examination with Time-
consuming Procedures.
These are the reasons for improving the past
examination flow described above; such a flow
necessarily involves time-consuming and costly
procedures. The next section presents some ways to
reduce these time-consuming procedures and to save
otherwise unneeded outsourcing costs.
3 SYSTEM CONFIGURATION
This section illustrates the system configuration of our
automatic health screening system. It is separated into
the following subsections: identification of the
examinee with the student IC card for rapid registration
at the physical examination, software design for
interfacing physical measuring devices to computers,
construction of a simple Web-based data monitoring
facility for healthcare self-checking, and future plans
for expanding the system to serve multiple purposes of
Health Management Services.
3.1 Identification of Examinee with Student IC
Card
Kagawa University has adopted FeliCa-based IC cards
for student and staff identification; details about FeliCa
can be found on the relevant Web pages[8]. In our case,
these cards are used to identify examinees during the
Routine Physical Examination. Figure 2 illustrates the
identification of an examinee in physical screening
with the student IC card of Kagawa University.
The examinee's profile is read and shown on the
computer only when the relevant IC card is placed on
the IC card reader connected to that computer, which
can in turn be interfaced with a dedicated physical
measuring device such as a height meter, weight meter,
blood pressure monitor, and so on. This approach
reduces the complicated examinee-identification task
described above. Moreover, it removes the human
errors of handwriting examinees' profiles and gives us
suitable information about each examinee for cross-
checking.
The details of reading and writing FeliCa-based IC
cards have already been reported in our previous
paper[7] at the international conference DICTAP2011,
held in wonderful Dijon, France. With IC card
identification, it has become possible to speed up
examinee identification during several kinds of
physical screening, and to avoid the unneeded human
errors that occur when information about the examinee
is recorded on the formatted paper by the staff's
handwriting. A sketch of this identification step is
given after Figure 2.
Figure 2. Identification of Examinee of Physical Screening
with IC Card.
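To make the identification step concrete, the following is a minimal sketch of how a FeliCa student card might be read from Python with the open-source nfcpy library and a USB FeliCa reader. The service and block numbers and the card layout are placeholder assumptions for illustration; the actual layout of the Kagawa University student card is not described in this paper.

```python
# Hedged sketch of FeliCa-based examinee identification using nfcpy.
# The service/block numbers below are placeholders, not the real layout
# of the university's student ID card.
import nfc
from nfc.tag import tt3

def on_connect(tag):
    # Proceed only for FeliCa (NFC Type 3) tags, such as the student card.
    if isinstance(tag, tt3.Type3Tag):
        sc = tt3.ServiceCode(0, 0x0B)     # placeholder: service 0, read-only
        block = tt3.BlockCode(0)          # placeholder: first 16-byte block
        data = tag.read_without_encryption([sc], [block])
        print('Examinee ID block (raw):', data.hex())
    return False                          # release the card immediately

clf = nfc.ContactlessFrontend('usb')      # USB-attached FeliCa reader
try:
    clf.connect(rdwr={'on-connect': on_connect})  # blocks until a card is placed
finally:
    clf.close()
```

In a real deployment, the raw block would be decoded into the student ID and profile according to the card's actual data layout.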
3.2 Interfacing of Physical Measuring Devices
to Computers
A few years ago, the old physical measuring devices
began to be replaced, one by one, with new models
that offer connectivity to computers via USB or
RS-232C interfaces. The former can easily be
connected to almost all types of personal computers
(PCs); the latter, however, is sometimes inconvenient
and requires some kind of connection module for
interfacing with a computer. We have prepared such
modules for our portable note PCs so that the relevant
physical measuring devices can be connected as
necessary. Note PCs are particularly suitable for the
Routine Physical Examination because of their
portability and ease of handling.
Figure 3 illustrates an automatic health screening
system organized around a computer, some kinds of
physical measuring devices, an IC card reader/writer,
and a student IC card. Our automatic health screening
system reads not only the relevant data from the target
measuring device but also the examinee's profile from
the IC card reader. It stores such data and the
corresponding profile in its temporary storage. Finally,
it can write them to a network server, or to the
examinee's IC card, as the occasion demands. A sketch
of reading a serial-attached measuring device is shown
after Figure 3.
Figure 3. Interfacing among IC card reader, Physical
Measuring Devices and Computer.
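As an illustration of the RS-232C case, the fragment below polls a measuring device over a serial line with the pySerial package. The port name, baud rate, and the comma-separated line format are assumptions made for this sketch; real blood pressure monitors each speak their own vendor-specific protocol.

```python
# Hedged sketch: reading one measurement from a serial-attached device.
# Port, baud rate, and the "110,71,81" line format are hypothetical.
import serial

def read_blood_pressure(port='/dev/ttyUSB0'):
    with serial.Serial(port, baudrate=9600, timeout=5) as ser:
        line = ser.readline().decode('ascii').strip()  # e.g. "110,71,81"
        systolic, diastolic, pulse = (int(v) for v in line.split(','))
        return {'bp_max': systolic, 'bp_min': diastolic, 'pulse': pulse}

if __name__ == '__main__':
    print(read_blood_pressure())  # e.g. {'bp_max': 110, 'bp_min': 71, 'pulse': 81}
```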
The health screening system writes the data and profile
to a specific network-attached information server when
it is used in an environment with network connectivity,
and writes them tentatively to the examinee's IC card
when used in an environment without connectivity. In
the latter case, the FeliCa-based IC card is utilized as
temporary storage of small capacity. During the
Routine Physical Examination in the gymnasium, for
example, our screening system must work without
network connectivity, so it operates according to the
following procedure:
1. First, each examinee performs user
identification by placing his/her IC card on the
entrance system. The IC card is read and its
previous data are stored in the backup memory
of the entrance system; the relevant IC card
thereby becomes vacant and available to hold
the measured data.
2. During each physical examination, the IC card
of each examinee works as temporary storage
and holds the measured data from each
automatic health screening system. These data
belong to the relevant examinee, are obtained
from the physical measuring devices, and are
written by the health screening system
automatically.
3. At the end of the physical examination, the IC
card is checked once again, by reading its
measured data at the entrance system, to verify
whether all items of the examination have been
completed. The entrance system thus also
performs the final operation. Finally, all the
measured data read by this system are
transferred to the information server.
Of course, our screening system also works in an
environment with network connectivity: it reads the
examinee's profile from the IC card, obtains data from
the interfaced measuring device at the same time,
combines the profile, measured data, timestamp, and
other attributes into a data tuple, accesses the target
information server, and transfers that tuple to the server
as a database record for the relevant examinee (see the
storage sketch below). Such records can be retrieved or
browsed through the Web-DB monitoring services,
which are described more precisely in the next
subsection.
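The networked path can be pictured as follows. This is a minimal sketch in which SQLite stands in for the information server (the paper does not name the actual database product); the table and column names are assumptions inferred from the tuple described above.

```python
# Sketch: combining profile, measurement and timestamp into one record
# and storing it server-side. SQLite and the schema are stand-ins.
import json
import sqlite3
from datetime import datetime

def store_measurement(db_path, student_id, device, values):
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS screening (
                        student_id TEXT, device TEXT,
                        measured_at TEXT, values_json TEXT)""")
    conn.execute("INSERT INTO screening VALUES (?, ?, ?, ?)",
                 (student_id, device,
                  datetime.now().isoformat(), json.dumps(values)))
    conn.commit()
    conn.close()

store_measurement('health.db', 's1234567', 'blood_pressure',
                  {'bp_max': 110, 'bp_min': 71, 'pulse': 81})
```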
With our automatic health screening system, the
Health Center of Kagawa University can reduce or
even eliminate the time-consuming task of writing
documents that combine measured data and examinees'
profiles on formatted paper by hand. Moreover, the
Center no longer needs to outsource the potentially
expensive task of converting large amounts of data on
formatted paper into machine-readable files.
3.3 Implementation of Simple Web-based Data
Monitoring Services
Another problem was how to efficiently utilize the
results of our automatic health screening system for
health education in the university. Although our
automatic health screening system can take in
measured data together with the examinee's profile far
more easily than in the past, we considered such a
system of limited use for health education on its own,
because the corresponding examination takes place
only once or twice a year. We therefore concluded that
our screening system should also be applied to more
frequently used services.
One of the expected services based on the results of
our health screening system is simple Web-based data
monitoring at any place or building in our distributed
campus network environment. Doctors as well as
nurses of our university can retrieve the measured data
of a specified student through our secured network
service and Web-DB browsing. They can access the
information server that manages all the measured data
of the students only from pre-registered client PCs and
with long passwords.
Figure 4 shows a conceptual image of Web-based data
monitoring through the campus network, as follows (a
retrieval sketch is given at the end of this subsection):
A doctor checks the measured data of a specified
student over the network. Such data have been
accumulated in the database and classified in a
time-series clustering style.
The data can be manipulated to illustrate how
they fluctuate in chronological order.
Doctors and/or nurses can perform some kinds of
longitudinal data analysis in order to check
whether the measured data of the relevant student
move into a warning or abnormal region.
Figure 4. Browsing Web-DB server and Monitoring
measured data.
In addition, an even more effective service is to
provide such a system to any student at any time at the
satellite offices of the Health Center on our distributed
campuses. That is, any student may be allowed to
operate an automatic health screening system in order
to investigate his/her own sampled health data, even
outside the Routine Physical Examination. The
relevant student can observe his/her health
parameter(s) in chronological order and check for
himself/herself whether his/her health is within a good
range. This is very good motivation for improving
students' health management and for paying proper
attention to health maintenance at university, and it is
well suited to raising the level of health education in
our university.
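A minimal sketch of the retrieval side, continuing the hypothetical SQLite schema used in the storage sketch above: one student's records are returned in chronological order, ready for time-series display in a Web-DB front end.

```python
# Sketch of Web-DB style retrieval: one student's records in time order.
# Continues the hypothetical 'screening' table from the storage sketch.
import json
import sqlite3

def history(db_path, student_id, device):
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        """SELECT measured_at, values_json FROM screening
           WHERE student_id = ? AND device = ?
           ORDER BY measured_at""", (student_id, device)).fetchall()
    conn.close()
    return [(ts, json.loads(v)) for ts, v in rows]

for ts, values in history('health.db', 's1234567', 'blood_pressure'):
    print(ts, values)
```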
4 QUALITATIVE EVALUATION
This section demonstrates the current state and
qualitative evaluation of our automatic health
screening system.
4.1 Previous Result of Routine Physical
Examination
In 2011, we applied a prototype of our automatic
health screening system to some parts of the Routine
Physical Examination of Kagawa University. Figure 5
shows our health screening system for the Routine
Physical Examination set up on a table. This system
consists of a note PC, an IC card reader, and a blood
pressure monitoring device.
Figure 5. Note-PC connected with IC card reader and
Blood Pressure Monitoring device.
This system was set on tables for examinees during
our Routine Physical Examination in our gymnasium,
as shown in Figure 6.
Figure 6. Support staff for blood pressure monitoring.
We also assigned some support staff so that unexpected
troubles would not disturb the measurements during
the Routine Physical Examination in our gymnasium.
The upper part of Figure 7 shows a photo of the
authors' rehearsal demonstration of blood pressure
monitoring. The lower part of Figure 7 shows the
measured data obtained from the blood pressure
monitoring device displayed on the PC.
Figure 7. Demonstration of rehearsal for blood pressure
monitoring test.
In the practical case of the previous Routine Physical
Examination with our system, the procedures below
were performed. Additionally, we had built some
conditional warning and detection services into our
system in order to reduce human errors and troubles in
the man-machine interaction.
First of all, our system had prospectively
incorporated measuring range(s), ready to obtain
adequate data. This was a useful mechanism for
avoiding potential machine troubles.
Our system showed the relevant measured data in
a categorized form so that each examinee could
recognize his/her measured data by visual
judgment. The system could also issue a warning
prompt for the examinee if his/her measured data
fell outside the prospectively incorporated
range(s). For example, for blood pressure
measurement, we had set prospectively
incorporated ranges as lower and upper bounds on
both BPmax and BPmin, where BPmax is the
maximal (systolic) reading of the blood pressure
meter and BPmin is the minimal (diastolic) one. A
range check in the spirit of this mechanism is
sketched at the end of this subsection.
A report on the blood pressure monitoring test
showed ten occurrences of mis-measurement
warnings among 300 examinees in total. In all of
these cases, the relevant examinees were
instructed to take their measurements again.
These observations can be considered qualitative
evidence of the effectiveness of our system.
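The warning mechanism described above amounts to a simple range check, sketched below. The numeric bounds are placeholders only, since the actual prospectively incorporated ranges are not reproduced in this paper.

```python
# Sketch of the out-of-range warning. The bounds are PLACEHOLDERS;
# the actual prospectively incorporated ranges are not given in the paper.
RANGES = {'bp_max': (70, 200), 'bp_min': (40, 130), 'pulse': (40, 160)}

def check_measurement(values):
    """Return warnings for values outside the preset ranges."""
    warnings = []
    for key, value in values.items():
        lo, hi = RANGES[key]
        if not lo <= value <= hi:
            warnings.append(f'{key}={value} outside [{lo}, {hi}]: measure again')
    return warnings

print(check_measurement({'bp_max': 110, 'bp_min': 71, 'pulse': 81}))  # []
```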
4.2 Trial utilization of system for pre-Routine
Physical Examination
We recognized that many problems remained to be
resolved in the real application of our health screening
system to the Routine Physical Examination in 2011.
We therefore decided to apply our system to the
pre-Routine Physical Examination in 2012. Figure 8
shows a photo of the environment with our system for
the pre-Routine Physical Examination in 2012. The use
of our health screening system for the pre-Routine
Physical Examination is summarized as follows:
Preparing a set of our health screening system at
the main office of the Health Center (at the front
desk of the office), as shown in Figure 8. Its
upper photo shows an overall view of our system,
and its lower one focuses on the blood pressure
monitoring subsystem, which is separated into
two parts: the interface for the examinee, located
outside the Health Center office, and the PC
connected to the campus LAN, located inside the
room.
Informing the students of the university that they
can receive the pre-Routine Physical Examination
if and only if they accept some regulations from
the Health Center (e.g., the specified date, the
limit on the number of examinees per day, etc.).
Selecting the set of students who want to receive
the pre-Routine Physical Examination at the
Health Center on the specified date.
Serving the same menu as the ''normal'' Routine
Physical Examination to the small number of
students who want to receive such an examination
before the ''normal'' one.
Figure 8. Photo of the environment with our system for the
pre-Routine Physical Examination in 2012.
4.3 Perspective Problems of Current System
Our automatic health screening system depends on the
Web-based data monitoring service for data retrieval
and modification of the set of measured data. Of
course, the Web-based data monitoring service is
another application within our project, so we can
customize it to be more flexible and improvable. But
there are some difficult problems to be resolved,
namely improving the user interface, enhancing
security/privacy, and employing standard methodology.
We plan the following three future projects to resolve
these problems:
1. Improving the user interface: Our prototype
version had only a simple, fixed user interface
for client PCs, which could not satisfy more
flexible client environments[9]. We therefore
needed to develop user interfaces for Web-
based data monitoring on tablet PCs,
smartphones, and so on.
2. Enhancing the security and privacy of the
Web-DB services: Security is very important
and necessary for global access, in particular
from outside the university. It would be an
expensive and time-consuming task for us to
raise the current security level to one suitable
for reliable use of our Web-based data
monitoring in an external network
environment[10]. We were therefore willing
to seek collaboration partnerships to develop
the database facility for our data retrieval and
modification.
3. Employing standard methodology: Our
prototype version worked in a very domestic
manner tailored to the demands of Kagawa
University; in other words, it did not follow
global standards. We believe that other
universities and/or schools must also need
health education support systems, so we
wanted to adapt our total system and strategy
to a more globally standard style[11].
We need to design and develop our new system
together with our collaboration partner(s) in order to
renew it with standard methodology.
5 CONCLUSION
This paper has described an Automatic Health
Screening System for student health education at
Kagawa University. Our health screening system was
applied to parts of the previous Routine Physical
Examination to help the staff of the Health Center
avoid human errors and time-consuming examination-
related tasks. From the application of this system
during the Routine Physical Examination, we obtained
some useful evaluation results from the trial runs. Such
a system can also support a pre-Routine Physical
Examination, reducing the workload of the actual
Routine Physical Examination and providing more
efficient physical screening than before. The
characteristics of our system configuration are
summarized as follows:
User identification with the student IC card:
Employing the student IC card for user
identification throughout the physical examination
is a good idea. With IC card-based identification,
the time needed to register and check examinees
for the physical screening tests is greatly reduced.
Interfaces between measuring devices and
computers: To build interfaces between physical
measuring devices and computers, the devices
must be connected to the computer's I/O ports
(such as USB), dedicated driver-like software must
be written to handle interrupts, and the devices
must be controlled by the computers. With
automatic control of the measuring devices by
computers, we can avoid randomly occurring
human errors as well as mistakes in writing down
measured data.
Web-based data monitoring: After data are
obtained from the physical measuring devices,
they can be accumulated into suitable records of
the physical screening examination, such as
e-Health records in a database. Such a database
can be queried by authenticated users over the
distributed campus information network
environment. With the facilities of Web-based
data monitoring, students can usefully check the
periodic state of their own health by themselves.
As a result of applying our system to the Routine
Physical Examination and other occasions, we have
confirmed through qualitative evaluation that it is very
useful and effective both for the student examinees and
for the staff of the Health Center.
Acknowledgment
The authors would like to thank Dr. Seigo Nagao, the
President of Kagawa University, for his great
leadership and very important suggestions from his
own medical point of view. They are also grateful to
Dr. Kazuhiro Hara, Professor Emeritus of Kagawa
University and the Funding Leader of K-MIX
(Kagawa Medical Internet eXchange), for his powerful
and continuous supervision of this study. Our research
has benefited from the practical comments and
continuous work of Mr. Daisuke Yamakata, Ms.
Tomomi Mori, a dedicated nurse of Kagawa
University, and the other staff of the Health Center of
Kagawa University.
REFERENCES
1. Examples of well-known Web-based healthcare support
services:
Google Health, http://www.google.com/intl/en-US/health/about/index.html
Microsoft HealthVault, http://www.healthvault.com/industry/index.html
2. W. M. Omar and A. Taleb-Bendiab: E-health support
services based on service-oriented architecture. IT
Professional, Vol. 8, No. 2, 35--41 (2006).
3. C. Caceres, A. Fernandez, S. Ossowski and M. Vasirani:
Agent-Based Semantic Service Discovery for
Healthcare: An Organizational Approach. IEEE
Intelligent Systems, Vol. 21, No. 6, 11--20 (2006).
4. A. Toninelli, R. Montanari and A. Corradi: Enabling
secure service discovery in mobile healthcare enterprise
networks. IEEE Wireless Communications, Vol. 16, No.
3, 24--32 (2009).
5. P. Phunchongharn, D. Niyato, E. Hossain and S.
Camorlinga: An EMI-Aware Prioritized Wireless
Access Scheme for e-Health Applications in Hospital
Environments. IEEE Transactions on Information
Technology in Biomedicine, Vol. 14, No. 5, 1247--1258
(2010).
6. P. De Meo, G. Quattrone and D. Ursino: An Integration
of the HL7 Standard in a Multiagent System to Support
Personalized Access to e-Health Services. IEEE
Transactions on Knowledge and Data Engineering,
Vol. 23, No. 8, 1244--1260 (2011).
7. Y. Imai, Y. Hori, H. Kamano, E. Miyazaki and T.
Takai: A Trial Design of e-Healthcare Management
Scheme with IC-Based Student ID Card, Automatic
Health Examination System and Campus Information
Network. In: H. Cherifi, J. M. Zain and E. El-Qawasmeh
(Eds.), The International Conference on Digital
Information and Communication Technology and its
Applications (DICTAP 2011), Part I, CCIS 166,
pp. 728--740 (2011).
8. FeliCa card information provided by SONY:
http://www.sony.net/Products/felica/about/index.html
9. K. Bessiere, S. Pressman, S. Kiesler and R. Kraut:
Effects of Internet Use on Health and Depression: A
Longitudinal Study. Journal of Medical Internet
Research, Vol. 12, No. 1 (2010).
10. K. B. Wright, S. Rains and J. Banas: Weak-Tie Support
Network Preference and Perceived Life Stress Among
Participants in Health-Related, Computer-Mediated
Support Groups. Journal of Computer-Mediated
Communication, Vol. 15, No. 4, 606--624 (2010).
11. A. Horgan and J. Sweeney: Young students' use of the
Internet for mental health information and support.
Journal of Psychiatric and Mental Health Nursing,
Vol. 17, No. 2, 117--123 (2010).
An e-Healthcare System for Ubiquitous and Life-long Health Management
Eiichi Miyazaki*1 Hiroshi Kamano*2 Kazuaki Ando*3 Yoshiro Imai*3
{*1 Faculty of Education, *2 Health Center and *3 Faculty of Engineering} Kagawa University
{*1 1-1 Saiwai-cho, *2 22-1 Saiwai-cho, *3 2217-20 Hayashi-cho} Takamatsu, Japan
{miya@ed, kamano@cc, {ando, imai}@eng}.kagawa-u.ac.jp
ABSTRACT
An e-Healthcare system with IC card authentication,
an automatic health screening sub-system, and Web-
based health information monitoring has been designed
and implemented for university health education. It can
be regarded as a prototype of a private Cloud
e-Healthcare service for university students: it obtains
their health records from physical measuring devices
under IC card-based authentication, manages their
health data in a suitable database, investigates the
relevant data according to requests from doctors/nurses,
and provides such data as necessary information for
self-healthcare through a Web-DB service. This paper
presents the organization of the above e-Healthcare
system, demonstrates its real usage in university health
education, describes a trial quantitative evaluation, and
outlines plans for expansion through practical
applications.
KEYWORDS
e-Healthcare, IC card authentication, health
information monitoring and retrieving, System
evaluation.
1 INTRODUCTION
People all over the world have a natural right to live
healthy lives. They have long been interested in the
state of their health, and nowadays almost all of them
long for a healthy environment more strongly than ever
before. Doctors always point out that people need to
maintain healthy lifestyles if they do not want to
become ill. It is very important for everyone to keep
his/her living environment at a healthy level. In other
words, everyone wants facilities for monitoring his/her
level of health and visualization tools for recognizing
whether that level is good or not.
In every institution of higher education, including of
course Japanese universities, the staff and
administration must provide health education and
equip a health management environment for the
students. This is important both so that current students
can study in good condition during their university
lives and so that the external society, including their
families, can welcome them as up-and-coming
members. Strictly speaking, however, universities face
several problems in providing efficient health
education, and they have been suffering from a lack of
staff and facilities for managing a health-maintaining
environment for their students.
An e-Healthcare approach seems to be one of the most
effective and efficient solutions for improving the
health education environment, given the above lack of
staff and facilities, in universities as well as in society
in general. This approach may provide a powerful
strategy for establishing a so-called Ubiquitous
Healthcare Service, in which users can always connect
to the information server, monitor their health
information in it, and obtain suitable advice and
instructions for their health management.
This paper describes our e-Healthcare system for
university students with IC card authentication, an
automatic health screening sub-system, and Web-based
health information monitoring. First of all, the next
(second) section introduces some related work in order
to compare and position our study with respect to the
state of the art in the same domain. The third section
shows the configuration of the newly developed
e-Healthcare system and illustrates some details of the
system and its facilities. The fourth section
demonstrates its real application in our university,
explains a brief evaluation of our e-Healthcare system,
and reports our challenge of expanding the system
beyond our campus into the future market. Finally, the
last (fifth) section concludes with our summary and
perspectives for this study, followed by
acknowledgements and some useful references.
2 RELATED WORKS
This section introduces some typical related works
(papers) in order to compare and position our study
with respect to the currently and/or previously
published papers in the same domain, cited below.
2.1 e-Healthcare Related Works
B. W. Trevor Rohm of Brigham Young University
and his son described in [1]: ''A vision of
the e-healthcare era is developed using scenarios
from today and for the future. The future of e-
healthcare is based on empowering individual
patients with current information about diagnosis
and treatment for personal decision-making about
their health without ever visiting a healthcare
facility. Empowering the patients is made possible
with a futuristic personal medical device (PMD).''
They added, ''The PMD is a black box, which
works in conjunction with the internet and locally
stores expert system programs. The PMD has
various accessories available to help with
diagnosis besides voice and image capabilities.''
Patrick C. K. Hung of the University of Ontario
Institute of Technology (UOIT) described in [2]:
''Information privacy is usually concerned with the
confidentiality of personal identifiable information
(PII) and protected health information (PHI) such
as electronic medical records. Thus, the
information access control mechanism for e-
Healthcare services must be embedded with
privacy-enhancing technologies.''
A. Mukherjee and J. McGinnis of Montclair State
University categorized and explained e-Healthcare
in their article [3], presenting the state of the art to
identify key themes in e-healthcare research. They
pointed out, ''E-
healthcare is contributing to the explosive growth
within this industry by utilizing the internet and all
its capabilities to support its stakeholders with
information searches and communication
processes. A review of the literature in the
marketing and management of e-healthcare was
conducted to determine the major themes pertinent
to e-healthcare research as well as the
commonalities and differences within these
themes. Based on the literature review, the five
major themes of e-healthcare research identified
are: cost savings; virtual networking; electronic
medical records; source credibility and privacy
concerns; and physician-patient relationships. E-
healthcare systems enable firms to improve
efficiency, to reduce costs, and to facilitate the
coordination of care across multiple facilities. ''
2.2 Ubiquitous Services of e-Healthcare in
Other Related Works
Nowadays, e-Healthcare is tightly connected with
ubiquitous computing services. In particular, mobile
computing is a key technology for realizing
e-Healthcare systems effectively and efficiently. The
papers below discuss the relations between mobile
computing and the know-how of constructing
e-Healthcare systems.
Zhuoqun Li and his supervisors at the University of
Plymouth described at the conference on
Computational Intelligence in Medicine and
Healthcare [4]: ''The growing availability of
networked mobile devices has created a vast
collective potential of unexploited resources. Grid
computing with its model of coordinated resource
sharing may provide a way to utilize such
resources that are normally distributed throughout
a mobile ad-hoc network.'' They also discussed the
general challenges of implementing Grid
functionalities (e.g., service discovery, job scheduling,
and Quality of Service (QoS) provisioning) in the
mobile environment, as well as the specific issues
arising from realistic application scenarios, i.e., the
e-healthcare emergency.
Min Chen and his co-researchers of Seoul National
University described in [5]: ''Radio frequency
identification technology has received an increasing
amount of attention in the past few years as an
important emerging technology. To address this
challenging issue, we propose an evolution to
second-generation RFID systems characterized by the
introduction of encoded rules that are dynamically
stored in RFID tags. This novel approach facilitates the
systems’ operation to perform actions on demand for
different objects in different situations, and enables
improved scalability. Based on 2G-RFID-Sys, we
propose a novel e-healthcare management system, and
explain how it can be employed to leverage the
effectiveness of existing ones. It is foreseeable that the
flexibility and scalability of 2G-RFID-Sys will support
more automatic and intelligent applications in the
future.''
2.3 Design Concept based on Previous Related
Works
We have designed our new e-Healthcare system based
not only on the problems we face in our university but
also on the previously published journal and
conference papers described in the above subsections.
Our design concepts are summarized as follows. The
former group consists of our original design concepts,
derived from the existing problems at the Routine
Physical Examination for students in our university,
namely:
Reduction of time-consuming tasks and
frequently occurring human errors.
Avoidance of paper-oriented information
exchange and sharing.
Applicability of the newly designed system to
health education in our university.
Usage of IC card-based student identification
for user authentication.
The latter group was added through investigation of
previous related work in the published papers, namely:
Utilization of Mobile Computing technologies
including Wireless LAN, 3G/GSM telephone
communication and others for position-
independent services.
Employment of suitable Electronic Medical
Records and/or Personal Health(care)
Records for seamless healthcare services.
Capability of the newly designed system to act
as a so-called Ubiquitous Service or Cloud
Service in order to provide an effective
healthcare environment.
3 e-HEALTHCARE SYSTEM
This section shows the configuration of our
e-Healthcare system, which was already announced in
the paper[6], and illustrates some details of the
system's characteristics and its typical facilities.
3.1 Configuration of e-Healthcare System
Figure 1 shows a conceptual configuration of our
e-Healthcare System, designed to resolve the existing
problems at the Routine Physical Examination for
students in our university. Its characteristics are
summarized as follows:
User (examinee) authentication with the IC card-
based student ID to simplify examinee checking.
Automatic acquisition of data from the physical
measuring devices into personal computers in
order to reduce the time-consuming tasks of
paper-based data recording.
Temporary data storage on the IC card for
Routine Physical Examinations in non-
networked environments.
Provision of a database for individual healthcare
records and health monitoring through the
campus network.
Professional health education by university
doctors and/or nurses through analysis of the
medical records from the Routine Physical
Examination.
Information retrieval of medical records through
Web-based monitoring with user authentication.
Figure 1. Conceptual Configuration of an e-Healthcare System (previously announced in [6])
At the IEEE International Conference on Services
Computing (SCC2007)[7] held in Lisboa in 2007,
F. Kart and co-researchers of University of
California, Santa Barbara described ''Large-scale
distributed systems, such as e-healthcare systems,
are difficult to develop due to their complex and
decentralized nature. The service oriented
architecture facilitates the development of such
systems by supporting modular design, application
integration and interoperation, and software reuse.
With open standards, such as XML, SOAP,
WSDL and UDDI, the service oriented
architecture supports interoperability between
services operating on different platforms and
between applications implemented in different
programming languages.'' They mentioned in
another article [8] in IT Professional (March-April
2008): ''Medical monitoring devices worn by the
patient, and frequent electronic communication
between the patient and a nurse, can ensure that
the prescribed treatment is being followed and that
the patient is making good progress. The e-
healthcare system can be readily extended to other
healthcare professionals, including medical
technicians who perform and report tests and
analyses requested by physicians.''
Their studies and results provided good ideas and a
comprehensive strategy for developing and improving
our e-Healthcare system, and at the same time taught
us how to select the various technologies needed to
implement an effective e-Healthcare system for our
university's demands. We do not employ the open
standards described in the above papers, but we
recognize that it is very important to design our system
with a modular system architecture and programming
style and to utilize standard protocols and data formats,
so that our system gains the expandability not only to
connect with other systems but also to adapt to several
kinds of users with interoperability.
3.2 Sub-systems and Facilities of Our e-
Healthcare System
First, as an example, we introduce a dedicated
sub-system comprising physical measuring devices, an
IC card reader/writer, and a personal computer (PC)
for control. We had already developed such
sub-systems as the Automatic Health Screening
System. Figure 2 shows a typical dedicated sub-system
with a blood pressure monitoring device and a vision
analyzer as the physical measuring devices. Figure 2
corresponds to part of the left-hand side of Figure 1
and focuses on a scheme in which a note PC is
connected to an IC card reader/writer and some kinds
of physical measuring devices, so that user
authentication and data acquisition can be realized
simultaneously in the following steps (a control-flow
sketch is given after the list):
1. Placing the examinee's IC card on the IC card
reader connected to the PC.
2. Authenticating the examinee's ID from the IC
card and obtaining his/her relevant information.
3. Acquiring data from the physical measuring
device connected to the PC.
4. Combining the measured data and the
examinee's information into a formatted record
with a timestamp.
5. Storing the above time-stamped record on the
IC card if the PC used for the physical
examination is not connected to the network
environment.
6. The database server collects such records from
the PC or IC card into its storage through the
network environment.
Figure 2. Dedicated Sub-system of Physical Measuring
Device, IC card Reader/Writer and Personal Computer
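The six steps above amount to the control flow sketched below: authenticate, acquire, timestamp, then fall back to the IC card as temporary storage when no network is available. All helper names are hypothetical stand-ins; the sub-system's actual routines are not published in this paper.

```python
# Control-flow sketch of the dedicated sub-system. The card, device and
# server objects are hypothetical stand-ins for the real components.
from datetime import datetime

def screen_examinee(card, device, server=None):
    student = card.read_profile()                # steps 1-2: authenticate
    data = device.acquire()                      # step 3: measure
    record = {'student': student, 'data': data,  # step 4: combine + timestamp
              'measured_at': datetime.now().isoformat()}
    if server is not None:
        server.insert(record)                    # step 6: networked upload
    else:
        card.write_temporary(record)             # step 5: IC card as storage
    return record
```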
We have already developed a mechanism for building
dedicated sub-systems that interface several measuring
devices, such as a blood pressure monitoring device, a
vision analyzer, and devices for measuring height and
weight [9]. We will therefore be able to extend the
above examples relatively easily into other types of
dedicated sub-systems for other kinds of physical
measuring devices.
Secondly, we explain another facility of our
e-Healthcare system, which realizes health monitoring
against the database server through the campus
network of the university. The right-hand side of
Figure 1 illustrates a scheme of health monitoring, i.e.,
health information retrieval from the database by a
university doctor through the campus LAN.
The dedicated sub-system of our e-Healthcare system
described above accumulates formatted records that
combine the measured data and the examinee's
information with timestamps. Every student (i.e.,
examinee) therefore has his/her health records in the
database, together with when-and-where information
about the Routine Physical Examination or periodic
health checks. Not only the relevant students
themselves but also the university doctors/nurses can
investigate or trace the history and changes of health
information as a time series.
The relevant facility of our system can generate several kinds of graphs based on time-series analysis in order to illustrate the history of and changes in health information. University doctors/nurses can thus give professional medical suggestions and/or judgments for specific students relatively easily. Moreover, students can retrieve their own health information from the database and understand its history and changes by themselves.
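As an illustration of such a graph-generating facility, the following is a minimal matplotlib sketch that plots one student's measurements as a time series; the values and field layout are invented for illustration and are not the system's actual schema.

    from datetime import date
    import matplotlib.pyplot as plt

    # Hypothetical yearly examination records: (date, systolic mmHg).
    records = [
        (date(2011, 4, 6), 118),
        (date(2012, 4, 4), 124),
        (date(2013, 4, 3), 131),
    ]

    dates, systolic = zip(*records)
    plt.plot(dates, systolic, marker="o")
    plt.xlabel("Examination date")
    plt.ylabel("Systolic blood pressure (mmHg)")
    plt.title("History/changes of health information in time series")
    plt.show()

A steadily rising curve such as this is exactly the kind of change that a graphical interface is meant to expose at a glance.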
One of the merits of employing a graphical interface for retrieving health information is that a specific, irregular change in health information can be spotted efficiently, even at a glance. Students can recognize such a case very easily through our e-Healthcare system by themselves and then consult their university doctors and/or nurses with the evidence from our system. Doctors and/or nurses in the university can also perform periodic monitoring of the graphical retrieval results (shown in Figure 3). Whenever they notice suspicious changes (phenomena) through this mechanism, they can send e-mail or otherwise contact the relevant student about his/her health situation.
Figure 3. Ubiquitous health monitoring by doctors/nurses through smart devices (e.g. iPad)
With our e-Healthcare system, the university's health education described above can be introduced and managed effectively and efficiently. The next section demonstrates the system's real application in our university, gives a brief evaluation of the system and reports our challenge of expanding the system to a potential market in the future.
4 APPLICATION of e-HEALTHCARE
SYSTEM and ITS EVALUATION
This section first demonstrates the real application of our e-Healthcare system to the Routine Physical Examination in our university. It then gives a brief evaluation of the system and mentions our new challenge of expanding the system to a potential market in the near future.
4.1 Quantitative Evaluation of the System
Reducing the data collection time and the time-consuming error correction involved in obtaining data from physical measuring devices is an effective solution. Our e-Healthcare system achieves a clear advantage over these problems, which had occurred in past Routine Physical Examinations.
The system can also recognize whether the relevant students have finished all parts of their Routine Physical Examination. Such a procedure is very useful and attractive for the students themselves as well as for the Health Center staff, because many problems and tasks arose in the past when the staff discovered, after all the work of the routine examination had been closed, that some students had left without finishing their examination menus; such students then had to take the remaining examinations on another date.
It is necessary and important to describe some problems that occurred during the Routine Physical Examination in 2011. Such problems are itemized as follows:
- Preparing our e-Healthcare system for the Routine Physical Examination in the gymnasium caused many tasks and troubles, since we suffered from a lack of electricity and LAN connectivity.
- The Routine Physical Examination needed technical staff who could resolve computer-related tasks and troubles in a short time and, moreover, keep the computers and physical measuring devices working correctly.
- The gymnasium may not be a suitable space for the physical examination, because it is a very complicated and constrained environment in which to set up and remove the e-Healthcare system in a short period. (A pre-Routine Physical Examination allows the e-Healthcare system to work in the same place for a relatively long time.)
Figure 4. Photo of utilization of our system for pre-Routine
Physical Examination in 2012
Figure 4 shows a photo of the real use of our health screening system for the pre-Routine Physical Examination in 2012. In the case of Figure 4, students of a sports club wanted to receive the pre-Routine Physical Examination because of their schedules. Many other students also received the pre-Routine Physical Examination during a few weeks in 2012, for reasons of their own such as convenience or dislike of the crowded normal examination. Our system can provide such pre- or post-Routine Physical Examinations for examinees, and this evidence can be considered another sign of the effectiveness of our system, as a qualitative evaluation.
Frankly speaking, using our system for the pre-Routine Physical Examination was a trial solution for avoiding the problems that occurred in the 2011 routine examination described above. The idea of using the system for the pre-Routine Physical Examination is not only to reduce problems in the routine examination but also to improve the consultation rate of students who want to receive the physical examination as part of the university's health education.
Our e-Healthcare system can generate a warning alert when the measured blood pressure is outside the above allowance. There were 10 such warnings while measuring 300 students. This was very useful for the Health Center to detect misjudged blood pressure measurements by means of real-time warning and correction (namely, re-measurement and rewriting on the spot).
Second, we report a quantitative evaluation of our e-Healthcare system during the pre-Routine Physical Examination in 2012. The evaluation of our e-Healthcare system in 2012 is good and remarkable because an evident improvement of the consultation rate from the previous situation to the current one has been confirmed. Figure 5 shows the consultation rates of the Routine Physical Examination in 2011 and 2012.
Figure 5. Comparison of Consultation Rates in 2011 and
2012
Our university has six faculties: Education, Law, Economics, Medicine, Engineering and Agriculture. Only the Faculty of Medicine has students from the 1st to the 6th grade; the others have students from the 1st to the 4th grade.
For example, on the Y-axis of Figure 5, "Edu2011" denotes the consultation rate for the Faculty of Education in 2011, and "Agr2012" denotes the consultation rate for the Faculty of Agriculture in 2012. The X-axis of the figure shows the consultation rate as a percentage (max. 100%). Naturally, in Figure 5, a consultation rate of 100% (a bar reaching 100) means that all the students of the relevant grade of the specific faculty participated in the Routine Physical Examination in 2011 or 2012. For some faculties, the consultation rate increased by more than 20.0% from the previous year.
Table 1 shows the improvement of the consultation rate from 2011 to 2012.

Table 1. Improvement of Consultation Rate from 2011 to 2012
(NB: only the Faculty of Medicine has 5th- and 6th-grade students.)
For example, the row "Eco (2012/11)" shows the improvement of the consultation rate for the Faculty of Economics from 2011 to 2012: the values 1.00, 2.08, 1.12 and 1.06 are the improvements from 2011 to 2012 for the 1st, 2nd, 3rd and 4th grades, respectively.
Strictly speaking, pure improvements (namely, values greater than 1.00) cannot be confirmed for all grades and faculties. The first comers, namely all 1st-grade students of every faculty, must participate in the Routine Physical Examination under our university's regulations, so every 1st-grade improvement is exactly 1.00.
As shown in Figure 5, the average consultation rate of the Routine Physical Examination has been about 80% in recent years. We therefore probably cannot expect a drastic improvement of the consultation rate now or in the near future, but remarkable improvements of the consultation rate can be confirmed for the 2nd grade of the Faculties of Law and Economics from 2011 to 2012. Indeed, we can recognize these improvements as effects of using our e-Healthcare system for the pre-Routine Physical Examination in 2012: the Health Center has investigated how many students of each grade of each faculty participate in such an examination, and more 2nd-grade students of Law and Economics than others received health screening in the pre-Routine Physical Examination.
At the same time, we must recognize specific declines from 2011 to 2012 in the 2nd grade of the Faculty of Engineering (namely, 0.76) and the 3rd grade of the Faculty of Law (namely, 0.87). The Health Center has also reported that the relevant students hardly participated in the pre-Routine Physical Examination, probably because they did not know about such an examination in 2012.
4.2 Statistical Testing for Improvement of
Consultation Rate from 2011 to 2012
A statistical test (Student's t-test) has been applied to the improvement of the consultation rate from 2011 to 2012 as follows:
1. All 1st-grade students of each faculty always participate in the Routine Physical Examination, so all 1st-grade cases in Figure 5 are excluded; the targets are the 2nd to 6th grades only.
2. Student's t-statistic (t) is calculated as

   t = (X̄ − μ) / √(s²/n),

   where X̄ is the average of the samples, μ is the target expectation, s² is the sample variance and n is the number of samples.

3. From Table 1, X̄ = 1.11, the standard deviation is s = √s² = 0.3361 and n = 3 × 6 + 2 = 20 (three grades, 2nd to 4th, for each of the six faculties, plus the 5th and 6th grades of Medicine), so √n = 4.47214. If the target expectation μ is assumed to be 1.00, the value of Student's t-statistic is

   t = (1.11 − 1.00) / (0.3361 / 4.47214) = 1.52701.
4. The number of samples is n, so the number of degrees of freedom is n − 1 = 20 − 1 = 19. From statistical tables of the Student's t-distribution, we obtain the following values for the two-sided 5% point (α = 0.025), the two-sided 10% point (α = 0.05) and the two-sided 20% point (α = 0.1), where α is the significance level of probability:

   t_0.025(19) = 2.093, t_0.05(19) = 1.729 and t_0.1(19) = 1.328.
5. A two-sided t-test has been performed as follows. The null hypothesis H0 is that the consultation rate did not improve from 2011 to 2012 (μ = 1.00), i.e. that using our e-Healthcare system for the pre-Routine Physical Examination brought no improvement; rejecting H0 therefore confirms that the improvement of the consultation rate from 2011 to 2012 is statistically significant.
- Concerning the two-sided 5% point (significance level α = 0.025): t = 1.52701 < t_0.025(19) = 2.093, so H0 cannot be rejected.
- Concerning the two-sided 10% point (α = 0.05): t = 1.52701 < t_0.05(19) = 1.729, so H0 cannot be rejected.
- Concerning the two-sided 20% point (α = 0.1): t = 1.52701 > t_0.1(19) = 1.328, so H0 can be rejected.
Therefore, the improvement of the consultation rate cannot be confirmed statistically at significance level α = 0.025 or α = 0.05. Meanwhile, we may confirm that the improvement of the consultation rate from 2011 to 2012 is statistically significant at the two-sided 20% significance level.
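For reproducibility, the test above can be written as a short Python sketch; the summary statistics are those reported above, and scipy's Student-t quantiles stand in for the printed statistical tables (the computed t differs slightly from the reported 1.52701 because the mean is rounded to 1.11).

    from math import sqrt
    from scipy.stats import t as t_dist

    # Summary statistics reported above (improvements for grades 2-6).
    x_bar, s, n, mu = 1.11, 0.3361, 20, 1.00

    t = (x_bar - mu) / (s / sqrt(n))          # about 1.46 with the rounded mean
    for alpha in (0.05, 0.10, 0.20):          # two-sided significance levels
        crit = t_dist.ppf(1 - alpha / 2, df=n - 1)
        print(f"alpha={alpha}: t={t:.3f}, critical={crit:.3f}, "
              f"reject H0: {t > crit}")

Only at the two-sided 20% level does the statistic exceed the critical value, matching the conclusion above.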
As a result, we recognize that many of the relevant students participated in the specified pre-Routine Physical Examination in order to bypass the crowded normal (i.e. regular) Routine Physical Examination. We have thus confirmed that using our e-Healthcare system for the Routine Physical Examination, and especially for health screening in the pre-Routine Physical Examination, can be statistically effective.
5 CONCLUSION
This paper has described our e-Healthcare system for Student Health Education at Kagawa University. With our system, not only can students receive efficient health education, but doctors/nurses can also provide fruitful medical suggestions and/or judgments through health monitoring. The Routine Physical Examination is improved through the reduction of time-consuming tasks as well as the avoidance of frequently occurring human errors. The characteristics of our e-Healthcare system are summarized as follows:
- Employment of a modular system architecture for easy maintenance, effective interoperability and system expandability: our e-Healthcare system includes dedicated sub-systems with several kinds of physical measuring devices, an IC card reader/writer and controlling PC(s), a database server and facilities for user-side health monitoring and retrieval. Each dedicated sub-system can be tailored relatively easily to other kinds of physical measuring devices, and additional facilities can be built into the system for the sake of system expansion.
- Utilization of the student IC card for user identification: employing the student IC card for user (i.e. examinee) identification during the physical examination allows our e-Healthcare system to reduce and shorten the total time needed to register and authenticate examinees for the physical screening test.
- Realization of a mechanism to interface measuring devices with computers: to interface the physical measuring devices with the controlling PC in a dedicated sub-system, each device is connected to the computer's I/O ports (such as USB), driver-like software is written to handle its interrupts, and the device is manipulated by the controlling PC. With automatic control of the measuring devices by the PC, our e-Healthcare system avoids occasional human errors as well as mistakes in writing down measured data.
- Visualization of the history/changes of health information in time series: the specific facility of our system generates graphic information based on time-series analysis in order to illustrate the history of and changes in health information. University doctors/nurses can relatively easily give professional medical suggestions and/or judgments by means of such a facility, and students can also see the graphical history/changes of their own health information.
REFERENCES
1. B. W. T. Rohm and C. E. T. Rohm, Jr.: A vision of the e-healthcare era. International Journal of Healthcare Technology and Management, Vol. 4, No. 1-2, 87--92 (2002).
2. P. C. K. Hung: Towards a Privacy Access Control Model for e-Healthcare Services. In: Third Annual Conference on Privacy, Security and Trust (PST2005), 4 pages (2005).
3. A. Mukherjee and J. McGinnis: E-healthcare: an analysis of key themes in research. International Journal of Pharmaceutical and Healthcare Marketing, Vol. 1, No. 4, 349--363 (2007).
4. Z. Li, L. Sun and E. C. Ifeachor: Challenges of Mobile ad-hoc Grids and their Applications in e-Healthcare. In: 2nd International Conference on Computational Intelligence in Medicine and Healthcare (CIMED2005), 8 pages (2005).
5. M. Chen, S. Gonzalez, V. Leung, Q. Zhang and M. Li: A 2G-RFID-based e-healthcare system. IEEE Wireless Communications, Vol. 17, No. 1, 37--43 (2010).
6. Y. Imai, Y. Hori, H. Kamano, E. Miyazaki and T. Takai: A Trial Design of e-Healthcare Management Scheme with IC-Based Student ID Card, Automatic Health Examination System and Campus Information Network. In: H. Cherifi, J. M. Zain and E. El-Qawasmeh (Eds.) The International Conference on Digital Information and Communication Technology and its Applications (DICTAP2011), Part I, CCIS 166, pp. 728--740 (2011).
7. F. Kart, G. Miao, L. E. Moser and P. M. Melliar-Smith: A Distributed e-Healthcare System Based on the Service Oriented Architecture. In: IEEE International Conference on Services Computing (SCC 2007), pp. 652--659 (2007).
8. F. Kart, L. E. Moser and P. M. Melliar-Smith: Building a Distributed E-Healthcare System Using SOA. IT Professional, Vol. 10, No. 2, 24--30 (2008).
9. E. Miyazaki, H. Kamano, D. Yamakata, Y. Imai and Y. Hori: Development of an Automatic Health Screening System for Student Health Education of University. In: W. V. Siricharoen, M. Toahchoodee and H. Cherifi (Eds.) 2nd International Conference on Digital Information and Communication Technology and its Applications (DICTAP2012), pp. 421--427 (2012).
Encapsulation of Real Time Communications over Restrictive Access Networks
Rolando Herrero
Northeastern University
360 Huntington Avenue, Boston, MA 02115, USA
r.herrero@northeastern.edu
ABSTRACT
The firewall traversal mechanisms used to overcome the problems that impact Real Time Communications (RTC) in restrictive access networks introduce, among other impediments, excessive latency, which results in degraded media quality when frames are dropped by playout buffers running at the application layer. In this paper we summarize the main mechanisms used to traverse firewalls through a comparative analysis that concludes with an extensive overview of media encapsulation technologies. One drawback of tunneling, however, is that it involves stream-based transport, which is incompatible with datagram-based media. In this scenario, we use state-of-the-art quality metrics to evaluate how both speech and video are ultimately affected by the latency and loss typical of mobile networks. Moreover, in order to mitigate these negative effects in the context of tunneled traffic, we introduce and assess two separate methods that are optionally applied on top of regular stream-based encapsulation.
KEYWORDS
RTC, Tunneling, Encapsulation, Firewall Traversal,
Security
1 INTRODUCTION
RTC mechanisms used for the transmission of both speech and video are an integral part of the backbone of IP Multimedia Subsystem (IMS) networks [1], which rely on different technologies intended to provide reliable and secure real-time delivery of media. One of the most important methods used to accomplish these goals is the Internet Engineering Task Force (IETF) Real-time Transport Protocol (RTP) [2], an application layer protocol typically running on top of the User Datagram Protocol (UDP). This protocol provides some minimal sequence and timing control but lacks data integrity protection. Note that because media is time sensitive, data reliability schemes like those that exist on top of the Transmission Control Protocol (TCP) introduce latency constraints that make them less effective.
Firewalls introduce intentional and non-intentional limitations that usually prevent UDP-based traffic from traversing them. In general, the efficient traversal of firewalls has been widely studied and several mechanisms have been proposed to overcome these limitations. There are basically two approaches: (1) relaying and (2) transport concealment. Relaying involves re-routing media packets so they avoid firewalls, circumventing them through a series of relay servers specially laid out to accomplish this task. Transport concealment, on the other hand, involves changing the transport protocol and port of the media packets so that they are compatible with, and can traverse, specific firewalls. Tunneling is the preferred method of concealing transport: the media packet, including its network and transport layers, is encapsulated on top of a firewall-friendly transport protocol and port, typically TCP. This results in packets that have two sets of transport and network layers: an outer or external one and an inner or internal one.
Figure 1: Traversal Using Relays around NAT

Figure 2: IPSec Tunnel Mode

Figure 3: Enhanced Security Gateway (eSEG)

Each type of firewall traversal method has its own challenges; for example, media relaying requires additional servers distributed throughout the network, which impact the network topology and introduce undesired latency caused by the longer path each packet must traverse. Moreover, since RTP is negotiated by the Session Initiation Protocol (SIP) [3], a signaling protocol that does not suffer the same constraints RTP does, media relaying information must also be negotiated, requiring application layer changes that lead to increased computational complexity throughout the different network elements of the deployment.
Similarly, media encapsulation introduces a different set of problems. Although it is transparent to the application layer (specifically, it does not require additional changes to either the RTP or SIP protocols), it does need support for tunneling client and server functionalities, which are usually transparently integrated into media clients and servers, respectively. By far the biggest problem with encapsulation is the additional latency introduced by TCP transport, which has its origin in two different mechanisms: (1) the Nagle algorithm, an inherent part of TCP used to buffer and group multiple packets before transmission in order to improve the TCP header-to-payload length ratio [4], and (2) the retransmissions triggered by the network packet loss affecting media paths. Note that these retransmissions have a non-linear cumulative effect that ranges from low-impact dead air to dropped calls. To a lesser extent, another negative consequence of tunneling is a higher transmission rate due to the additional overhead introduced by the outer network and transport layer headers.
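The first of these mechanisms can at least be switched off by the tunnel endpoints: the standard TCP_NODELAY socket option disables Nagle buffering so small media frames are written to the wire immediately. A minimal Python sketch follows (the tunnel server endpoint is a placeholder); note that this removes only the buffering delay, not the retransmission delay caused by loss.

    import socket

    HOST, PORT = "tunnel.example.com", 443    # hypothetical tunnel server

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Disable the Nagle algorithm so small frames are not coalesced
    # into larger segments, at the cost of extra header overhead.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    sock.connect((HOST, PORT))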
The remainder of the paper is organized as follows: brief descriptions of the media relaying and media encapsulation mechanisms are presented in Sections 2 and 3, respectively. In Section 4 the experimental framework is introduced. Comparative results obtained by applying network impairments to the framework and computing media quality scores are given in Section 5. Finally, conclusions are provided in Section 6.
2 MEDIA RELAYING
Traversal Using Relays around NAT (TURN) is one of the most well-known and established mechanisms for media relaying intended to overcome the problems introduced by firewall traversal [5]. TURN allows a client behind a firewall to request that a TURN server act as a relay. The client can arrange for the server to relay packets to and from certain other hosts and can control aspects of how the relaying is done. The client does this by obtaining an IP address and port on the server, called the relayed transport address. When a peer sends a packet to the relayed transport address, the server relays the packet to the client. When the client sends a data packet to the server, the server relays it to the appropriate peer using the relayed transport address as the source. TURN is an extension to the Session Traversal Utilities for NAT (STUN) protocol [6]. Most, though not all, TURN messages are STUN-formatted messages.
Figure 1 shows a typical TURN scheme; the TURN client and the TURN server are separated by a firewall, with the client on its private side and the server on its public side. The client sends TURN messages to the server from a transport address called the client address. Clients typically learn the TURN server address via configuration. Since the client is behind a firewall, the server sees packets from the client as coming from an address on the firewall itself. This address is known as the client's server-reflexive address. In general, packets sent by the server to this latter address will be forwarded by the firewall to the client address. The client uses TURN commands to create and manipulate an allocation, which is a data structure on the server. This data structure contains, among other things, the relayed address for the allocation. The relayed address is the address on the server that peers can use to have the server relay data to the client. An allocation is uniquely identified by its relayed address.
Once an allocation is created, the client can send application data to the server along with an indication of the peer to which the data is to be sent, and the server will relay this data to the appropriate peer. The client sends the application data to the server inside a TURN message; at the server, the data is extracted from the TURN message and sent to the peer in a datagram. In the reverse direction, a peer can send application data in a datagram to the relayed address for the allocation; the server will then encapsulate this data inside a TURN message and send it to the client along with an indication of which peer sent the data. Since the TURN message always contains an indication of which peer the client is communicating with, the client can use a single allocation to communicate with multiple peers.
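As a rough illustration of the protocol, the sketch below builds the first message of that exchange, a TURN Allocate request asking for a UDP relay (RFC 5766); the server address is a placeholder, and a real exchange would continue with long-term-credential authentication after the server's initial 401 response.

    import os
    import socket
    import struct

    MAGIC_COOKIE = 0x2112A442        # fixed STUN magic cookie
    ALLOCATE_REQUEST = 0x0003        # TURN Allocate method, request class
    REQUESTED_TRANSPORT = 0x0019     # attribute type; value 17 selects UDP

    def allocate_request():
        txn_id = os.urandom(12)      # 96-bit transaction ID
        # REQUESTED-TRANSPORT: protocol byte plus three reserved bytes.
        attr = struct.pack("!HHB3x", REQUESTED_TRANSPORT, 4, 17)
        header = struct.pack("!HHI", ALLOCATE_REQUEST, len(attr), MAGIC_COOKIE)
        return header + txn_id + attr

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(allocate_request(), ("turn.example.com", 3478))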
International Journal of Digital Information and Wireless Communications (IJDIWC) 6(3): 173-183
The Society of Digital Information and Wireless Communications, 2016 ISSN: 2225-658X (Online); ISSN 2412-6551 (Print)
When the peer is itself behind a firewall, the client must identify the peer using its server-reflexive address rather than its peer address. Each
allocation on the server belongs to a single client
and has exactly one relayed address that is used
only by that allocation. Therefore, if a packet
arrives at a relayed address on the server, the
server knows for which client the data is intended.
3 MEDIA ENCAPSULATION
There are many media encapsulation mechanisms, among the most popular of which is Internet Protocol Security (IPSec), which, when operating in tunnel mode, is typically used to create Virtual Private Networks (VPN) [7]. Figure 2 shows an example of regular IPSec encapsulation, where the IP Encapsulating Security Payload (ESP) layer is used to provide both encryption and authentication [8]. The main problem is that restrictive firewalls typically block IPSec traffic. This limitation is, however, overcome by the Enhanced Security Gateway (eSEG), which supports tunneling of IMS services within a TCP encapsulation designed to carry IPSec through restrictive firewalls. Figure 3 shows an example of eSEG encapsulation, where ESP tunnel mode packets are sent over TCP by means of TPKT framing [9]. The drawback of this approach is that, because an outer or exterior TCP transport is involved, induced latency due to retransmissions can seriously affect the overall media quality.
Figure 5: TSCF CM
Figure 4: FTT-IMS

3GPP Technical Specification 24.322 defines the Enhanced Firewall Traversal Function (EFTF), which relies on the Firewall Traversal Tunnel to IP Network of IMS (FTT-IMS) protocol [10]. Essentially, a TCP connection is created and Transport Layer Security (TLS) is used to encrypt and encapsulate all inner traffic. The tunnel client and the tunnel server implement client and server Dynamic Host Configuration Protocol (DHCP) endpoints, respectively. As shown in Figure 4, once an inner or internal IP address is assigned, it is used as the source address of all traffic originated at the client. Note that firewall traversal is performed by means of the friendlier and more permissive outer TCP transport. This approach makes no difference between reliable high-latency inner data traffic and non-reliable low-latency inner media traffic, and as in the eSEG case it is subject to the inefficiencies that result from transmitting media over a TCP stream.
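Conceptually, the client side of such a tunnel is just a TLS-wrapped TCP connection carrying framed inner packets; the following is a minimal sketch under that assumption (hypothetical server, with framing and DHCP-style address assignment omitted).

    import socket
    import ssl

    HOST, PORT = "ftt.example.com", 443       # hypothetical tunnel server

    context = ssl.create_default_context()
    raw = socket.create_connection((HOST, PORT))
    tls = context.wrap_socket(raw, server_hostname=HOST)

    # Every inner packet, signaling or media, is written to the one
    # encrypted stream, which is why TCP's impairments affect media too.
    inner_packet = b"\x45..."                 # an encapsulated inner IP packet
    tls.sendall(inner_packet)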
Figure 6: TSCF Inner Traffic
The 3GPP Tunneled Services Control Function (TSCF) is an attempt to overcome the limitations introduced by the previous approaches: an outer TCP-based transport is complemented by a series of mechanisms intended to mitigate the negative effects of stream transport applied to low-latency inner media traffic [11]. TSCF defines Control Messages (CM), shown in Figure 5, as the standard method for tunnel creation, termination and maintenance. CMs are transmitted on top of TLS as a way to accomplish security, and each CM includes a number of fields, specifically: (1) a version used for negotiation, (2) a type needed to signal tunnel actions, (3) a tunnel id (TID) intended to uniquely identify the tunnel, (4) a sequence number used to keep track of CM requests and responses and (5) a variable number of Type-Length-Value (TLV) parameters. Regular RTP media and SIP signaling traffic is also sent on top of TLS, as shown in Figure 6.

A common characteristic of media tunneling is that media is negotiated and internal IP addresses are exchanged during call establishment. In a mobile environment, transitions between networks cause the outer layer, and specifically its network addresses, to change continuously. These outer layer changes must not cause the inner layer addresses to change, because that would force media addresses to be renegotiated via signaling, introducing latency and potentially other more serious impairments, like dead air, that negatively affect the overall quality of the communication.
TSCF introduces a keep-alive mechanism that guarantees that the tunnel is kept functional at all times and that both parties, client as well as server, are synchronized and aware of the tunnel status even when no traffic is transmitted. This mechanism is independent of outer layer keep-alive methods like the one provided by TCP. The idea is to contemplate all cases, including simple lightweight TCP implementations that fail to implement basic keep-alive functionality.
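The following sketch shows the general shape of such a client-side keep-alive loop; the CM encoding and the interval are placeholders, not values from the 3GPP specification.

    import socket
    import threading

    KEEPALIVE_INTERVAL = 10.0      # seconds; an assumed, not normative, value
    CM_KEEP_ALIVE = b"\x05"        # placeholder CM type byte, illustrative only

    def keepalive_loop(tls_sock, stop: threading.Event):
        # Periodically send a CM Keep Alive; if the server stops answering,
        # flag the tunnel as lost so it can later be restored via its TID.
        tls_sock.settimeout(KEEPALIVE_INTERVAL)
        while not stop.is_set():
            tls_sock.sendall(CM_KEEP_ALIVE)
            try:
                tls_sock.recv(1)               # expect a keep-alive response
            except socket.timeout:
                stop.set()                     # connectivity lost: release
            stop.wait(KEEPALIVE_INTERVAL)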
If mobility or network impairments detected by means of keep-alives cause the outer layer to be renegotiated, the TSCF persistence mechanism guarantees that the inner layer parameters are transparently and efficiently reprovisioned, as shown in Figure 7. First, the TCP connection is created, including TLS security. Then tunnel parameters are negotiated, and a TID as well as an IP address are assigned by means of a CM Configuration Request/Response exchange. Once the tunnel is created, if connectivity is lost, no response to CM Keep Alive messages is received and eventually both the client and server sides release the associated resources. Simultaneously, the tunnel server stores the tunnel information for that specific TID, and the tunnel client attempts to restore the tunnel, this time issuing a CM Configuration Request that includes the TID as a parameter. When received, this CM triggers the server to retrieve the information and provision the client accordingly.
Figure 7: TSCF Tunnel Persistence
As TSCF is intended for media transport, it natively supports a Forward Error Correction (FEC) mechanism that provides multipath transmission of frames in a redundant fashion, minimizing the overall latency that results from outer TCP retransmissions in lossy network scenarios. Specifically, this mechanism takes advantage of the fact that most modern mobile devices incorporate multiple network interfaces (e.g. WiFi and LTE) that can be used to transmit traffic simultaneously through many possible paths. Two FEC modes are possible: (1) fan-out, shown in Figure 8, where frames are simultaneously sent over multiple tunnels, and (2) load balancing, shown in Figure 9, where each frame is sent over either the main tunnel or the redundant one depending on whether its frame number is even or odd. Both methods share the same set-up procedure: once the main tunnel is created, a CM Service Request is used to reserve a redundant tunnel. The server responds with the TID of the redundant tunnel, which is used, in turn, by a new client to establish it. When the session is to be terminated, both tunnels, the original and the redundant one, are simultaneously terminated by means of standard tunnel release procedures.
Figure 8: TSCF FEC Fan Out
FEC, however, has the negative effect of a higher transmission rate, which, depending on bandwidth availability and network topology, can result in dropped packets and reduced media quality. TSCF provides an additional mechanism, called Dynamic Datagram Tunnels (DDT): given a main TCP-based tunnel where both signaling and media are encapsulated, a secondary UDP-based tunnel is started such that, when successfully negotiated, all media traffic is transported through it. Of course, successful negotiation of a UDP-based tunnel implies that the firewalls between the client and the server allow datagram traffic; since this is not always possible, DDT is highly dependent on the configuration of the restrictive access networks. Figure 10 shows DDT, specifically its negotiation as well as the signaling and media encapsulation occurring in the main stream-based (TID#1) and datagram-based (TID#2) tunnels, respectively. In general, FEC and DDT provide an extra line of defense to guarantee the quality of the encapsulated media. In the following sections, the performance improvements due to these mechanisms are extensively analyzed.
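The resulting routing decision is simple; a hedged sketch, again with assumed tunnel objects, is shown below.

    def route_packet(packet, is_media, tcp_tunnel, udp_tunnel=None):
        # DDT: media prefers the datagram tunnel (TID#2) once negotiated;
        # signaling, and media before/without DDT, stays on the
        # stream-based tunnel (TID#1).
        if is_media and udp_tunnel is not None:
            udp_tunnel.send(packet)
        else:
            tcp_tunnel.send(packet)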
Figure 9: TSCF FEC Load Balancing
4 EXPERIMENTAL FRAMEWORK
In order to evaluate the effects of encapsulation, we introduce a scenario that involves a set of clients and tools [12] modified to support the experimental framework shown in Figure 11. To this end, a media reference, namely a speech or video sequence, is encoded and packetized through playback and subjected to controlled network impairments responsible for packet loss and latency before entering the Media over IP (MoIP) cloud either as (1) clear or (2) encapsulated traffic. On the receiving side, media is decapsulated when coming through a tunnel, depacketized, decoded and recorded in a file that, together with the reference, is used to obtain offline media quality scores by means of well-known algorithms like PESQ and PEVQ for speech and video, respectively.
Figure 10: TSCF Dynamic Datagram Tunnels
The media references are (1) a 60-second speech sequence with 50% silence evenly distributed and (2) a 15-second color video sequence with a 352 × 288 Common Intermediate Format (CIF) resolution recorded at 15 frames per second (fps).

For this framework, we consider a group of high-bit-rate (HBR) and low-bit-rate (LBR) speech codecs as well as video codecs typically used in MoIP scenarios. In general, any speech codec that provides a transmission rate below 32 Kbps is considered LBR, and any one providing a rate above this threshold is considered HBR. In addition, speech codecs can be either waveform or parametric, depending upon whether they preserve the shape of the original speech wave they are encoding. In most cases HBR codecs are waveform and LBR codecs are parametric. In this experimental framework, the following speech codecs are considered: (1) G.711 µ-law, (2) G.729A, (3) AMR-NB, (4) EVRC, (5) AMR-WB and (6) Opus. G.711 is a narrowband (8 KHz sampling rate) HBR codec that preserves the speech waveform through non-linear compansion at the cost of an increased transmission rate [13]. G.729A is a narrowband parametric LBR codec that operates at 8 Kbps and relies on linear prediction and prediction error encoding [14]. AMR-NB is also a narrowband LBR codec that provides a wide range of compression rates at different quality levels [15]. EVRC provides narrowband speech compression at three different rates [16]. AMR-WB is the wideband (16 KHz sampling rate) version of AMR-NB and provides multiple rates of operation [17]. Opus is an LBR codec that supports both narrowband and wideband scenarios, a wide range of compression rates and very low latency [18]. In this paper the codecs are negotiated to operate at the following rates: G.711 at 64 Kbps, G.729A at 8 Kbps, AMR-NB at 7.95 Kbps, EVRC at 8.55 Kbps, AMR-WB at 8.85 Kbps and Opus at 8 Kbps. Speech quality evaluation is performed by means of the well-known PESQ algorithm, standardized under P.862. This mechanism involves a Degradation Category Rating (DCR) test where synthetic speech is contrasted against the reference in order to obtain a score between 1 (bad) and 5 (excellent) [19]. Although PESQ typically operates on narrowband codecs, a wideband version of the algorithm called PESQ-WB is used for those codecs that involve a sampling rate of 16 KHz or above. In addition, the following video codecs are considered: (1) H.263 and (2) H.264. H.263 is a legacy video codec that relies on intra- and inter-frame encoding, including motion estimation and several scalability modes [20]. H.264, on the other hand, represents the evolution of H.263 and mainly consists of two layers: (1) the Video Coding Layer (VCL), which provides, among other things, inter- and intra-frame prediction, and (2) the Network Abstraction Layer (NAL), which is used to packetize VCL data [21].
Figure 11: Experimental Framework
Table 1: Clear/Untunneled
(PESQ scores: G.711 µ-law, G.729A, AMR-NB, EVRC; PESQ-WB scores: AMR-WB, Opus; PEVQ scores: H.263, H.264)

p (%) | Latency (ms) | G.711 µ-law | G.729A | AMR-NB | EVRC | AMR-WB | Opus | H.263 | H.264
    2 |           25 |        3.76 |   3.21 |   3.05 | 3.13 |   3.12 | 3.05 |  2.75 |  3.05
    2 |           50 |        3.53 |   3.09 |   2.90 | 3.03 |   3.08 | 2.97 |  2.70 |  2.89
    2 |          150 |        3.18 |   2.69 |   2.63 | 2.64 |   2.71 | 2.72 |  2.35 |  2.57
    5 |           25 |        3.46 |   3.04 |   2.78 | 2.94 |   2.96 | 2.95 |  2.64 |  2.77
    5 |           50 |        3.39 |   2.93 |   2.70 | 2.83 |   2.84 | 2.88 |  2.45 |  2.70
    5 |          150 |        3.02 |   2.54 |   2.42 | 2.45 |   2.59 | 2.46 |  2.22 |  2.48
   15 |           25 |        2.75 |   2.35 |   2.19 | 2.25 |   2.32 | 2.31 |  2.01 |  2.21
   15 |           50 |        2.56 |   2.22 |   2.18 | 2.16 |   2.31 | 2.21 |  1.91 |  2.08
   15 |          150 |        2.31 |   2.00 |   1.89 | 1.96 |   1.96 | 1.90 |  1.73 |  1.88
Table 2: Encapsulated
(PESQ scores: G.711 µ-law, G.729A, AMR-NB, EVRC; PESQ-WB scores: AMR-WB, Opus; PEVQ scores: H.263, H.264)

p (%) | Latency (ms) | G.711 µ-law | G.729A | AMR-NB | EVRC | AMR-WB | Opus | H.263 | H.264
    2 |           25 |        2.90 |   2.59 |   2.44 | 2.49 |   2.51 | 2.52 |  2.15 |  2.44
    2 |           50 |        2.94 |   2.43 |   2.30 | 2.35 |   2.47 | 2.46 |  2.10 |  2.40
    2 |          150 |        2.53 |   2.23 |   2.03 | 2.08 |   2.17 | 2.14 |  1.87 |  2.03
    5 |           25 |        2.77 |   2.36 |   2.24 | 2.32 |   2.36 | 2.27 |  2.10 |  2.21
    5 |           50 |        2.73 |   2.36 |   2.16 | 2.31 |   2.37 | 2.26 |  2.05 |  2.15
    5 |          150 |        2.34 |   2.07 |   1.89 | 1.97 |   2.08 | 2.02 |  1.79 |  1.92
   15 |           25 |        2.16 |   1.83 |   1.72 | 1.83 |   1.82 | 1.80 |  1.63 |  1.79
   15 |           50 |        2.13 |   1.81 |   1.67 | 1.78 |   1.85 | 1.73 |  1.60 |  1.68
   15 |          150 |        1.86 |   1.58 |   1.47 | 1.53 |   1.62 | 1.52 |  1.34 |  1.48
Table 3: Encapsulated with FEC
(PESQ scores: G.711 µ-law, G.729A, AMR-NB, EVRC; PESQ-WB scores: AMR-WB, Opus; PEVQ scores: H.263, H.264)

p (%) | Latency (ms) | G.711 µ-law | G.729A | AMR-NB | EVRC | AMR-WB | Opus | H.263 | H.264
    2 |           25 |        3.76 |   3.33 |   3.07 | 3.17 |   3.18 | 3.24 |  2.81 |  3.03
    2 |           50 |        3.59 |   3.19 |   2.93 | 3.03 |   3.12 | 3.02 |  2.77 |  2.96
    2 |          150 |        3.27 |   2.78 |   2.69 | 2.68 |   2.77 | 2.74 |  2.38 |  2.66
    5 |           25 |        3.53 |   3.03 |   2.84 | 3.03 |   3.01 | 3.04 |  2.67 |  2.95
    5 |           50 |        3.38 |   3.04 |   2.85 | 2.84 |   3.02 | 2.83 |  2.55 |  2.82
    5 |          150 |        3.07 |   2.61 |   2.43 | 2.50 |   2.59 | 2.50 |  2.31 |  2.42
   15 |           25 |        2.72 |   2.36 |   2.28 | 2.29 |   2.42 | 2.36 |  2.06 |  2.30
   15 |           50 |        2.73 |   2.30 |   2.17 | 2.23 |   2.36 | 2.26 |  1.97 |  2.22
   15 |          150 |        2.32 |   2.08 |   1.90 | 2.03 |   2.06 | 1.97 |  1.75 |  1.92
Table 4: Encapsulated with DDT
(PESQ scores: G.711 µ-law, G.729A, AMR-NB, EVRC; PESQ-WB scores: AMR-WB, Opus; PEVQ scores: H.263, H.264)

p (%) | Latency (ms) | G.711 µ-law | G.729A | AMR-NB | EVRC | AMR-WB | Opus | H.263 | H.264
    2 |           25 |        3.65 |   3.08 |   2.95 | 3.09 |   3.14 | 3.04 |  2.67 |  2.96
    2 |           50 |        3.57 |   3.13 |   2.90 | 3.00 |   3.12 | 3.03 |  2.66 |  2.85
    2 |          150 |        3.07 |   2.69 |   2.61 | 2.59 |   2.69 | 2.59 |  2.36 |  2.50
    5 |           25 |        3.38 |   3.00 |   2.81 | 2.89 |   2.95 | 2.86 |  2.50 |  2.75
    5 |           50 |        3.28 |   2.83 |   2.75 | 2.72 |   2.85 | 2.73 |  2.48 |  2.70
    5 |          150 |        2.89 |   2.57 |   2.43 | 2.44 |   2.59 | 2.48 |  2.23 |  2.35
   15 |           25 |        2.69 |   2.34 |   2.15 | 2.21 |   2.30 | 2.20 |  2.02 |  2.17
   15 |           50 |        2.53 |   2.24 |   2.12 | 2.13 |   2.24 | 2.12 |  1.96 |  2.15
   15 |          150 |        2.31 |   1.97 |   1.84 | 1.91 |   1.94 | 1.96 |  1.70 |  1.88
Similar to the speech case, video quality is evaluated by means of the PEVQ algorithm, which is standardized as J.247 and involves a DCR test where the decoded video is compared against the full reference to obtain a score between 1 (bad) and 5 (excellent) [22]. Since video codecs typically exhibit variable transmission rates, in order to provide a consistent and fair comparison both video codecs are configured at a fixed transmission rate of 128 Kbps. It is critical to mention that because PEVQ scores, as opposed to PESQ ones, are highly dependent on the video sequence used as reference, relative comparison between codecs is more relevant than absolute evaluation of scores. In the following section, the codecs under study have their performance compared under two possible scenarios: (1) with and (2) without encapsulation.
5 COMPARATIVE ANALYSIS
In this section the effect of both, clear and
encapsulated traffic, are evaluated and more
specifically the following transport test cases are
considered; (1) clear/untunneled, (2) encapsulated,
(3) encapsulated with FEC and (4) encapsulated
with DDT. In addition network impairments are
incorporated to this testing; (1) 2%, (2) 5% and (3)
15% packet loss (p) as well as (1) 25 ms, (2) 50
ms and (3) 150 ms latency (). In order to provide
a complete analysis, this comparison assumes a
permissive firewall that allows traversal of all
transport types. It can be seen that more restrictive
access networks prevent datagram based traffic
and cause clear/untunneled and DDT scenarios to
fail. Under these conditions stream based
encapsulation, relying or not on FEC, is the only
acceptable solution. Note that encapsulation with
FEC implies that traffic is fanned out through
three independent redundant tunnels affected by
the same network packet loss and latency. All
codecs have different transmission rates (i.e.
G.711 µ-law at 64 Kbps vs G.729 at 8 Kbps),
different inter packet duration (i.e. 20 milliseconds
International Journal of Digital Information and Wireless Communications (IJDIWC) 6(3): 173-183
The Society of Digital Information and Wireless Communications, 2016 ISSN: 2225-658X (Online); ISSN 2412-6551 (Print)
G.729 vs 30 milliseconds G.723.1) and different
Discontinued Transmission (DTX) sensitivities
that make their direct comparison fairly irrelevant.
The goal of this paper is to compare the overall
effects of the aforementioned mechanisms when
applied to each of the speech and video codecs
independently. As previously mentioned
standardized scores are used to this end; (1) PESQ
scores apply to narrowband speech codecs like
G.711 µ-law, G.729A, AMR-NB and EVRC, (2)
PESQ-WB scores apply to wideband speech
codecs like AMR-WB as well as Opus and (3)
PEVQ scores apply to video codecs like H.263
and H.264. Tables 1, 2, 3 and 4 show the
performance results obtained by evaluating the
codecs when the scenarios described above are
implemented.
6 CONCLUSIONS
Common to all tables, among the narrowband codecs G.711 µ-law always provides the best quality, based on the fact that it has a transmission rate an order of magnitude higher than those of G.729A, AMR-NB and EVRC. These latter codecs, consequently, exhibit levels of quality consistent with the linear prediction techniques they rely upon. The wideband codecs, AMR-WB and Opus, also show similar quality levels that comply with their similar transmission rates. On the video front, because of the improvements associated with H.264, this codec exhibits quality that is consistently superior to that of H.263 at the same transmission rate.

When analyzing performance, regular clear (untunneled) datagram media traffic is naturally affected by loss and latency, which result in playout buffers skipping missing frames and causing impairments that negatively affect quality. As expected, the larger the loss and the latency, the more negative the effect on the quality score. On the other hand, regular stream-based encapsulation guarantees, through TCP retransmissions, that no inner datagram media frames are lost; however, it introduces extra latency that causes frames to arrive too late for the playout buffer to play them. The playout buffer is typically dynamic and automatically adjusts itself according to the network latency; however, a latency value of 150 milliseconds is the threshold that most buffers support, as higher values degrade the overall user experience. Quality is significantly worse (around 20% on average) when encapsulation is used for transmission as opposed to plain clear traffic. Again, under very restrictive networks clear traffic is not allowed, so bad quality is better than no quality at all.
In order to improve the scores of encapsulated media, FEC by means of multiple tunnels is a feasible solution that provides quality similar to clear traffic transport for all network restriction scenarios, even those that prevent clear traffic from traversing the network. FEC in the context of stream-based encapsulation ideally requires the availability of alternative paths for media to flow. On the other hand, the cost of FEC is a higher transmission rate due to the multipath transmission of traffic. An additional technique that can be used to obtain better media quality is DDT, which encapsulates time-sensitive media in a datagram-based tunnel. This mechanism exhibits performance slightly inferior to that of clear traffic, mostly due to the overhead of the DDT negotiation, which causes traffic to traverse the stream-based tunnel until the datagram one is fully established. Since DDT is contingent on the network allowing datagram traffic traversal, when this is not possible media traffic falls back to regular stream-based encapsulation.

As opposed to clear media, which cannot traverse highly restrictive networks, encapsulated traffic, regardless of its nature, can always go through firewalled networks and dynamically take advantage of FEC or DDT depending on network resource availability. When network loss and latency are high enough to affect the user experience, (1) if multipath traversal is available then FEC provides the best solution and (2) if not, DDT support is the next best alternative.
REFERENCES
1. 3GPP TS 23.107: Technical specification group services and system aspects; Quality of Service (QoS) concept and architecture (release 9), v9.0.0, Dec. 2009.
2. Schulzrinne, H., Casner, S., Frederick, R., Jacobson, V.: RTP: A Transport Protocol for Real-Time Applications, RFC 3550 (Internet Standard), July 2003. Updated by RFCs 5506, 5761, 6051, 6222.
3. Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston, A., Peterson, J., Sparks, R., Handley, M., Schooler, E.: SIP: Session Initiation Protocol, RFC 3261 (Proposed Standard), June 2002.
4. Minshall, G., Saito, Y., Mogul, J. C., Verghese, B.: Application performance pitfalls and TCP's Nagle algorithm, SIGMETRICS Perform. Eval. Rev., vol. 27, no. 4, pp. 36--44, Mar. 2000.
5. Mahy, R., Matthews, P., Rosenberg, J.: Traversal Using Relays around NAT (TURN), RFC 5766 (Proposed Standard), 2010.
6. Rosenberg, J., Mahy, R., Matthews, P., Wing, D.: Session Traversal Utilities for NAT (STUN), RFC 5389 (Proposed Standard), 2008.
7. Kent, S., Seo, K.: Security Architecture for the Internet Protocol, RFC 4301 (Proposed Standard), December 2005.
8. Kent, S.: IP Encapsulating Security Payload (ESP), RFC 4303 (Proposed Standard), December 2005.
9. Pouffary, Y., Young, A.: ISO Transport Service on top of TCP (ITOT), RFC 2126 (Proposed Standard), March 1997.
10. 3GPP TS 24.322: Tunneling of IP Multimedia Subsystem (IMS) services over restrictive access networks; Stage 3, 3rd Generation Partnership Project (3GPP), June 2015.
11. 3GPP TR 33.830: Feasibility study on IMS firewall traversal, 3rd Generation Partnership Project (3GPP), January 2014.
12. Ecotronics: Kapanga Softphone, http://www.kapanga.net.
13. ITU-T G.711: Pulse Code Modulation (PCM) of voice frequencies, Tech. Rep. G.711, International Telecommunication Union, Geneva, 2006.
14. Salami, R., Laflamme, C., Bessette, B., Adoul, J.: Description of ITU-T recommendation G.729 annex A: Reduced complexity 8 kbit/s CS-ACELP codec, in Proceedings of the 1997 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '97), Volume 2, Washington, DC, USA, 1997, p. 775, IEEE Computer Society.
15. 3GPP TS 26.071: Mandatory speech codec speech processing functions; AMR speech codec; general description, 3rd Generation Partnership Project, 2008.
16. 3GPP2 C.S0014-A: Enhanced variable rate codec, speech service option 3 for wideband spread spectrum digital systems, 3rd Generation Partnership Project 2, 2004.
17. 3GPP TS 26.190: Speech codec speech processing functions; adaptive multi-rate - wideband (AMR-WB) speech codec; transcoding functions, 3rd Generation Partnership Project, 2008.
18. Valin, J.-M., Vos, K., Terriberry, T.: Definition of the Opus Audio Codec, RFC 6716 (Proposed Standard), Sept. 2012.
19. ITU-T P.862.2: Wideband extension to Recommendation P.862 for the assessment of wideband telephone networks and speech codecs, Tech. Rep., International Telecommunication Union, Geneva, Switzerland, Nov. 2007.
20. ITU-T H.263: Video coding for low bit rate communication, Tech. Rep. H.263, International Telecommunication Union, Geneva, 2005.
21. ITU-T H.264: Advanced video coding for generic audiovisual services, Tech. Rep. H.264, International Telecommunication Union, Geneva, 2014.
22. ITU-T J.247: Objective perceptual multimedia video quality measurement in the presence of a full reference, Tech. Rep. J.247, International Telecommunication Union, Geneva, 2008.
Development of a Web-based Proximity Based Media Sharing Application
Erol Ozan, Ph.D.
East Carolina University
College of Engineering and Technology
Department of Technology Systems
Greenville, North Carolina 27858, USA
ozang@ecu.edu
ABSTRACT
This article reports the development of Vissou, a location-based web application that enables media recording and sharing among users who are in close proximity to each other. The application facilitates the automated hand-over of the recorded media files from one user to another. There are many social networking applications and web sites that provide digital media sharing and editing functionality. What differentiates Vissou from other similar applications are the functions and user interface that make it convenient for users to have their images taken by others without any effort on their part. The application detects other users who share the same location and enables media sharing only with users who are close by. The application synergistically integrates a number of open source modules and frameworks to build complex functions. The project constitutes a lightweight solution to a complex design problem.
KEYWORDS
Social networks, location-based media sharing,
multimedia, social media, web application
development, web development
1 INTRODUCTION
There are many social networking applications and web sites that enable digital photography and video sharing and editing. However, none of those systems offers location-based recording of media on one device with automated hand-over of the recorded media files (e.g. photographs, video, audio files) to a second party. In many social circumstances, individuals find it convenient to have their image taken by other people's devices without having to initiate the process themselves.
There are a number of location-based media sharing applications. For example, Piximity [1] allows users to browse feeds that contain images from nearby locations. It is designed to help people keep up with events all around the world, such as sports, concerts and festivals. Another location-based application is Shout [2], which lets users post messages that are shared with those in their immediate vicinity or at longer distances. Geofeedia [3] is another geolocation-based application, which provides the capability to search real-time social feeds by location. It also enables users to monitor the locations providing content that they are interested in. Another application, Bivid [4], allows users to share media files and browse content uploaded by the users around them. An academic application can be found in Geopot [5], which is defined as a cloud-based geolocation data service for mobile applications.
Vissou differs from the existing location-based applications in many ways. In Vissou, media sharing occurs between two users, whereas many other tools are based on sharing images with the public or with broader user groups. With Vissou, the user experience is based on media exchange between two users at a time, and those users are required to share the same geo-location. This is meant to prevent clutter and unwanted content. In addition, the communication between two users is limited to the exchange of photo or video files. There are numerous other tools and applications that enable the exchange of chat messages; Vissou differentiates itself by focusing on the direct connection between users and only on the exchange of media files. One of the main contributions of this project is to reach a unique design configuration that provides utility for users in certain social settings.
2 GENERAL DESCRIPTIONS OF THE
FUNCTIONS
Vissou constitutes a web service that Internet users can access through browsers on desktop or mobile computing devices. The user experience is optimized for smartphones; however, the application is functional on desktop computers as well, and there is no functional difference between the experiences available to mobile and desktop users. The Vissou interface is based on responsive design practices, which ensure compatibility on various platforms including smartphones, tablets and desktop computers. Vissou can be used after completing a standard registration process. The following limitations and rules govern the interaction between users (a sketch of the proximity rule follows the list):
- A user can only send a media file to another user as long as they are in close proximity to each other.
- The recipient of a media file can send media files back to the sender no matter how far she/he is located from the sender.
- Vissou only allows the exchange of media files among users.
- Users must make themselves visible in Vissou in order to see other users in their location.
- Close proximity is defined as a 50 m by 50 m square around a user.
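The paper does not publish the geolocation query itself, so the following Python sketch of the 50 m rule is only illustrative (the production application implements its logic in PHP/MySQL); it uses an equirectangular approximation, which is adequate at this scale.

    import math

    def within_square(lat1, lon1, lat2, lon2, side_m=50.0):
        # True if user 2 falls inside the side_m x side_m square
        # centered on user 1.
        half = side_m / 2.0
        metres_per_deg = 111_320.0          # metres per degree of latitude
        dlat_m = (lat2 - lat1) * metres_per_deg
        dlon_m = (lon2 - lon1) * metres_per_deg * math.cos(math.radians(lat1))
        return abs(dlat_m) <= half and abs(dlon_m) <= half

    # Example: two users a few metres apart in Greenville, NC.
    print(within_square(35.6078, -77.3664, 35.6079, -77.3665))  # True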
A typical media recording and sharing event is described in Figure 1. The user selects a recipient among those listed on the home page; they are the users in her immediate vicinity. To send a media file to the selected user, she clicks on the camera or video recorder icon next to the target user. Next, the smartphone interface activates the camera app, enabling her to capture an image. At this step, a user can instead select an existing image already stored on the device. Note that this stage is slightly different on a desktop or laptop computer: in such cases, the computer displays the file system so the user can select an existing media file to upload.

Once the user captures an image or selects a file, the upload process starts. The file is stored in the server's file system, and the URL of the media file is stored in a database table. The file is accessible only to the recipient, and the new image appears in the recipient's media inbox.
Figure 1. A typical media recording and sharing event in
Vissou
One can envision a number of practical scenarios where Vissou may become useful and enhance social interaction. Several scenarios are provided below to illustrate the use of the system and the proposed method.
2.1 User Having Their Image Captured by
Others
Let us say Alice wants her photo taken by Bob.
Instead of handing over her camera to Bob, she
invites Bob to take her photo using the
application. Bob captures Alice’s photo using
Vissou. The application captures the image, transmits the image file to the Vissou server, and makes the file accessible only to Alice. Using the
application, Alice accesses the image stored on the
secure server location. She may choose to
download, delete, or keep the image on the server.
2.2 Reciprocal Image Taking
In tourist areas and social settings (restaurants, clubs, concerts, museums, etc.), it is common for people to ask strangers to take their photos by handing their camera to them. In many cases, those who ask for a photo reciprocate by taking a picture of the other party.
Vissou can be used to facilitate this process.
2.3 Users Starting Social Interaction via the App
Alice reviews the media files that she has received
and selects the ones she likes. She connects with
some of the users who took her photo by sending
media files to them. These dialogues consist solely of the exchange of media files.
2.4 User Documenting a Part of Her Life (Travel, Wedding, Party, Sports Event, Concert) via the App
Alice keeps her Vissou app active during her
vacation. Her videos and photos are taken by other users as she visits tourist attractions and travels. By the end of her trip, she has accumulated a collection of media documenting the entire trip without spending any time taking photos and videos herself. She also benefits from being photographed from angles and perspectives that would be hard to achieve on her own.
2.5 Professional Recording of Events
Alice is a professional singer and gives concerts in
different venues. It is hard and time-consuming for her to arrange the recording of her events, so she allows other participants to record her concerts. That way, Alice creates a collection of
recordings of her concerts. She can review her
performance. She can use the recorded media in
her promotional and marketing activities.
2.6 Machine-Based User Capturing Media via Others' Devices
The app can be installed on a computing system placed in a museum (or a similar public space), allowing other users to take pictures and video within its vicinity. The recorded imagery can be shared on the museum's web site to promote the museum. In such scenarios, a store, a museum, or a gallery can encourage visitors to take photos and video in their establishment. The organization receives the
images taken by visitors and builds a collection of
media.
3 SOLUTION
The author developed a solution to build Vissou
using various programming languages and
frameworks, including HTML5, CSS, JavaScript, PHP, and MySQL. Vissou is hosted on an Apache
server. PHP 5.3.6 and MySQL 5 were installed on
the server. Vissou also uses the jQuery and Bootstrap frameworks to build a responsive design and an interactive user interface for the application. The user interface relies on three main pages where most of the user interaction occurs.
Those three pages (home, media inbox, and profile
setting) are accessible via icons on top of the
screen. Figure 2 shows the three screenshots
depicting those pages. Once logged in to the system, users arrive at the home page, where they can
make themselves visible in the system. Once visible, a user becomes active in the system: she can see a list of other users in her immediate vicinity, and she can also be seen by others in the same location.
Figure 2. Screenshots of the Main Pages in Vissou (home page, media inbox page, and profile settings)
The media inbox shows the photos and videos received from other users. It allows the deletion of individual files and shows the age of each media file. Users can click on a sender's icon to connect with that user, and media files can be downloaded from this page. The profile settings page contains the functions related to a user's account, such as password changes, profile picture updates, and status updates. A complete list of the functions accessible on each page is shown in Figure 3.
Vissou relies on AJAX methods to pull data from the server. Typically, it uses the XMLHttpRequest object to exchange data with the server, calling the object's open() and send() methods to send a request. Those requests can be processed asynchronously and therefore provide an efficient way to prevent hang-ups and slowdowns.
AJAX lets the browser execute and process other
scripts while waiting for the server to respond to
the HTTP requests. AJAX allows the application
to process the data when it becomes available
without blocking other ongoing tasks.
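The asynchronous pattern described above can be sketched as follows; the endpoint name and the callback are illustrative placeholders rather than Vissou's actual code.

var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function () {
  // Process the response only once it has fully arrived.
  if (xhr.readyState === 4 && xhr.status === 200) {
    updateUserList(xhr.responseText); // hypothetical handler
  }
};
xhr.open("GET", "getusers.php", true); // true = asynchronous request
xhr.send(); // the browser keeps executing other scripts meanwhile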
3.1 Location Detection
Geolocation detection constitutes an important
part of the solution. Vissou relies on the HTML5 Geolocation API (the navigator.geolocation object) to determine the GPS location of users; its getCurrentPosition() method gives developers access to the users' GPS coordinates. With the Geolocation API, web sites can detect users' locations. For privacy reasons, browsers must obtain users' permission before transmitting location data to the server.
Vissou uses Google Maps service to display the
location of the user on a map. That way, users can
verify their location and may choose to refresh
their location by clicking a button.
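A minimal sketch of this flow, using the standard Geolocation API, is shown below; the handler names are assumptions introduced for illustration.

if (navigator.geolocation) {
  navigator.geolocation.getCurrentPosition(
    function (position) {
      // Runs only after the user has granted the browser permission.
      sendLocationToServer(position.coords.latitude,
                           position.coords.longitude); // hypothetical
    },
    function (error) {
      showLocationWarning(error.message); // e.g., permission denied
    }
  );
} else {
  showLocationWarning("Geolocation is not supported by this browser.");
}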
Figure 3. Main Functions Included in Vissou
There will always be location detection attempts
that are not successful. Some users may choose to
disable the geolocation detection on their browsers
or devices. In addition, some browsers and devices
do not support location detection. When the user
location is not detected, Vissou displays a message
(Figure 4) to warn users and give them the option
of testing their location detection function.
Despite all the improvements in GPS sensors and
location detection, there is still room for
improving the accuracy of location readings.
There is an increasing interest in improving the
accuracy of web based location detection services.
Many applications rely on GPS and cell based
location tracking in cellular networks. Alternative
detection techniques are being explored by
researchers. For example, in one of such studies
[6], the scientists developed a prototype
establishing a simple path and location tracking
system within an organization based on its
available infrastructure Wi-Fi network. Their
system uses signal strength and the logs of
previously used access points to approximate the
location in a designated area.
3.2 File Upload Process
Vissou’s file upload module is based on jQuery-
File-Upload, a widget developed by Sebastian
Tschan [7]. jQuery-File-Upload facilitates file selection, progress bars, validation, and previews of images, audio, and video. The solution is based
on jQuery. The upload process can be resumed if
it is interrupted. jQuery-File-Upload can be
integrated and used with a wide variety of server-
side platforms and configurations, including PHP.
It can also work with standard HTML forms. The
author has made a number of modifications to jQuery-File-Upload to make it compatible, and able to communicate, with the front-end and back-end functions of Vissou. jQuery-File-Upload is available under the MIT license at [7]. The file types that Vissou accepts are limited to the following:
mov, avi, mp4, jpeg, png, and gif. Maximum file
size is set to 5 MB. Before storing the images, the
captured digital photos are resized so that the
image height is limited to 800 pixels and the
maximum width is set to 1200 pixels. The media
files are uploaded in small chunks to ensure a
reliable upload process.
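A plausible initialization of the widget with the limits stated above might look as follows; the option names come from the blueimp plugin, but the exact values and endpoint wired into Vissou are assumptions.

$('#fileupload').fileupload({
  url: 'server/php/',       // endpoint backed by UploadHandler.php
  maxChunkSize: 100000,     // upload in small, resumable chunks
  maxFileSize: 5000000,     // 5 MB limit
  acceptFileTypes: /(\.|\/)(mov|avi|mp4|jpe?g|png|gif)$/i,
  imageMaxWidth: 1200,      // resize bounds applied before storage
  imageMaxHeight: 800
});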
3.3 File Storage Strategy
The media files are stored in the file system. File-system storage provides faster access than database-based storage. The files are
assigned random file names, which ensures the
security of the files. The files can only be viewed
by the users who are authorized to view them.
The jQuery-File-Upload widget's UploadHandler.php file contains the PHP code that handles the file upload process. The author has made a number of modifications to this file to make it compatible with Vissou. The modifications allow Vissou to use its own file naming scheme to generate secret file names for media files. The author also inserted code to perform the database operations that store each new media file's URL location in the file system. A secret file name is generated by the following PHP code, which relies on the built-in random number generators of PHP.
$name = "media" . mt_rand() . mt_rand() . mt_rand();
Note that the above PHP code does not modify the file extension; it generates only the portion of the file name that precedes the extension. The
users are allowed to share the secret URL of the
photos and video files. If they choose to do so, the
media files can be accessed by anybody who has
the secret URL address of the individual files.
Figure 4. Geolocation malfunction warning screen
The following PHP code adds a row to the table
that holds the data for each media file that is
stored. Each row contains the username of the user who sent the file, the username of the recipient, the URL of the media file, the GPS coordinates of the media (latitude and longitude), and a timestamp.
$timestp = mktime();
$sql = "INSERT INTO $tbl_name
        (mediataker, mediatarget, mediaurl, gpslat, gpslong, time)
        VALUES ('$mediataker', '$mediatarget', '$mediaurl',
                '$gpslat', '$gpslong', '$timestp')";
3.4 Database Design
Vissou employs five tables to conduct and manage
various activities and tasks. Those tables are the following: members, userdesc, media, survey, and contact. The first two tables contain information about the current members; they store the user name and hashed password along with other relevant details such as the user's current GPS location, activity status, and last login time. Vissou stores the URL data of the stored images and video files in a table,
titled media. Whenever a new video or photo is
captured, a new row is created in the table. The
username of the member who takes the image, the
username of the recipient, and the time of the file
transmittal are captured. Vissou uses a separate table to capture users' feedback about their experience with the application; this table stores their responses to the survey questions, along with the time of survey completion. Another table is dedicated to storing the messages sent by users to Vissou. Instead of using an email address for this communication, Vissou captures users' messages through a database operation.
4 PERFORMANCE EVALUATIONS AND
OBSERVATIONS
Vissou was tested with various devices and
platforms. Overall, the evaluations and tests
indicate a satisfactory user experience on most
commonly used devices and platforms. Effective
testing of web applications is a challenging task as
illustrated by [8]. The test results indicate that
Vissou is compatible with most of the recent
operating systems and device platforms.
There were some issues with older devices. For example, frequent browser crashes were observed on the iPhone 4S and older models, most often during the file upload process. However, this problem also occurs on other web sites that employ complex HTML functions. On Chrome browsers, location detection requires a secure context such as HTTPS. Figure 5 summarizes the browser compatibility of Vissou's location detection function. It has also been noted that the geolocation function fails to work with the native browsers of some older smartphones; on such devices, using Firefox solved the problem.
Vissou’s file upload module is based on jQuery-
File-Upload. The compatibility of jQuery-File-
Upload is shown in Figures 6 and 7.
Browser                          Chrome               IE    Firefox  Safari  Opera
Geolocation enabled for the
indicated versions or higher     5.0 (requires SSL)   9.0   3.5      5.0     16.0

Figure 5. Geolocation detection compatibility of Vissou
Mobile Browser               Chrome                     Native Browser    Safari     Opera Mobile
File Upload Compatibility    on iOS 6+, Android 4.0+    on Android 2.3+   on iOS 6+  12.0+

Figure 6. File upload functional compatibility on mobile platforms [7]
Desktop Browser              Chrome   IE    Firefox  Safari  Opera
File Upload Compatibility    All      6.0   3.0      4.0     11.0

Figure 7. File upload compatibility of Vissou on desktop computers [7]
Security is another area of concern for the design
of all types of media sharing applications. The
issues surrounding secure use of social networks
have been investigated in many studies including
[9]. Most standard security design precautions and
operational best practices are applicable to
Vissou's design concept. For example, it is recommended that Vissou be hosted in a secure context that supports an SSL (Secure Sockets Layer) connection. Vissou allows users to make themselves invisible to other users, and users become invisible automatically after an hour of inactivity. This prevents the unwanted dissemination of a user's GPS location in the system. In addition, a user cannot send a media file to another user unless they are in the same location or have previously exchanged files.
Like most other similar applications, Vissou is
vulnerable to geolocation spoofing. Currently,
there is no feasible method that mitigates the risks
involving this particular vulnerability. However, Vissou's range is short (typically at most 25 meters between two users), so users can visually check their physical surroundings to identify the other users. Such visual confirmation is deemed feasible given the short distance between users.
5 CONCLUSIONS
This paper presents a lightweight solution to a
complex web application design problem. The
proposed solution and the design process that are
explained in this article can be applied to many
other similar design problems. This project
demonstrates the effectiveness of the synergistic
use of open source code, software frameworks,
and modern web application development
techniques in creating location based interactive
applications. The location detection capabilities of today's smartphones provide considerable potential for developing novel applications.
This paper outlines and illustrates the major
technical elements of such undertakings. The web
application described in this paper constitutes an
original alternative to the existing location based
applications in the market. Readers are invited to
try Vissou [10].
REFERENCES

1. Piximity, http://www.piximity.me
2. Shout, http://www.shoutat.us
3. Geofeedia, http://www.geofedia.com
4. Bivid, http://www.bivid.com
5. Lee, D., Liang, S.: A Cloud-based Geolocation Data Service for Mobile Applications. International Journal of Geographical Information Science, Vol. 25, No. 8 (2011)
6. Mingkhwan, A.: WI-FI Tracker: An Organization WI-FI Tracking System. In: Proc. 2006 Canadian Conference on Electrical and Computer Engineering, pp. 231-234, Ottawa, Ont. (2006)
7. Tschan, S.: blueimp/jQuery-File-Upload, https://github.com/blueimp/jQuery-File-Upload
8. Raeisi, M., Dobuneh, N., Jawawi, D., Malakooti, M.: An Effectiveness Test Case Prioritization Technique for Web Application Testing. International Journal of Digital Information and Wireless Communications (IJDIWC), Hong Kong, Vol. 3, No. 4, 451-459 (2013)
9. Mohtasebi, S., Dehghantanha, A.: Defusing the Hazards of Social Network Services. International Journal of Digital Information and Wireless Communications (IJDIWC), Hong Kong, Vol. 1, No. 2, 504-515 (2011)
10. Vissou, www.vissou.com/vissouapp.php
Simulation of Knowledge Sharing in Business Intelligence
Pornpit Wongthongtham 1, Behrang Zadjabbari 2, Hassan Marzooq Naqvi 3
Curtin University, GPO Box U1987, Perth, WA, 6845, Australia 1, 3,
Infocentric, PO Box 596, Collins Street West, Melbourne, Victoria, 8007, Australia 2,
p.wongthongtham@curtin.edu.au 1, behrang.zadjabbari@gmail.com 2,
syed.h.naqvi@postgrad.curtin.edu.au 3
ABSTRACT
Knowledge sharing is one of the most critical elements
in a knowledge-based society. With the huge concentration on communication facilities, there has been a major shift in worldwide access to codified knowledge. Although
communication technologies have made great strides in
the development of instruments for accessing required
knowledge and improving the level of knowledge
sharing, there are still many obstacles which diminish
the effectiveness of knowledge sharing in an
organization or a community. The ability and
willingness of individuals to share both their codified
and uncodified knowledge have emerged as significant
variables in knowledge sharing in an environment
where all people have access to communication
instruments and have the choice of either sharing their
own knowledge or keeping it to themselves. In this
paper, we simulate the key variables of knowledge sharing in a community. The key variables used in a Business
Intelligence Simulation Model (BISiM) are (i)
willingness to share or gain knowledge, (ii) ability to
share or gain knowledge, and (iii) complexity or
transferability of the shared knowledge. BISiM is
developed to report the knowledge sharing level
between members of the simulated community. In
addition, the simulated model is able to calculate and
report the knowledge sharing and knowledge
acquisition levels of each member in addition to the
total knowledge sharing level in the community.
KEYWORDS
Business Intelligence, Knowledge Sharing, Trust,
Ontology, Information presentation
1 INTRODUCTION
A business intelligence system needs to provide
valuable information to managers about
knowledge sharing levels as well as trust levels
within and between different communities. As
worldwide competition is growing, traditional
decision-making applications cannot satisfy the
requirement of new business environments for
effective decisions and more productivity. Most of
the available business intelligence applications are process-oriented and improve the speed and effectiveness of business operations by providing process-driven decision support systems. On
other hand, in a knowledge-based economy, new
generations of business agents have been born,
such as virtual organizations and electronic firms
in digital ecosystems. Digital Ecosystems
transform the traditional, rigorously defined
collaborative environments from centralized or
distributed or hybrid models into an open, flexible,
domain cluster, demand-driven interactive
environment [1]. Digital ecosystems are based on
knowledge; all members in these ecosystems are intelligent, everyone is free to form relationships, connections, and collaborations with other members, and the ecosystems are constructed by knowledge workers. New collaborations rely
greatly on trust between collaborators and the
knowledge that can be shared between them.
Process-based business intelligence applications
may not be able to cover all the requirements of
knowledge-based collaborations and it is
necessary to investigate new requirements and
provide accurate information to decision makers in
modern organizations that are mostly based on
knowledge. Knowledge is rapidly created and just
as rapidly loses its value, so decision makers need
to ensure that their organizations have enough
ability to absorb and share updated knowledge and
use it in their business before their competitors do
so. In this paper, the roles of trust and knowledge
sharing in digital ecosystems are presented. The
business intelligence simulation prototype is
developed as a business intelligence system to
demonstrate measured levels of trust and
knowledge sharing in a dashboard for decision
makers. Business intelligence systems provide the
ability to analyze business information in order to
support and improve management decision
making across a broad range of business activities
[2].
The structure of the paper is as follows. We
describe business intelligence in the next section
followed by digital ecosystem in section 3. In
section 4, we present knowledge sharing including
its variables. In section 5, we present the simulation prototype of knowledge sharing in business intelligence, and in section 6 we conclude.
2 BUSINESS INTELLIGENCE
The business world is moving rapidly and
becoming more complicated. Also, a huge amount
of data is available in the business world and
effective applications are required to manage the
clutter of data and to respond to the needs of
decision makers. Business Intelligence (BI) plays
an increasingly important role in business
operational analysis and decision support [3].
BI refers to the set of software and hardware
solutions (data warehousing, data mining, OLAP,
etc.) which add value to enterprises by providing
new insights about data using sophisticated
analysis tools [4]. BI turns data into timely,
relevant, and meaningful information. It then helps people in higher management to make better and faster decisions. BI systems refer to an important
class of systems for data analysis and reporting. BI
systems help companies to acquire a more
comprehensive knowledge of the factors affecting their business, such as metrics on sales, production, and internal operations, and can help them to make better business decisions. If a
business intelligence system can be successfully
implemented, it can play its due role in four areas: understanding business status, measuring organizational performance, improving stakeholder relationships, and creating profitable opportunities [6]. BI covers a wide range of tools and has three
main components: reporting, data mining, and
predictive analytics. Overall, BI delivers the right
information to the right person at the right time
[7].
Figure 1: Evolution of business intelligence (revised from Chang, 2006).
BI has shifted from the traditional concentration
by businesses on using data purely for repetitive
calculations, monitoring and control to obtaining
knowledge in a form that is suitable for supporting
and enabling business decisions from marketing,
sales, relationship formation, and fraud detection
through to major strategic decisions. Figure 1
shows the evolution of BI. A new generation of BI
is moving from traditional applications such as CRM, SCM, and ERP to Social Business Intelligence (SBI). BI technologies and applications started with business modeling and quality standards in the 1980s and moved to CRM, ERP, and other more process-based applications in the 1990s. Typical BI solutions were focused
mostly on the data stored mainly inside the
company itself. However, in recent years companies have started looking for data beyond their boundaries (i.e., data from the web) in order to provide BI with new kinds of data sources and obtain better analysis dimensions [9]. In this regard, companies are targeting consumers' opinions and reviews as a significant source of feedback on certain products or services. Consequently, listening to the Voice of Customers (VoC) is considered vital for companies, since it affects decision-making and policies. Social media is a rich source for mining customers' opinions.
Dinter and Lorenz [10] highlighted the main
issues related to Social Business Intelligence and
addressed the research agenda for the future.
New dimensions of interactions based more on
information and knowledge analysis are going to
replace the traditional aspects that were more
focused on data analysis. BI in the future will
include, amongst other things, Social Big Data
Analytics, Big Data Visualization,
Trustworthiness, Machine Learning, and
Semantic/Sentiment Analysis. Information democracy and social networks are the most important parts of future business models [11] and of business intelligence. Hence, more investigation is
needed to design and propose trust-based business
intelligence systems and to investigate the role of knowledge sharing in the future of business intelligence. Seah
et al. [12] also reveal that managers need to
consider socio-cultural variables, such as knowledge sharing between business components like suppliers and customers, and also need the collaboration and commitment of those components to implement business intelligence systems. The research presented in this paper focuses on the role of trust and knowledge sharing in BI applications and on organizations' need to improve trust and knowledge sharing between employees to decrease the risk of implementation failure.
3 DIGITAL ECOSYSTEM
A digital ecosystem is a collaborative environment
in which all members feel free to initiate a
relationship with other participants within a virtual
community. Anyone can join any community
except for dangerous communities that damage
ecosystems, and share his/her ideas and
knowledge freely. This is the opposite of traditional ecosystems, where individuals are more dependent on their family, society, cultures, and religions, and where the rules are usually pre-defined and community members have to follow them.
In traditional ecosystems, individuals are not free
enough to share all of their ideas. In this research,
it is assumed that everyone is free to join groups and share his/her ideas without any external pressure, and that community members are neither ordered to collaborate nor forced to follow the rules.
In a traditional ecosystem, an individual’s
behavior can be affected by the rules that each
ecosystem has developed over a long period of
time.
In a leader-based ecosystem, all the community members follow the leader. In traditional
ecosystems, individuals are forced to follow the
leader but, in a digital and free ecosystem,
individuals might not trust the leader and follow
the leader reluctantly. In a digital and free
ecosystem, if any community (a business or an organization) wants to adopt a leadership management style, it is very important that the trust and knowledge sharing levels between members be calculated regularly. This is also applicable
to any business that wants to play a leadership role
in a free ecosystem and in this case, customers’
trust is a key factor in determining whether this
business is accepted as a leader in the market. In a
traditional ecosystem, members are forced to
follow the leader and accept the knowledge that is
shared by the leader even if they are not able to
understand the shared knowledge. In a free
ecosystem, leaders need innovative tools to ensure
that the shared knowledge is transferable to the
majority of the community members and
sometimes to all of them. Also, the complexity of the shared knowledge should be matched to members' competency so that it can be understood by most of the other members.
Another type of ecosystem is the hierarchical ecosystem. Trust and knowledge sharing between members are both vertical and horizontal in these kinds of communities. In traditional hierarchical
ecosystems, knowledge sharing is like a command
from top levels to bottom levels and members are
forced to follow the commands. Although in developed free ecosystems members of hierarchical structures are given the opportunity to express their ideas and suggestions, they still need to follow the rules and commands of higher-level members. This is one of the major disadvantages of these kinds of ecosystems.
However, supporters of this kind of ecosystem believe that hierarchy motivates members to increase their competency and progress towards higher levels. Another key issue
in an ecosystem is community clustering.
Normally, ecosystems are divided into sub-
communities and knowledge sharing occurs
between the members of these sub-communities.
For example, in traditional ecosystems, different
religions have their own sub-communities and
trust between the members of sub-communities is
high.
For sub-communities in an ecosystem, knowledge sharing among members of one specific sub-community is much greater than knowledge sharing between members of two different sub-communities. The rules are normally pre-defined
in these communities by the community founders.
However, in a free ecosystem there are also sub-
communities, although individuals are free to join or exit from them, and these communities encourage participation rather than enforce rules. For example, sport communities and music groups on the Internet encourage people to join them and support their
community. There are also some other styles of
ecosystems such as line and circle. The ecosystem
that is the focus of this research is a free
ecosystem, where everyone can join any legal community, share any knowledge, and refuse anything they do not like, such as an offer of community membership or forced sharing of knowledge.
Traditional business ecosystems are going to
change to digital business ecosystems and this will
change the structure and business elements of the
firms. In digital business ecosystems, decision
makers need access to real and on-time data and
they cannot limit themselves to analyzing previous
data and forecasting the future based on past
events. Hence, process-based business intelligence
applications may not satisfy new requirements and
some new variables have to be considered in
decision making models. One of the key elements
in developing a business in a digital ecosystem is
using new data resources to access on-time data
and create reliable knowledge to use in the
decision-making process. Knowledge is an
organization’s most important competitive
advantage in digital ecosystems, and pioneer organizations need to plan a strategy whose objective is to collect, manage, and put knowledge into action, and to develop knowledge continuously.
Knowledge creation and knowledge sharing are
crucial to organizations in a digital ecosystem. It is
necessary for decision makers to develop and use
knowledge-based business intelligence tools. In new business intelligence tools, the level of knowledge sharing within communities and the trust level between members should be addressed.
As was discussed in the literature, the success of knowledge sharing depends on developing an
effective relationship between transmitter and
receiver of the knowledge. The key variable in
establishing an effective relationship is trust.
Competence- and benevolence-based trust are
important variables in knowledge sharing and
should be considered in the new applications. BI
Simulation (BISiM) is introduced indicating trust
and knowledge sharing as the main variables in
free ecosystems and to success in a competitive
and knowledge based business environment. The
new business intelligence simulation is developed
based on these key variables.
4 KNOWLEDGE SHARING
4.1 Knowledge in Digital Society
As knowledge is becoming increasingly important
in knowledge-based societies, it affects all aspects
of modern societies including business, education,
communication, transport and, most importantly,
the lifestyles of humans. It is believed in many cultures that better educated individuals with a high level of knowledge contribute to faster and more sustainable development; in most countries, people with better education and skills earn more and have more opportunities in the job market than those with low levels of knowledge [13]. Many studies have been
conducted to investigate how knowledge can be
created, managed and shared, and to determine the
best tools to accomplish these tasks in a cost- and
time-efficient manner.
In a post-capitalism society, power comes from
transmitting information to make it productive
[14]. It is estimated that more than 50 per cent of
Gross Domestic Product (GDP) in the major
economies is now knowledge-based [15]. In a
knowledge-based economy, knowledge is a resource just like other resources, such as raw materials, and it is postulated that this input resource will have a greater impact than physical capital in the future [16]. Many social scientists have come
to characterize the world as a knowledge society
and central to this claim is the notion that new
social uses of information, and in particular the
application of scientific knowledge, are
transforming social life in fundamental ways [17].
From the individual’s personal perspective,
knowledge is the main source of progress and
from the business perspective, knowledge helps
organizations to build core competencies and
create more opportunities. Knowledge helps to
find new strategies for increasing the continuous improvement, innovation, and performance of businesses, so as to create sustainable competitive advantages (Johannessen et al. [18]). It should come
as no surprise that the most valuable asset for any
business is the knowledge of its employees. The
focus on knowledge has led to increased attention
on information technology to increase knowledge
exchange between knowledge holders.
Knowledge is a combination of information and a
person’s experience, training and expertise [19]. It
is important to mention that most discussions
within Information Technology and definitions of
knowledge in the literature, begin with data and
information [20]. Data is defined as raw [21],
isolated facts [22] or as the results of observations
[23]. Data would represent numbers, words or
figures that are organized in such a manner as to
produce useful results such as statistics [24]. Data
is a raw product: a set of discrete, objective facts about events, and a collection of any number of required observations on one or more variables [25]. Data has been categorized as structured,
semi-structured, or unstructured. Structured data is
organized in a highly regular way, such as in
tables and relations, where the regularities apply to
all the data in a particular dataset [26]. Semi-
structured data does not have regular structures. It
can be neither stored nor queried easily and
efficiently in relational or object-oriented database
management systems. Unstructured data, such as
text or images, contain information but have no
explicit structuring information, such as tags.
However, these tags may be assigned using manual or automatic techniques, converting the unstructured data to semi-structured data (Gaffar et al. [27]). Data can be changed to information through conceptualization and categorization (Jarke et al. [28]) or when data is placed in a specific meaningful context [23]. Moreover, when data is processed to provide certain useful contexts, it becomes information and can be used in decision-making [29]. Further processing
of information leads to deeper understanding and
represents a reality that is defined as knowledge.
Information becomes knowledge when it is
understood and comprehended at a deeper level as
a result of human mental activity and further
analysis of the information including association
with other data and information (Jarvis, 2000).
Knowledge is defined as a mix of experiences,
values and contextual information that provides a
framework for incorporating new experiences
[30]. Knowledge is the power to act and to make
value producing decisions that add value to the
enterprise [31]. Knowledge is also defined as “the
insights, understandings, and practical know-how
that we all possess -- is the fundamental resource
that allows us to function intelligently” [32].
4.2 Literature on Knowledge Sharing
Knowledge sharing is one of the most critical
elements of effective knowledge processing and
organizations often face difficulties when trying to
encourage knowledge sharing behavior [33]. It has
been estimated that at least $31.5 billion are lost
per year by Fortune 500 companies as a result of
failing to share knowledge [34]. Knowledge
sharing is defined as the process of exchanging
knowledge (skills, experience, and understanding)
among knowledge holders [35]. It refers to the
provision of task information and know-how to
help and collaborate with others to solve problems,
share ideas, or implement policies or procedures
[36]. It is the fundamental means through which
employees can contribute to knowledge
application, innovation, and ultimately the
competitive advantage of the organization [37].
Davenport and Prusak consider knowledge sharing as equivalent to knowledge transfer and sharing amongst members of the organization [38]. This can lead organizations to develop skills and competencies and create sustainable competitive advantage. It is important for companies to be able
to develop skills and competence, increase value,
and sustain competitive advantages due to the
innovation that occurs when people share and
combine their personal knowledge with that of
others [39]. The importance of knowledge sharing
raises the issue of how organizations can
effectively encourage individual knowledge
sharing behavior and what factors enable, promote
or hinder the sharing of knowledge.
Knowledge sharing has been considered in relation
to future reciprocal monetary and non-monetary
benefits [40]. Reciprocal exchange motivates
employees to obtain knowledge and cooperate in
knowledge exchange processes. By exchanging
knowledge over time, employees can obtain
valued resources such as knowledge that increases
their productivity, not by way of hierarchical
authority or contractual obligation, but because the
norm of reciprocity is so strongly upheld [41].
Knowledge sharing can occur in different forms
such as written correspondence, face-to-face
communications or through networking with other
experts, documenting, organizing and capturing
knowledge for others [36]. Face-to-face
communication is a suitable method for
transferring tacit knowledge and written
correspondence is an effective method for
transmitting explicit knowledge. Based on
different types of knowledge and the importance
of each type, different strategies can be applied to
increase knowledge sharing contribution and
encourage community members to share their
knowledge with a system instead of keeping it to
themselves.
However, knowledge sharing definitions in the
literature fail to consider different components of
the knowledge sharing process. The current
definitions of knowledge sharing fail to determine
the role of knowledge in terms of the knowledge
sharing context. An appropriate definition of
knowledge sharing should encompass or reflect
that knowledge sharing by individual A with
individual B originates as a result of their
competence and willingness (benevolence) to
share knowledge in a given context and at a given
point in time. The disadvantages of current
definitions can be listed as:
Knowledge context is not considered in current
definitions of knowledge sharing. Knowledge
sharing between two individuals may be different
in various knowledge domains and it should be
taken into consideration in a knowledge sharing
definition.
The knowledge sharing level is dynamic, and a knowledge sharing definition should address this dynamic nature of knowledge sharing.
In current definitions of knowledge sharing, the
roles of knowledge sender or receiver in the
knowledge sharing process are not addressed.
Available knowledge sharing definitions are more
focused on knowledge exchange rather than
knowledge sharing and do not give a clear
understanding and exact meaning of “sharing”.
The meaning of “sharing” in this research is a
common understanding of knowledge by all
parties that are engaged in the knowledge sharing
process.
Knowledge sharing in this research is defined as:
the transfer and sharing of a particular knowledge
amongst specific members of a community or
organization within a specific time slot, where the members understand the shared knowledge to have a unique meaning.
4.3 Knowledge Sharing Variables
As discussed, knowledge sharing is a key issue in knowledge management and plays a major role in its different processes. In
a knowledge-based economy, organizations have
been forced to take a step back and re-evaluate
their core competencies and ability to innovate and
create new organizational knowledge as a valuable
strategic asset in a modern business environment
[42]. An organization needs to develop ways to
share the created knowledge among employees
who need or will need that particular knowledge
for their normal duties or for future tasks.
Improving the efficiency of knowledge sharing is
the main knowledge management challenge of
organizations. Effective knowledge sharing leads
to a smarter organization. In a smart organization,
all tasks are planned, executed, and checked based
on updated knowledge, including updated strategies, research, and experimental knowledge. Knowledge sharing occurs between
individuals within a team or organizational unit
and teams can be formal or informal. Also, the
sharing of knowledge may be differentiated in
terms of the sharing of explicit knowledge and
tacit knowledge [43]. Different variables have
been discovered by scholars in terms of
knowledge sharing management. The author of [44] identifies
the following variables that reduce knowledge
sharing: low trust, lack of contextual clues,
memory loss, discontinuity in progress toward
goals, inability to voice relevant knowledge,
unwillingness to listen, differences in unit and
culture, specialized languages, national cultures
and languages. Similarly, Barson et al. [45]
indicate the variables of trust, risk, fear of
exploitation and losing power or resources, costs,
technology, culture and rewards. Overall, different
variables can be grouped in three categories:
social, economic and technological. Social
variables relate to social concepts such as trust, culture, willingness to share, and language. Economic variables, such as the cost of sharing knowledge, rewards, and management support, are also important issues. Finally, technological variables, related to creating networks and easy communication, are key issues in knowledge sharing management.
Based on these variables, different theories have
been suggested by scholars to leverage knowledge
sharing in an organization. The most important
theories are the economic exchange theory and the
social exchange theory. From the economic
exchange perspective, it is common to view
knowledge exchange in terms of economic value.
Based on this theory, knowledge transmitter and
knowledge receiver can acquire economic benefits
from the other party. This perspective emphasizes
the importance of motivators such as monetary
incentive, promotion, and educational opportunity
in shaping knowledge-sharing behavior [46]. Here,
an individual is treated as a rational and self-
interested party who may behave in ways to
maximize his or her utility [46] and minimize
costs [47]. Unlike the economic exchange theory,
the social exchange theory has its foundation in strong social relationships. This theory proposes
that individuals believe that the action will be
reciprocated at some time in the future, though the
exact time and nature of the reciprocal act is
unknown and unimportant [48]. The major difference between the social and economic exchange theories is that in social exchange there is no guarantee that the cost invested in sharing knowledge will be returned; individuals simply believe that the other party will reciprocate as expected.
With the social exchange theory, trust is the most
important variable and the knowledge sharing
level can be determined by the trust level of
individuals.
5 BUSINESS INTELLIGENCE SIMULATION MODEL (BISiM)
In the digital economy, the most important
challenges are those of producing and using data,
information, and knowledge. As was discussed,
there is a rise of ultra-large cooperative efforts to
embrace digital ecosystems that transform the
traditional rigorously defined collaborative
environments from centralized or distributed or
hybrid models to an open, flexible, domain cluster,
demand driven interactive cyber space. Following
the vision of ‘creating value by making
connections’, in a digital ecosystem, each digital
species acts for its own benefit and profit by
choosing different strategies (i.e. business
partners, human resources, intelligence models)
for communicating, collaborating, socializing,
contributing and even competing with each other.
There are some key contributors to the success of
the selected strategies in an open and flexible
collaborative environment. Therefore, a central
and pressing research question is related to
maximizing the benefits to members in these
ecosystems and forecasting the overall behavior of
the digital ecosystems, in order to ensure that the digital ecosystem (DES) as a whole can achieve the desired goals (i.e., value creation and increase) that benefit the entire community and all stakeholders. In order to improve knowledge sharing and develop strong relationships between community members, this research designed and implemented BISiM using business intelligence concepts.
The main aim of this simulation is to represent the
development of knowledge sharing in different areas of an organization, including its strategic planning, where “Knowledge Creation” and “Knowledge Sharing” are vital to the organization’s knowledge management process. To encourage
people to share their knowledge and contribute to
decisions, the BISiM projects the level of
Knowledge Sharing within communities and
addresses the trust level between members.
According to the Knowledge Sharing principle,
members rely on an effective relationship between
one another to exchange knowledge, and the key
factor in making an effective relationship is Trust.
Two of the most regularly cited forms of trust -
Competence and Benevolence - are used in this
simulation for knowledge sharing measurement.
Competence-based trust represents the essential capability to share a particular knowledge within a specific time slot, while benevolence-based trust represents the willingness to share that particular knowledge within the same time slot.
The significance of this simulation is twofold.
Firstly, it is one of the few simulators in the world
that provides a visualized and dynamic
demonstration of trust and knowledge sharing in a
complex system such as a digital ecosystem.
Secondly, it is an attempt to create Swarm
Intelligence by creating an ideal knowledge
sharing environment to capture and simulate the
knowledge sharing behavior of species in the
digital world. The result of this simulation can be
applied to different domains such as customer-to-
customer marketing, e-commerce, and social
networking.
5.1 Features of Business Intelligence Simulation
Model
As discussed previously, the simulation represents
knowledge sharing in an organization. It shows
how knowledge sharing depends on the levels of
trust and knowledge. Trust is represented by
competency and benevolence while knowledge is
represented by complexity and transferability. In
the BISiM model, the levels of trust and
knowledge of the individual within the community
can be given manually. Alternatively, the simulator can connect to a database that holds all the values of the trust dimensions and of knowledge complexity and transferability; the levels of the variables are then calculated automatically by the simulator based on the measurement processes presented in [49][50][51]. The simulation starts
with the default values of species number, trust,
knowledge and result. The simulation displays random face expressions, numbers of faces, colors, and connection lines on screen. These reflect
the species’ different levels of benevolence and
transferability within the same communities and
inter-community. The simulation consists of two
main features: a drawing canvas and control panel.
The user uses the control panel to vary the species
number (from 1 to 400) and the levels of
competency, benevolence, complexity and
transferability on a scale of 0 to 9. The result will
be seen in the drawing canvas.
Figure 2: Business Intelligence Simulator Control Panel.
Figure 2 shows the control panel. The Species
number identifies the number of displayed faces.
Individuals in the community rely on an effective
relationship between one another to exchange
knowledge. One of the key factors in creating effective knowledge sharing is Trust. This
simulation demonstrates the level of trust between
members in the community in relation to sharing
knowledge. Two of the most regularly cited forms
of trust that are presented in the prototype are
competence and benevolence. The simulation
calculates the values of both benevolence and
competency levels for the total rate of trust and
rate of knowledge sharing. The benevolence level
or the willingness level is represented by the
smiley face in the drawing canvas. The faces
change according to the level of individuals’
benevolence. A very happy smiley face indicates a
high benevolence level; very sad faces, on the other hand, indicate a low benevolence level. The higher the benevolence level, the greater the capability of individuals to learn new knowledge. The competency level of trust is represented by
shading the faces from a faint shade to a bright shade: faint shading indicates a low level of competency, and bright shading indicates a high level. If the competency level is high, the capability of individuals to learn new knowledge increases.
Knowledge itself is another key issue in knowledge sharing, besides trust. In knowledge sharing, it is vital to measure the complexity and transferability of the knowledge, as both have a direct influence on knowledge sharing. A high knowledge complexity limits the capacity of community members to share their knowledge, whereas a high knowledge transferability increases the capacity of each individual to transfer the knowledge to others within or across the community. The simulation calculates
the value of complexity and transferability for the
total knowledge and the rate of knowledge
sharing. The complexity level of knowledge
depends on the domain knowledge captured in an
ontology that applies to the community. We use
ontology to capture domain knowledge. As
complexity increases, it is more difficult for
individuals to acquire new knowledge. The
simulator uses this complexity value to calculate
the transferability rate. Both complexity and transferability are thus reflected in the straight connection lines between the faces: the thickness of each line depends on the value of transferability, and the greater the transferability, the thicker the line. This means that individuals can share new knowledge more effectively.
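These visual mappings could be rendered on an HTML5 canvas roughly as sketched below; this is an illustrative reconstruction, not the authors' implementation, and the scaling constants are assumptions.

// Draw a connection line whose width grows with transferability (0-9).
function drawLink(ctx, a, b, transferability) {
  ctx.lineWidth = 1 + transferability;
  ctx.beginPath();
  ctx.moveTo(a.x, a.y);
  ctx.lineTo(b.x, b.y);
  ctx.stroke();
}

// Draw a face: competency (0-9) sets the shading alpha (faint to bright),
// benevolence (0-9) bends the mouth from a frown to a smile, and the fill
// color marks the species' domain community (e.g., green = market domain).
function drawSpecies(ctx, s) {
  ctx.globalAlpha = 0.1 + 0.9 * (s.competency / 9);
  ctx.fillStyle = s.domainColor;
  ctx.beginPath();
  ctx.arc(s.x, s.y, 12, 0, 2 * Math.PI);
  ctx.fill();
  ctx.globalAlpha = 1;
  var bend = (s.benevolence / 9 - 0.5) * 10; // downward bulge = smile
  ctx.beginPath();
  ctx.moveTo(s.x - 6, s.y + 4);
  ctx.quadraticCurveTo(s.x, s.y + 4 + bend, s.x + 6, s.y + 4);
  ctx.stroke();
}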
Figure 3: Business Intelligence Simulator Drawing Canvas
Figure 3 shows the drawing canvas, the area where the animated graphical images are displayed. The animated graphical images, such as the smiley faces, show their reactions according to the input values from the control panel. In the drawing canvas, the face animations that demonstrate the relationships in the simulation consist of smiley faces, sad faces, face colors, face color alpha (shading), and connection lines.
Connection lines are the lines that connect species.
The thickness of these lines represents the
knowledge transferability rate at which each
species is able to exchange its knowledge. The
transferability also depends on the benevolence value (the smiley face). If a species has a very high benevolence value, the chance of knowledge transfer is high; thus, the thickness of the line also changes in relation to the benevolence value. Smiley faces with different background colors represent species that belong to different domain knowledge expertise communities. For example,
green faces indicate the individuals are experts in
the market domain. Blue faces indicate the
individuals who have expertise in the finance
domain. Red faces indicate the individuals who are
experts in the management domain, and so on. The outcome of the simulation presents trust, knowledge, and knowledge sharing levels numerically and via animated graphical images, as partially explained above. The knowledge sharing outcome is
indicated by the thickness of the connection line: if
the line is thick, this means the knowledge sharing
rate is high, but if the line is thin, the sharing rate
is low. The calculation of outcome values involves
the values of benevolence, competency,
complexity, and transferability levels.
5.2 Use of Business Intelligence Simulation
Model
For each member in the community, a trust record
folder is created to save results of measured trust
levels. Over a long period of time, the reputation
of each member is an essential knowledge that
forecasts the knowledge sharing level of a new
knowledge. Trust level can be regularly checked
and updated in different knowledge domains and
in the long term an individual’s trust level in each
knowledge domain can be found in the trust level
repositories. Also, the complexity and
transferability of a particular knowledge within a
specific time slot can be calculated. Another key
issue that can be developed in the simulation is the
role of knowledge repetition in decreasing the
complexity of knowledge. Normally, if
information (knowledge) is repeated several times,
its complexity decreases due to the intelligence of
the community members. For example, when
learning a language if a new word is repeated
several times, the receiver eventually learns the
meaning of the word. Figure 4 shows the
relationship between knowledge repetitions and
complexity. Figure 4 demonstrates that complexity
of knowledge reduces over time and the overall
complexity is a function of time.
Figure 4: Relationship between time and complexity.
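One simple way to model this relationship is exponential decay of complexity with repetitions; the decay law and rate below are illustrative assumptions, not the function plotted in Figure 4:

import math

# Illustrative assumption: complexity decays exponentially with each
# repetition of the knowledge; the decay rate is a free parameter and,
# as noted below, would differ from person to person.
def complexity_after(repetitions, c0=1.0, rate=0.3):
    """Complexity after a number of repetitions, starting from c0."""
    return c0 * math.exp(-rate * repetitions)

for r in range(6):
    print(r, round(complexity_after(r), 3))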
Learning rates differ from person to person. Another important issue in business intelligence applications is providing a dashboard for managers to follow real-time situations in different communities. For example, managers need to know their customers' level of trust in their business, or the trust level between their employees and between their marketing staff. Decision makers can monitor the trust and knowledge sharing levels between employees from different departments as well as with customers. Since a reduction in trust level is a negative effect that can reduce business revenue, they can change business strategy or determine the causes of the reduction. As with customers, a reduction in trust between employees can reduce business performance, so decision makers should also follow the trust level between employees.
5.3 Discussion
Experimental tests were conducted with BISiM to simulate knowledge dissemination in a simulated network. The tests examined the role of the variables defined in the knowledge sharing prototype as the main variables in knowledge dissemination. To demonstrate the importance of the willingness and benevolence trust variables in knowledge sharing measurement, members of a simulated network were divided into three groups based on their level of trust in each other: a blue group, a red group, and a green group. Blue group members have a high level of benevolence and competence trust in each other, but their level of trust in members of other groups is low. Similarly, red group members and green group members have a high level of trust in their own group members and a low level of trust in members of other groups. Figure 5 shows the knowledge sharing levels in the simulated network.
Figure 5: Knowledge sharing level in the simulated
network based on trust between members
As seen in Figure 5, the lines between members of the same group are thicker than the lines between members of different groups. Line thickness indicates the level of knowledge sharing between members, and it is clear that a high level of trust leads to a high level of knowledge sharing in a network. This result supports the concept that trust is a key issue in knowledge sharing measurement. To demonstrate the importance of the complexity and transferability of knowledge in knowledge sharing measurement, members from engineering (pink), management (blue), and medical (black) backgrounds were simulated in BISiM. Figure 6 shows the simulation outcomes.
Figure 6: Knowledge sharing level in the simulated network based on knowledge repository of the members
As can be seen in Figure 6, the lines between members with the same knowledge background are thicker, while the level of knowledge sharing between members from different backgrounds is low. This result supports the proposed framework for knowledge sharing measurement and the validity of the knowledge sharing variables.
A dashboard is important in business intelligence applications for handling real-time situations in different communities; for example, managers need to know their customers' level of trust in their business, or the trust level between their employees and between their marketing staff. BISiM provides these kinds of data in a dashboard, as shown in Figure 7.
Figure 7: BISiM dashboard
6 CONCLUSIONS
In this paper, BISiM was introduced to simulate knowledge sharing between individuals. In a digital ecosystem, collaborative environments and traditional centralized, distributed, or hybrid ecosystems are transformed into an open, flexible, domain-clustered, demand-driven interactive cyberspace. It was argued that in a free and open environment, contribution, relationships, and connections are the most important issues, and that trust and knowledge sharing are the most important variables in these kinds of ecosystems. For new ecosystems to be successful, it is necessary to control and improve variables such as trust and knowledge sharing, and decision makers need tools with which to measure and control these variables. BISiM seeks to provide new business intelligence tools that deliver useful knowledge to decision makers in a digital and competitive environment. Managers can follow these variables and define or change their strategies based on their fluctuation. The simulator can be a suitable platform for future business intelligence applications, providing timely and reliable data for decision makers.
KiwiLOD: A Framework to Transform New Zealand Open Data to Linked Open
Data
Rivindu Perera and Parma Nand
Auckland University of Technology
Auckland 1010, New Zealand
{rperera, pnand}@aut.ac.nz
ABSTRACT
In this paper we present a theoretical framework to shift the New Zealand Open Data initiative to the next level by designing a scalable, queryable, and thus more accessible Linked Data cloud, KiwiLOD. The KiwiLOD project will be executed in two stages. In the first stage, a framework will be designed to transform the current open datasets (in comma-separated and table formats) into Linked Data form. In the second stage, Natural Language Processing will be combined with an Information Extraction framework to transform unstructured online and offline texts into Linked Data and make them accessible via the cloud.
KEYWORDS
Open data, linked data, information extraction,
Semantic Web, KiwiLOD
1 INTRODUCTION
The purpose of the open data initiative is to make publicly available the enormous amounts of non-personal data held by various institutions, to enable better decision making in the modern economy. This initiative has been widely embraced by governments, which publish government data in open form so that the general public can access it for decision making. Furthermore, as published open data is in a structured form, users can easily analyze it and use it productively.
Open Government Data (OGD) [1] carries several benefits for both the general public and the government. OGD mainly helps the public to get a clear and transparent view of government processes. For example, if the statistics related to the annual expenses of a particular ministry are published as open data, this helps the public to understand how that ministry manages its expenses. This is appreciated by many political scientists, who claim that such an approach can increase trust among the general public. Using OGD, the public can understand whether the government is performing well, and trace achievements as well as targets that might not have been achieved as promised. On the other hand, by exposing information to the public, the government can expect new ideas from the public. Furthermore, deep analysis of this government data can expose trends which can be valuable for decision makers (e.g., business managers) when planning long-term strategies. In summary, OGD can increase government transparency and public awareness of government processes.
Although governments have adopted and embraced the open data initiative, two major issues are associated with the process. Firstly, structured OGD is published in different formats; secondly, the data encoded in unstructured text is not available as part of the structured open data.
The widely used formats for publishing OGD are comma-separated values (CSV), spreadsheets, HTML tables, and Portable Document Format (PDF) files. The issue that arises when publishing data in multiple formats is that users face problems in linking multiple datasets and initiating a reasoning process. For example, assume that a user needs to find the number of schools located in a particular area and the road traffic in that area over the last year. The user needs to open two datasets to complete this analysis and, in addition, must perform the analysis by manually comparing the data. Obviously, this becomes more complex when performing analyses with multiple datasheets. Therefore, there is a clear need for infrastructure to link the raw data into a
domain-classified entity framework. Much of the data on the web today is unstructured text [2]. This applies to government data as well, where a large quantity of data is stored as natural language text documents, making it difficult for humans to understand quickly and almost impossible for computer systems to process.
The rest of the report is structured as follows. Section 2 provides a brief overview of the New Zealand Open Data initiative. Section 3 offers an essential introduction to Linked Open Data and explains its basic functions. Section 4 is dedicated to the transformation of open data to Linked Data. Section 5 describes Linked Data triple extraction from unstructured text. Section 6 lists some of the research outputs from this study. Section 7 describes future directions of this research, while Section 8 concludes the report with a brief summary.
2 NEW ZEALAND OPEN DATA INITIATIVE
The New Zealand government has also published open data, belonging to multiple domains, following the OGD specification. This section provides an overview of the currently available New Zealand open government datasets [3]. Table 1 shows the properties of selected open government datasets.
Table 1 Sample set of Open Government Data and their properties

Category | Example datasets | Associated agencies | Formats
Agriculture, forestry, and fisheries | Historical livestock production; Production, trade, and consumption of plywood | Ministry for Primary Industries | XLS
Arts, culture, and heritage | Survey of English Language Providers; New Zealand Heritage List | Statistics New Zealand (SNZ); Heritage New Zealand | XLS, CSV
Building, construction, and housing | Dwelling and Household Estimates; Market rent | SNZ; Ministry of Business Innovation and Employment (MBIE) | CSV, HTML
Commerce, trade, and industry | Occupation Classification; Industrial Classification | SNZ | XSL, CSV
Education | Directory of Educational Institutions; Teachers Register | Ministry of Education | CSV, DB
Employment | Benefit Fact Sheets | Ministry of Social Development | CSV
Energy | Coal production statistics; Electricity Market Information; New Zealand Petroleum and Minerals GIS data services | MBIE; Electricity Authority | XSL, CSV
Environment and conservation | Annual growing degree days; New Zealand Lizards Database | Ministry for the Environment; Landcare Research | CSV, DB
Fiscal, tax, and economics | New Zealand Income Survey; Gross Domestic Product; Productivity Statistics; Institutional Sector Accounts statistics | SNZ | CSV, PDF, HTML, XSL
Health | Tobacco Trends | Ministry of Health | XSL
Infrastructure | Internet Service Provider Survey | SNZ | XSL
3 LINKED OPEN DATA
Linked Data is a set of guidelines for linking related structured data on the web [4] [5]. The data is represented in triple form (subject, predicate, object). The subject and predicate in a triple are always IRIs (Internationalized Resource Identifiers); specifying triple elements as IRIs facilitates linking with other triples.
Resource Description Framework (RDF) [6] is the basic representation format for the Semantic Web. An RDF document is a directed graph in which both nodes and edges are tagged with identifiers, with two exceptions. Firstly, RDF allows the encoding of data values as literals; secondly, blank nodes can be introduced in the RDF graph. Blank nodes are not labeled with IRIs and therefore cannot be referred to globally; to refer to them within the context of the document, a node identifier is provided. Figure 1 depicts an example RDF graph.
Figure 1. An example RDF graph
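For illustration, the following is a minimal sketch of building such a graph with the rdflib Python library; the namespace and resource names are hypothetical. It shows IRI-identified nodes, a literal value, and a blank node:

from rdflib import Graph, URIRef, Literal, BNode, Namespace
from rdflib.namespace import RDF, RDFS

# Hypothetical namespace for illustration only.
EX = Namespace("http://example.org/kiwilod/")

g = Graph()
school = URIRef(EX["school/ExampleSchool"])             # subject IRI
g.add((school, RDF.type, EX.School))                    # predicate/object IRIs
g.add((school, RDFS.label, Literal("Example School")))  # literal object
address = BNode()                                       # blank node: no global IRI
g.add((school, EX.address, address))
g.add((address, EX.city, Literal("Auckland")))

print(g.serialize(format="turtle"))                     # rdflib 6+ returns a str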
An ontology supports the organization of Linked Data based on its conceptualization. An ontology can be coded using either OWL (Web Ontology Language) or RDFS (RDF Schema). However, OWL is more expressive than RDFS, which is more suitable for small-scale ontologies.
The technology stack for Linked Data is shown in Figure 2.
Figure 2. Technology stack for Linked Data
4 TRANSFORMING OPEN DATA TO
LINKED OPEN DATA
This section describes the process of transforming Open Data into Linked Open Data. The first step of the process is to build the conceptualization of the New Zealand entities, so that Linked Data triples can be correctly organized under classes. We have designed version 1.0 of the ontology, customized for New Zealand.
4.1 Ontology and Controlled Natural Language
The ontology contains 107 ontology classes (see Figure 3). The ontology can be further expanded and improved with the help of domain experts. However, a main drawback in managing an ontology with non-technical users is that they are not familiar with the terminology of OWL and the associated technical details. Therefore, to reach non-technical domain experts, we provide the OWL in Controlled Natural Language (CNL) format. The CNL format converts the OWL specification of an ontology into a natural language representation using a limited terminology (shown in Figure 4). This can improve non-technical community engagement in expanding the ontology towards a conceptualization of all of the different domains in New Zealand.
Figure 3 Ontology class view
Figure 4 Controlled natural language based description for
ontology specification
4.2 Transforming Data to RDF/XML
New Zealand Open Data is published in four major document formats, all of which are tabular. In addition, many datasets share common specifications. Taking these factors into consideration, we propose a template-based approach to the transformation. Each template is composed in XML and contains the information necessary to identify the triple elements and the data types associated with each object. A sample XML template developed to extract triples from "Schools Directory" CSV files is shown in Figure 5. These templates can be used with other datasets with minimal modifications.
Figure 5 A sample XML template developed to extract
triples
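For illustration, the following is a minimal Python sketch of the template-driven step, with the Figure 5 XML template approximated by a simple column-to-predicate mapping; the namespace, class, column names, and predicates are hypothetical:

import csv
from rdflib import Graph, URIRef, Literal, Namespace
from rdflib.namespace import RDF, XSD

# The real template of Figure 5 is XML; here its content is approximated
# by a dict mapping CSV columns to predicates and datatypes (an assumption).
EX = Namespace("http://example.org/kiwilod/")
TEMPLATE = {
    "class": EX.School,
    "subject_column": "School Name",
    "columns": {
        "Suburb": (EX.suburb, XSD.string),
        "Total Students": (EX.totalStudents, XSD.integer),
    },
}

def csv_to_rdf(path):
    g = Graph()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            # Each subject becomes a URI, since RDF forbids literal subjects.
            name = row[TEMPLATE["subject_column"]].replace(" ", "_")
            subj = URIRef(EX["school/" + name])
            g.add((subj, RDF.type, TEMPLATE["class"]))
            for col, (pred, dtype) in TEMPLATE["columns"].items():
                g.add((subj, pred, Literal(row[col], datatype=dtype)))
    return g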
While extracting triples using the template, the process converts each subject to a URI, because RDF does not allow literal values as the subject of a triple. Literal subjects would make it difficult to link data and would introduce ambiguity when two literal values exist with the same content. The framework has a graphical user interface (GUI) to support RDF file creation and some basic edits (shown in Figure 6). Its purpose is to give domain experts the opportunity to decide the ontology class of each dataset: as current OGD is not classified under classes, it is not possible to assign an ontology class to a dataset without human involvement.
Figure 6 Graphical user interface of the framework
5 TEXT TO LINKED DATA
Although a vast amount of data is available as structured open data, the majority of information is still provided as unstructured text. To build a rich Linked Dataset for New Zealand, there is a clear need to transform this unstructured data into Linked Data triples [7] [8] [9]. This section describes the framework developed to extract triples from unstructured text using Open Information Extraction (OpenIE).
In this phase we mainly focus on web-based text resources related to New Zealand, so that information embedded in such resources can be transformed into triples. These triples can enrich the Linked Data acquired from Open Data. However, we generalize the approach to consider the broad domain of New Zealand-related content, without strictly limiting it to basic government data. Enforcing such a restriction would require pre-determined constraints, which are not implemented as part of this research.
5.1 Text Extraction and Preprocessing
We use an automated web search to identify web pages which contain keywords extracted from seed text resources. These seeds are pre-determined text resources that we have already identified as containing information directly related to New Zealand.
The keyword extraction is based on a rule-based noun phrase chunking method [8], [9]. We first Part-Of-Speech (POS) tag [10] the seed text resource and then apply the predetermined rules to identify the keywords. The phrase chunking rules are shown in Table 2.
Table 2 Phrase chunking rules
NN..    [JJ] [NN, NNP, NNS, NNPS]
NNP..   [JJR] [NN, NNP, NNS, NNPS]
NNS..   [NN, NNP, NNS, NNPS] [NN, NNP, NNS, NNPS]
NNPS..
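An approximation of this step can be sketched with the NLTK library; the chunking grammar below is an assumption inspired by Table 2 (an optional adjective followed by one or more nouns), not the exact rule set:

import nltk  # requires: nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')

# Assumed grammar, not the paper's exact Table 2 rules.
GRAMMAR = r"NP: {<JJ|JJR>?<NN|NNP|NNS|NNPS>+}"

def extract_keywords(text):
    tagged = nltk.pos_tag(nltk.word_tokenize(text))   # POS tagging, cf. [10]
    tree = nltk.RegexpParser(GRAMMAR).parse(tagged)
    return [" ".join(w for w, _ in st.leaves())
            for st in tree.subtrees() if st.label() == "NP"]

print(extract_keywords("The New Zealand government publishes open datasets."))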
Once the text resources are identified, we extract the text using shallow text features. The basic set of features includes average word length, average sentence length, and the absolute number of words. In addition, the local context and a heuristically derived set of features are also used. The local context is the position of the text in a document; identifying it is essential when extracting text from HTML documents, where text is placed in between boilerplate.
5.2. OpenIE
Traditional Information Extraction (IE), or ClosedIE, focuses on extracting relations using resource-expensive linguistic models, hand-tagged data, and hand-crafted rules [2]. In essence, ClosedIE targets relations that are specified in advance using rules. These models generally fail when applied to heterogeneous text resources (i.e., web documents) [3] for several reasons. Rules generated for one domain sometimes conflict with a different domain. Next, tagging must be carried out in all domains; in a large-scale heterogeneous dataset, identifying all domains is a challenge in itself. Finally, if a text resource is added from a completely new domain, ClosedIE fails completely.
To address this limitation of traditional IE models, Open Information Extraction (OpenIE) [11] [12] [13] was introduced. OpenIE utilizes self-supervised learning, in which relation extraction is treated as a fully automatic process that eliminates deep linguistic parsing. According to Wu and Weld [14], the OpenIE process is a function from a document (d) to a set of triples ⟨arg1, rel, arg2⟩, where the args are noun phrases and rel is a textual fragment indicating an implicit semantic relation between the two args. An important feature of OpenIE is that one triple is generated for every relation that appears in the text. However, recent progress in OpenIE has moved beyond this by generating new triples from existing ones through template- and seed-based mechanisms.
When considering a number of documents (D), the complexity of OpenIE is O(D). For the same
document collection and a set of relations (R) specified in advance, ClosedIE has a complexity of O(D*R). In particular, we utilize a clause-based OpenIE model [11] to extract relations. The clauses include 7 basic clause types and 5 extended clause types.
These clauses are identified by dependency-parsing a seed sentence set; each dependency pattern extracted from the parse tree is mapped to a clause type. However, the available clause extractor is not compatible with the recent Stanford dependency parse specification, in which some dependencies are aggregated while others are extended. The parser used for this research therefore includes a clause extractor modified to follow the latest dependency parse specification. These dependencies are then converted to clauses and propositions.
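A toy sketch of clause-style extraction over a dependency parse, in the spirit of ClausIE [11], can be written with the spaCy library; it handles only simple subject-verb-object clauses and returns head words only, so it is a simplification rather than the model used here:

import spacy  # requires: python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")

def extract_triples(text):
    """Toy extractor: (subject, relation, object) head words for simple
    subject-verb-object clauses found in the dependency parse."""
    triples = []
    for sent in nlp(text).sents:
        for tok in sent:
            if tok.pos_ == "VERB":
                subjects = [c for c in tok.children if c.dep_ in ("nsubj", "nsubjpass")]
                objects = [c for c in tok.children if c.dep_ in ("dobj", "obj", "attr")]
                for s in subjects:
                    for o in objects:
                        triples.append((s.text, tok.lemma_, o.text))
    return triples

print(extract_triples("The university hosts the KiwiLOD project."))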
After applying OpenIE, the resulting relations are converted to triple form: arg1 is mapped as the subject with a URI, rel as the predicate, and arg2 as the object. A key factor to consider here is that, except for the subject, it is difficult to automatically identify objects with URIs, because object values cannot be automatically classified as either literals or real-world conceptualized entities. For this we propose crowd-sourced validation.
Figure 7 illustrates the number of triples extracted from the sentence collection. According to the figure, the framework has extracted a reasonable number of triples from the sentences.
Figure 7 Extracted triples for sentence collection
5.3. Triple Storage
The extracted triples can be stored in a graph database to build a scalable Linked Data cloud and open it to the general public. As an additional task, we have imported the RDF triples into a GraphDB Community Edition graph database.
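For illustration, RDF data can be loaded into GraphDB through its RDF4J-compatible REST endpoint; the server URL, repository name, and file name below are hypothetical placeholders:

import requests

# Hypothetical server URL and repository name.
GRAPHDB = "http://localhost:7200"
REPO = "kiwilod"

with open("schools.ttl", "rb") as f:  # hypothetical RDF file
    r = requests.post(
        "%s/repositories/%s/statements" % (GRAPHDB, REPO),
        data=f,
        headers={"Content-Type": "text/turtle"},
    )
r.raise_for_status()  # GraphDB returns 204 No Content on success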
6 CONCLUSION AND FUTURE
DIRECTIONS
The aim of this research was to examine the possibility of developing a Linked Data cloud based on the New Zealand OGD. The initial steps of this research designed an ontology for the New Zealand OGD and a process for transforming existing data into triples using a template-based approach. The later steps of the project investigated a methodology for extracting triples from unstructured data to enhance the triple storage.
The research here uses a limited set of data files and templates to investigate transforming OGD to Linked Data. These experiments and implementations show that the method is viable. Therefore, future work should focus on building templates and transforming existing data to Linked Data, while new data can be stored in the graph database as triples. Another key factor to consider is the representation of statistical data in the Linked Data cloud. Statistical data differs slightly from other data, as it contains archived records of annually collected data. It can be transformed in the same form as the other open data; however, there are alternative ways of organizing this data based on time, category, or other provided factors. The current research did not consider such classifications; this is left as future work.
The process of triple extraction from unstructured text includes a text extraction and preprocessing module in which the boilerplate of web-based text is removed and co-references are resolved. However, with the meta web [15] concept, there is currently a growing trend towards populating web pages with metadata. Future research will focus on improving this module to extract metadata from
webpages and associate it with the extracted text. This metadata can help later in the process to classify the triples into ontology classes and enrich the data with tags.
Triple extraction is based on an OpenIE model which incorporates clauses to extract a large number of relations from the text. However, the current model has some limitations. The main drawback is that the relation extraction process does not support information that is represented in other forms. For example, Wikipedia shows the birthdate of a person in a form such as "Steve Jobs (24-02-1955)"; such phrases do not contain clauses. Therefore, future research will focus on extracting such relations with rule-based methods and trained classifiers.
The current research utilizes GraphDB for triple storage. However, there are several triple stores that could be utilized for this task, and a table comparison of some of them has been compiled. Future research will provide a practical view of using these triple stores through multiple experiments carried out using OGD. Furthermore, we expect to build a web service to automate the process explained in this research and connect it to the triple store. The web service will provide the same set of functionalities through an easy-to-use interface.
REFERENCES
1. Zapilko, B., Mathiak, B. Performing statistical methods
on linked data. In: International Conference on Dublin
Core and Metadata Applications. Copenhagen,
Denmark: European Library Office (2011).
2. Sirmakessis, S. Text Mining and Its Applications:
Results of the NEMIS Launch Conference. 138th ed.
Springer International Publishing (2012).
3. Oh, J. Open Data in New Zealand. University of
Auckland. Auckland, New Zealand (2013).
4. Gerber, D., Ngomo, A. Bootstrapping the Linked Data
Web. In: The 10th International Semantic Web
Conference. Bonn: Springer-Verlag (2011).
5. Ngomo, A., Auer, S., Lehmann, J., Zaveri, A.
Introduction to Linked Data and Its Lifecycle on the
Web. In: 7th International Summer School. Galway,
Ireland: Springer International Publishing (2014).
6. Segaran, T., Evans, C., Taylor, J. Programming the
Semantic Web. vol 54 (2009).
7. Perera, R., Nand, P. A Multi-strategy Approach for
Lexicalizing Linked Open Data. In: 16th International
Conference on Intelligent Text Processing and
Computational Linguistics (CICLing). Springer
International Publishing (2015).
8. Perera, R, Nand, P. The Role of Linked Data in Content
Selection. Trends Artif Intell. (2014).
9. Perera, R, Nand, P. RealText cs - Corpus Based Domain
Independent Content Selection Model. In: 26th IEEE
International Conference on Tools with Artificial
Intelligence. IEEE Press (2014).
10. Manning, C., Bauer, J., Finkel, J., Bethard, S.,
McClosky, D. The Stanford CoreNLP Natural Language
Processing Toolkit. In: The 52nd Annual Meeting of the
Association for Computational Linguistics. Baltimore:
Association for Computational Linguistics (2014).
11. Del Corro, L., Gemulla, R. ClausIE: clause-based open information extraction. In: 22nd International Conference on World Wide Web (WWW). Rio de Janeiro (2013).
12. Mausam, Schmitz, M., Bart, R., Soderland, S., Etzioni, O. Open language learning for information extraction. In: Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Jeju Island: Association for Computational Linguistics (2012).
13. Zhila, A., Gelbukh, A. Comparison of open information
extraction for Spanish and English. In: International
Dialogue Conference. Moscow, Russia: Association for
Computational Linguistics (2013).
14. Wu, F., Weld, D. Open Information Extraction using
Wikipedia. In: 48th Annual Meeting of the Association
for Computational Linguistics. Uppsala, Sweden:
Association for Computational Linguistics (2010).
15. Ceri, S., Bozzon, A. Web Information Retrieval.
Springer International Publishing (2013).
An Efficient Algorithm for Shortest Computer Network Path
Muhammad Asif Mansoor, Taimoor Karamat
Department of Computer Science & Information Technology,
Virtual University of Pakistan,
Lahore, Pakistan
m_asifmansoor@hotmail.com, taimoor.karamat@vu.edu.pk
ABSTRACT
Finding the shortest path in a computer network is always a challenge for network designers, and many algorithms have been developed in the past to solve this problem. One of these is the quantum-classical algorithm, possibly the fastest due to its wave-like properties and its ability to work faster across different operations. An algorithm is usually called efficient if it takes less computation time than other algorithms. One of the popular algorithms for finding the shortest network path is Dijkstra's algorithm. In this paper, an efficient algorithm for finding the shortest network path is presented and compared with two already-developed algorithms: Dijkstra's algorithm and the Dynamic Dijkstra's algorithm. Simulation results are also generated, showing that the proposed algorithm improves performance in finding the shortest network path.
KEYWORDS
Efficient Algorithm, Dijkstra’s Algorithm, Shortest
Network Path, Graph Theory, Computation Time
1. INTRODUCTION
Many efficient algorithms have been developed in the recent past to find the shortest network path. Some of them were extraordinarily fast in exponential terms, and some were fast in polynomial terms. Our proposed algorithm shows some uniqueness compared with other algorithms developed in the past for finding the shortest network path.
In this research paper, an algorithm is proposed to find the shortest network path. We have used Dijkstra's and Dynamic Dijkstra's algorithms for comparison of the shortest network path in the experimental network graph. Our simulation results show that the proposed algorithm performs better than the other algorithms.
This paper is organized into the following sections: the introduction is followed by the related work; then come the sections on the proposed algorithm and its analysis and on implementation and simulation work, followed by the future work and conclusion sections.
2. RELATED WORK
Finding the shortest path in a network is a key area of research, where an efficient algorithm can increase network efficiency and reduce load in any network. Researchers are still proposing new directions to improve network performance, such as decreasing the number of computations needed to find the optimum path, fast-track updating of new routes, and routing table recalculations. In [11] the author proposed an algorithm that reduces calculations by 8% compared to Dijkstra's algorithm: instead of working on the whole network, the proposed algorithm is applied on a subgraph restricted to an upper-bound distance between two nodes, with experimental results for 150 nodes and 176 edges. In [12] it was proposed to first compare a modified matrix with the shortest path values, which makes it a complex solution, whereas in [13] the authors introduced a solution with a 0-1 feature matrix, which makes it difficult to read the shortest path lengths. The aforementioned issues were resolved in [14], where the paths and lengths were found intuitively by growing the shortest path tree.
The proposed algorithm presented in this paper is tested against both the static and the dynamic Dijkstra's algorithm. In the past, many algorithms have been developed [1, 2] to find the best and most efficient network path. One of them is Dijkstra's algorithm,
which is a very popular network graph search algorithm developed to find the shortest network path. When Dijkstra's algorithm is applied [3] to a network graph, it starts from a source vertex s, reaches every vertex reachable from s, and grows a tree structure T. It follows the rule that it begins with the first vertex s, then moves to its closest vertex, then to the next closest vertex, and so on. Dijkstra's algorithm is a greedy graph search algorithm [4] mostly used for finding the shortest network path in graphs where all edges have non-negative weights. Its main advantages [5] are that it is better and faster than other algorithms for computing dense graphs, and it is very useful for finding the shortest paths of an unknown network graph. Its main reported limitation is that it must be combined with other classical algorithms to remain efficient, which motivates hybrid approaches. On the basis of the problems found in Dijkstra's algorithm [6, 7], we propose a new approach for solving such problems, which is discussed and demonstrated via simulation work in subsequent sections.
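For reference, the following is a minimal Python sketch of the classical algorithm described above, using a binary heap and an adjacency-list graph; it is a textbook formulation, not the implementation used in our experiments:

import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source; graph is an adjacency dict
    {node: [(neighbor, weight), ...]} with non-negative weights."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

g = {"s": [("a", 2), ("b", 5)], "a": [("b", 1)], "b": []}
print(dijkstra(g, "s"))                   # {'s': 0, 'a': 2, 'b': 3}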
3. PROPOSED ALGORITHM AND ANALYSIS
A static routing algorithm becomes ineffective when the link status of the network changes, even when the change requires updates from only a small part of the shortest path. This increases the load on the router's CPU, as it requires recomputation of the new shortest path as a whole rather than reuse of the old shortest path. As a result, more router resources are used, and Dijkstra's algorithm takes more time to find the shortest path. Packets may be lost for this reason, and the routing tables are not updated to show the correct network topology information.
By definition, an efficient algorithm is one which takes less time than other algorithms of its class. In this research paper we present a proposed algorithm which reduces the total time taken to find the shortest network path. Both static and dynamic algorithms are applied, and the key constraint measured is the time taken by the proposed algorithm to find the shortest network path.
A static routing algorithm is used to find the shortest path in the network, rather than the dynamic algorithm, when there is a large number of nodes for which the weights are to be computed; in this scenario it is better to use the static routing algorithm rather than the dynamic algorithm, which takes more computational time for each node in the network.
The pseudocode of the proposed algorithm, which is divided into three steps, is as follows:

Step 1: Explore the network structure and find its depth
    D = depth of the entire network    // computed with Depth-First Search (DFS)
    for all nodes do
        find the link whose cost has changed
        use DFS to compute the depth of the link
        if depth of the link < D then
            // the entire network depth is greater than the depth of the single link
            go to Step 2
        else
            go to Step 3
        end if
    end for

Step 2: Perform static routing
    find the shortest path on G = (V, E)    // Dijkstra's algorithm is used
    update the routing table information

Step 3: Perform dynamic routing
    find the shortest path on G = (V, E)    // Dijkstra's algorithm is used
    update the previous nodes' initialization information
    run DFS from a node on the shortest path at time T
    // all previous nodes are updated
    from Q, remove edges with ended nodes and edges from previous nodes
    update the old information in Q; a temporary shortest-path time T is obtained
    while (previous nodes)
        {previous nodes, inc_node} ← Extract(M)
        if some incoming links held by v lie between previous nodes then
            if incoming links for D(i) > inside nodes for D(j) then
                D(i) = D(j)
            end if
        end if
    end while
    update the routing table
The aforementioned procedure is adopted in our algorithm. First, to decide which algorithm to apply to a given network, the entire depth of the network is found using depth-first search. Second, the static routing algorithm is applied for links whose weights are new and whose depth is less than 20% of the network depth. Finally, if some links have new weights at a depth of more than 50%, the dynamic algorithm is used to compute the shortest network path.
In cases where a link weight is new and near an end node, the dynamic algorithm is applied because there is only a small number of nodes for which the weights must be computed. In this case, only the weights of old and affected nodes are recalculated, rather than those of every node.
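To make the selection policy concrete, the following is a hedged sketch of choosing between the static and dynamic branches using the 20% depth threshold from the text; hop_depth() is a BFS stand-in for the DFS depth computation, the dynamic branch is a naive repair that handles only edge-weight decreases rather than our exact routine, and dijkstra() is the function from the earlier sketch:

import heapq
from collections import deque
# dijkstra() is assumed to be the function from the sketch in Section 2.

def hop_depth(graph, source, node):
    """BFS hop depth of a node from the source (a stand-in for the DFS
    depth computation in Step 1)."""
    seen, q = {source: 0}, deque([source])
    while q:
        u = q.popleft()
        for v, _ in graph.get(u, []):
            if v not in seen:
                seen[v] = seen[u] + 1
                q.append(v)
    return seen.get(node, float("inf"))

def dynamic_update(graph, dist, u, v, new_w):
    """Naive incremental repair after the weight of edge (u, v) decreases:
    relax outward from the changed edge only (Step 3 stand-in)."""
    pq = []
    if dist.get(u, float("inf")) + new_w < dist.get(v, float("inf")):
        dist[v] = dist[u] + new_w
        heapq.heappush(pq, (dist[v], v))
    while pq:
        d, x = heapq.heappop(pq)
        if d > dist.get(x, float("inf")):
            continue                      # stale queue entry
        for y, w in graph.get(x, []):
            if d + w < dist.get(y, float("inf")):
                dist[y] = d + w
                heapq.heappush(pq, (d + w, y))
    return dist

def recompute(graph, source, dist, changed_edge, network_depth):
    u, v, new_w = changed_edge
    if hop_depth(graph, source, u) < 0.2 * network_depth:
        return dijkstra(graph, source)    # Step 2: full static recompute
    return dynamic_update(graph, dist, u, v, new_w)   # Step 3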
4. SIMULATION AND RESULTS
In this section, the simulation is carried out and the results are generated. The proposed algorithm was evaluated around a network depth threshold of about 50% by applying the static and dynamic routing algorithms and simulating thresholds from 40% to 60% of the network depth. The simulation results show that 40% of the network depth is the best value for this algorithm, as its computation time is lower than that of the other algorithms.
This simulation work is shown in Figure 1 below, which displays the comparison between the different algorithms.
Figure 1: The Simulation Result with respect to Graph Size
Performance and Simulation Results:
We have evaluated the performance of the proposed algorithm for the shortest network path and compared it with two other algorithms: Dijkstra's algorithm and Dynamic Dijkstra's algorithm.
As input parameters of the simulation, we used the number of nodes in the network, the rate of change of link weights, and the deviation of link weights, with the following values:
Number of nodes in the network = 50, 100, 150, 200, 250, 300
Rate of change of link weights (%) = 100, 200, 300
Link weight deviation = 5, 10, 15
On the basis of the above values, the simulation work is performed to compare three different algorithms, one of which is our proposed algorithm.
Simulation Results:
In the simulation, the computation time for finding the shortest network path is measured for each algorithm. The simulation results comparing our proposed efficient algorithm, Dijkstra's algorithm,
and Dynamic Dijkstra’s algorithm are shown in this
research paper.
Rate of Change for Link Weights:
Figure 2 shows the results for a rate of change of link weights of 100. It can be seen that, at this rate, the computation time of our proposed algorithm is less than that of the other algorithms.
Figure 2: Computation time against network size for a rate of change of link weights of 100
Figure 3 shows the graphs for a rate of change of link weights of 200, and Figure 4 those for a rate of 300. It is clear from these simulation-generated graphs that our proposed algorithm takes less computation time than the other algorithms.
Figure 3: Computation time against network size for a rate of change of link weights of 200
Figure 4: Computation time against network size for a rate of change of link weights of 300
Next, the simulation is performed to compare the deviation of link weights against different values for the efficient algorithm, Dijkstra's algorithm, and Dynamic Dijkstra's algorithm.
Figure 5 below shows the simulation result for a link-weight deviation of 5; again, the computation time of our proposed algorithm is less than that of the other algorithms.
Figure 5: Computation time against network size for a link-weight deviation of 5
The next two simulation graphs show link-weight deviations of 10 and 15. The computation time of our proposed algorithm is less than that of Dijkstra's algorithm and Dynamic Dijkstra's algorithm. The graphs for deviations of 10 and 15 are shown in Figure 6 and Figure 7, respectively, plotting computation time against network size.
Figure 6: Computation time against network size for a link-weight deviation of 10
Figure 7: Computation time against network size for a link-weight deviation of 15
As shown in all of the above simulation results, the computation time of our proposed algorithm for finding the shortest network path is less than that of the other algorithms used in the evaluation, namely Dijkstra's algorithm and Dynamic Dijkstra's algorithm.
It can also be noticed that, as the number of nodes increases in every simulation, the computation time of each algorithm increases and it takes longer to find the shortest network path.
Furthermore, when the rate of change of link weights is low, the computation time grows roughly linearly, but when the rate of change of link weights is higher, the computation time for finding the shortest network path grows nonlinearly.
The simulation results shown in all of the above graphs indicate that our proposed algorithm is better than the algorithms previously developed by other researchers, with the results compared in terms of computation time against network size (number of nodes).
5. CONCLUSION
By applying this algorithm to an experimental network and measuring the total computation time, we have shown that our developed algorithm is an efficient algorithm for finding the shortest network path. This is also demonstrated by the simulation result graphs presented in this research paper. The proposed algorithm was compared with Dijkstra's and Dynamic Dijkstra's algorithms, and it showed better performance than the other algorithms for the shortest network path in terms of computation time.
6. FUTURE WORK
Our proposed algorithm, which finds the shortest network path in less time than other algorithms of the same class, can be extended further to reduce that time even more.
More optimization work is also needed to make this algorithm work more intelligently and find the best shortest path in a given network.
Furthermore, for shortest-path problems in network routing, more network topologies such as star, ring, and tree can be implemented, and for these topologies the search algorithms can be faster than linear-array-type algorithms. More work can also be done on unknown network paths, for which more powerful algorithms need to be developed.
Our proposed algorithm can be further extended in future work into a generalized form that works for every type of network path and finds the best network path with the best computation time.
7. ACKNOWLEDGMENT
We extend special thanks to our Advanced Computer Networks course teacher, Dr. Amir Qayyum, who recommended our research topic and guided us in working on this exciting computer networking research topic.
8. REFERENCES
[1] Taehwan Cho, Kyeongseob Kim, Wanoh Yoon and
Sangbang Choi: A Hybrid Routing Algorithm for an
Efficient Shortest Path Decision in Network Routing, July
2013 :1-5
[2] Huseyin Kusetogullari, Md. Haidar Sharif, Mark Leeson,
Turgay Celik: A Reduced Uncertainty-Based Hybrid
Evolutionary Algorithm for Solving Dynamic Shortest-
Path Routing Problem, February 2012 :1-2
[3] AmmarW. Mohemmed, Nirod Chandra Sahoo: Efficient
Computation of Shortest Paths in Networks Using Particle
Swarm Optimization and Noising Metaheuristics, April
2007 :2-5
[4] Gang Feng, Turgay Korkmaz: A Hybrid Algorithm for
Computing Constrained Shortest Paths, April 2013 :1-3
[5] Mohammad Reza Soltan Aghaei, Zuruati Ahmad
Zukarnain, Ali Mamat, Hishamuddin Zainuddin: A Hybrid
Algorithm for Finding Shortest Path in Network Routing,
June 2009 :3-6
[6] Kirill Levchenko, Geoffrey M. Voelker, Ramamohan
Paturi, Stefan Savage: XL: An Efficient Network Routing
Algorithm, September 2008 :1-7
[7] Sumitha J.: Routing Algorithms in Networks, December
2013 :5-6
[8] K.Rohila, P.Gouthami, Priya M: Dijkstra’s Shortest Path
Algorithm for Road Network, October 2010. :4-8
[9] Andrew V. Goldberg, Eva Tardos and Robert E. Tarjan:
Network Flow Algorithm, August 1990 :9-14
[10] Tayseer S. Atia, Manar Y. Kashmola: A Hybrid Hopfield
Neural Network and Tabu Search Algorithm to Solve
Routing Problem in Communication Network, June 2012
:13-18
[11] Zhen Zhang, Wu Jigang, Xinming Duan: Practical Algorithm for Shortest Path on Transportation Network. School of Computer Science and Software, Tianjin Polytechnic University, Tianjin 300160, China :7-17
[12] Liu Ping, Bai Cuimei: An Improvement of the Shortest Path Based on the Dijkstra Algorithm [J]. Qinghai Normal University, 2008, 1(1) :79-80
[13] Li Guilin: An Improvement of the Dijkstra Algorithm [J]. Development and Application of Computer, 2009, 22(7) :13-14
[14] Ji-Xian Xiao, Fang-Ling Lu: An Improvement of the Shortest Path Algorithm Based on Dijkstra Algorithm. College of Science, Hebei Polytechnic University, Tangshan, China :5-9
The International Journal of Digital Information and Wireless Communications aims to provide a forum
for scientists, engineers, and practitioners to present their latest research results, ideas, developments
and applications in the field of computer architectures, information technology, and mobile
technologies. The IJDIWC publishes issues four (4) times a year and accepts three types of papers as
follows:
1. Research papers: papers that present and discuss the latest and most profound research results in the scope of IJDIWC. Papers should describe new contributions in the scope of IJDIWC and support claims of novelty with citations to the relevant literature.
2. Technical papers: papers that establish a meaningful forum between practitioners and researchers with useful solutions in various fields of digital information and wireless communications, including all kinds of practical applications covering principles, projects, missions, techniques, tools, methods, processes, etc.
3. Review papers: papers that critically analyze past and current research trends in the field.
Manuscripts submitted to IJDIWC should not have been previously published or be under review by any other publication. Original unpublished manuscripts are solicited in the following areas, including but not limited to:
Information Technology Strategies
Information Technology Infrastructure
Information Technology Human Resources
System Development and Implementation
Digital Communications
Technology Developments
Technology Futures
National Policies and Standards
Antenna Systems and Design
Channel Modeling and Propagation
Coding for Wireless Systems
Multiuser and Multiple Access Schemes
Optical Wireless Communications
Resource Allocation over Wireless Networks
Security, Authentication and Cryptography for Wireless Networks
Signal Processing Techniques and Tools
Software and Cognitive Radio
Wireless Traffic and Routing Ad-hoc Networks
Wireless System Architectures and Applications