Focussing on Precision- and Trust-Propagation
in Knowledge Processing Systems
Markus Jäger1, Jussi Nikander2, Stefan Nadschläger1,
Van Quoc Phuong Huynh1, and Josef Küng1
1Institute for Application Oriented Knowledge Processing (FAW)
Faculty of Engineering and Natural Sciences (TNF)
Johannes Kepler University Linz (JKU), Austria
{mjaeger, snadschlaeger, vqphuynh, jkueng}@faw.jku.at
2Natural Resources Institute Finland (LUKE)
Helsinki, Uusimaa, Finland
jussi.nikander@luke.fi
Abstract. In knowledge processing systems, when data and knowledge gathered from several (external) sources are used, the trustworthiness and quality of the information have to be evaluated before processing continues with these values. We address the problem of evaluating and calculating possible trust values by considering established methods from the known literature and recent research. After the calculation, the obtained values have to be processed further, depending on the complexity of the system in which the values are used and needed. Here, the way trust and precision are propagated, and how they are aggregated or fused, is crucial when multiple input values come together in one processing step. We discuss elaborated trust definitions already available, and corresponding options for trust and precision aggregation and propagation in units of knowledge processing.
Keywords: trust, precision, trust measurement, precision measurement,
trust aggregation, precision aggregation, trust fusion, precision fusion,
trust propagation, precision propagation, trust management, precision
management, sensors, sensor precision, knowledge processing systems
1 Introduction
When gathering and processing data or knowledge in an environment, the quality, accuracy, certainty and precision of the data or knowledge cannot always be ensured. This becomes even more important when the source of the data or knowledge is not supervised by your agents or is not part of your environment. In our work we concentrate on knowledge processing in general, as it usually requires more complex calculations and processing of the available data and information than conventional data processing.
The rest of the paper is structured as follows: section 2 refers to related work on trust and other important terms in our research, as well as to knowledge-based systems. In section 3, we discuss the trust issue for data and agents and give working definitions based on established scientific research. We present our recent work in section 4. In sections 5 and 6 we focus on the question of propagating representative values through knowledge processing steps and on how to handle trust in knowledge processing systems in general. Section 7 sums up and concludes our work and gives an outlook on future work.
2 Related Work
2.1 Trust
There are numerous different definitions of trust available in the literature. In general, trust is considered to be belief in the reliability, correctness, or benevolence of the party being trusted. It is also possible to further divide trust into different types, as is done in [15], where three different types of trust are defined: trusting beliefs, trusting intentions, and trusting behaviors. Of these three, the first is the type of trust we refer to here: belief in the positive qualities of the other party. Trusting intention is the committed willingness to depend on the trusted party, and trusting behaviors are actions that demonstrate trust towards the other party.
Another related definition of trust is given in [13], where trust is defined as a belief about the reliability of, and as a decision to depend on, the trusted party. Thus, this definition combines the intentions and behaviors described in [15].
Furthermore, in [13], belief – or subjective opinion – is formally defined as an ordered tuple ω^A_x = (b, d, u, a), where b, d, u, a ∈ [0, 1] and b + d + u = 1. In the tuple, b represents belief in a party or object, d represents disbelief, u represents uncertainty, and a represents a base rate probability in the absence of evidence, used to calculate the expected value of ω^A_x. A is the agent that holds the belief, and x is the object of the belief, such as a data item or another agent.
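The tuple above can be made concrete with a small sketch. This is our own illustrative reading of the definition, not code from [13]; the class name and the expectation helper are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    """Subjective-logic opinion w^A_x = (b, d, u, a)."""
    b: float  # belief
    d: float  # disbelief
    u: float  # uncertainty
    a: float  # base rate in absence of evidence

    def __post_init__(self):
        # b, d, u, a must lie in [0, 1] and satisfy b + d + u = 1
        assert all(0.0 <= v <= 1.0 for v in (self.b, self.d, self.u, self.a))
        assert abs(self.b + self.d + self.u - 1.0) < 1e-9

    def expected_value(self) -> float:
        # Probability expectation of an opinion: E = b + a * u
        return self.b + self.a * self.u

print(round(Opinion(b=0.7, d=0.1, u=0.2, a=0.5).expected_value(), 3))  # 0.8
```

The base rate a only contributes in proportion to the uncertainty u, so a fully certain opinion (u = 0) ignores it.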
A third approach to trust is given in [7], where trust is discussed in relation to pieces of data or knowledge. The trustworthiness of a data item f or a knowledge item k is denoted as t(f) or t(k), which is the probability of f or k being correct. Furthermore, the trustworthiness of a data source s is defined as the average trustworthiness of the data s provides. In this definition, data items are elements provided by data sources that describe some entity or event. Thus, for example, measurements could be considered data items. Knowledge items, on the other hand, are created from the data items by some process.
Thus, this definition of trust is closely related to the hierarchical view of data and knowledge, as well as information and wisdom, often used in knowledge management [3]. However, for the sake of completeness, it should be noted that the hierarchical view from data to knowledge is only a small part of the overall field of knowledge management [2].
For further research in this paper, we will rely on the definitions of Jøsang
et al. [12, 13] and Dai et al. [7], especially for the propagation of trust values, as
well as the definitions in section 3.
2.2 Trust-based sensory data fusion in Wireless Sensor Networks
Sensor fusion is the combination of sensory data, or of data derived from sensory data, such that the output information is in some sense better (in quality or quantity, e.g. accuracy or robustness) than would be possible if these sources were used individually. In general, the motivation for sensor fusion comes from the following drawbacks that a single-sensor system suffers from [21]:
– Sensor Deprivation: The breakdown of the single sensor element causes a loss of perception of the observed object.
– Limited spatial coverage: A single sensor only covers a restricted region.
– Limited temporal coverage: Sensors need a particular delay time to perform and transmit a measurement, thus the maximum frequency of measurements is limited.
– Imprecision: Measurements from individual sensors are limited to the precision of the employed sensory element.
– Uncertainty: In contrast to imprecision, uncertainty depends on the observed object rather than the observing sensor. A single-sensor system cannot reduce uncertainty because of its limited view of the object. Uncertainty arises when features are missing, when the sensor cannot measure all relevant attributes of the percept, or when the observation is ambiguous.
Fusion processes are often categorized into three levels:
– Low-level/raw data fusion: combines several sources of raw data to produce new data that is expected to be more informative than the inputs.
– Intermediate-level/feature fusion: combines various features such as edges, corners, lines, textures, or positions into a feature map that may then be used for segmentation and detection.
– High-level/decision fusion: combines decisions from several experts. Methods of decision fusion include voting, fuzzy logic, and statistical methods.
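As a minimal illustration of high-level (decision) fusion, a majority vote over expert decisions might look like the following sketch; the decision labels and function name are made up for illustration:

```python
from collections import Counter

def fuse_decisions(decisions):
    """High-level/decision fusion by majority vote:
    the decision label reported most often wins."""
    return Counter(decisions).most_common(1)[0][0]

# Three experts, two of them agree:
print(fuse_decisions(["intrusion", "intrusion", "no_event"]))  # intrusion
```

Weighted variants (e.g. weighting each expert by its trust value) follow the same pattern and lead directly to the trust-based fusion methods discussed below.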
Nowadays, the Internet of Things (IoT) has gained much attention from researchers and practitioners; Wireless Sensor Networks (WSNs) are employed as the main technology of the IoT. WSNs encompass many sensor nodes, each of which performs a specific monitoring task. The obtained monitoring data are then transmitted to a control center for further analysis. However, in an open environment like a WSN, sensor nodes may easily be exposed to many kinds of attacks, such as node compromise, eavesdropping, physical disruption, etc., which cause unreliable data. Hence, ensuring the reliability of the data is necessary, and one approach is to detect abnormalities in the data with methods of trust evaluation for incoming data. Much research has pursued this approach,
such as [22–27]. The work in [22] introduces a trust evaluation model and a trust-based data fusion method, in which the trust value for a sensor node is estimated based on its behavior and its transmitted data. The trust model consists of three components: data trust, behavior trust, and historical trust. Data trust is calculated from real-time data, regional data, and historical data; behavior trust is estimated through statistical values of the sensors' abnormal behavior; and historical trust is updated and recorded according to the comprehensive trust. In [23], a technique is proposed that fuses multi-dimensional sensor data in context-specific ways, employing Subjective Logic based on the trust values of information sources. This research showed better results for convoy operations than a baseline counterpart. In [24], a trust rating method is introduced through a
reputation-based framework for sensor networks (RFSN) employing a watchdog
mechanism. RFSN utilizes a beta reputation system for sensor networks (BRSN), which employs Bayesian theory. Data fusion is then performed, and the impact of untrustworthy nodes can be reduced. A heuristic approach based on a trustworthy architecture for WSNs is proposed in [25]. Fan et al. proposed a
trust evaluation method based on energy monitoring to deal with the problem
of trust in WSNs [26]. A lightweight dynamic trust model synergizing a honey
bee mating algorithm is presented by Sahoo et al. [27]. The method aims at pre-
venting malicious nodes from becoming a cluster head. A lightweight trust model
is employed to make the clustering method more secure and energy efficient.
2.3 Precision, Accuracy & Certainty
When handling the quality of data, the trustworthiness of sources, the certainty of values, etc., the meaning of several important terms has to be distinguished. We used the term "certainty of data" to describe the "reliability, confidence, and/or steadiness of the provided data" from one source in our past research [10, 11]. With these terms, the ongoing work has to be considered more carefully, as discussions and research, e.g. by Streiner et al. [18], show (even though they come from the medical domain).
Other descriptions/definitions that can be taken into account are e.g. from
Usman [19]: ”Precision is the degree of obtaining a score on first turn repeats
on a second term.” and ”Accuracy is the degree of obtaining a score close to the
actual score.”
2.4 Provenance
When it comes to trusting data and trusting the sources of data, the term "Data Provenance" must be taken into account. It describes the origin and complete processing history of any kind of data. A good introduction and overview can be found in "Data provenance – the foundation of data quality" [5] and in "Data Provenance: Some Basic Issues" [4]: "We use the term data provenance to refer to the process of tracing and recording the origins of data and its movement between databases." and "It is an issue that is certainly broader than computer science, with legal and ethical aspects."
Several problems concerning data provenance are covered in ”Research Prob-
lems in Data Provenance”[20].
Trusting the services used and established in a particular information processing and knowledge management (IPKM) system is highly related to the question of data provenance (where does any data/information/knowledge come from?). In particular, such a system has to be aware of the data accumulated through complex communication between services. If there is communication between services inside the system, a security system can ensure the trustworthiness of the data. However, the trustworthiness of data from outside the system can never be fully guaranteed. Since many systems require external data, minimizing the risk of uncertainty is key. For example, weather data should come from external (and multiple) sensors to ensure the correctness of the values, and legislation information or data from, e.g., chemical databases will also come "from the outside". The trustworthiness of sources, or the provenance of data, differs from source to source (e.g. values from governmental institutions can usually be given a higher trust value than those from other third-party providers).
2.5 Trust in Knowledge Processing
To the best knowledge of the authors, there is no related work dealing with this topic directly – neither for processing trust and certainty, nor for the aggregation of (un)certainty. A good approach for measuring trust is given in "An Approach to Evaluate Data Trustworthiness Based on Data Provenance" [7]. Recent research on modeling uncertainty is given in [14], and the usage of uncertainty in complex event processing can be found in [6].
We developed an approach for trust and certainty (precision) calculation
and propagation in knowledge processing systems, which is briefly presented in
section 4.
3 Trust in Data and Agents
In general, we define trust as belief in the appropriate positive qualities of the party being trusted. In this work we require definitions of trust both for data and for agents. We use the term data for all pieces of information that can be expressed on a computer. This definition does not distinguish between the different types used in the hierarchical view of data, information, knowledge and wisdom [1, 3]. When needed, the term raw data is used to denote data gained from a source, and information to denote data created by processing some input. We use the term agent for all elements of a system that are capable of producing data. This part of our work is mostly based on [7], with some influence from [13]. For trust in data and trust in agents we use the following definitions.
Definition 1. Trust in a data item i, denoted as t(i), is the probability of i being correct.
This definition is a simplified version of the one used in [7]. It is simple to use for data items for which correctness can be defined as a binary value, such as a data item representing the current date. If the data item represents the actual current date, it is correct, and if it represents anything else it is incorrect. However, for many types of data the situation is more complex, and the correctness of a data item is not a simple categorical quality. Many types of data are continuous and have quality characteristics attached to them. These characteristics represent metadata about the data that describe how good the data is. Possible quality characteristics include information about data consistency, completeness, accuracy, precision, etc. For such data, we say that a data item is correct if it corresponds to its quality characteristics. For example, if the data consists of temperature measurements and the quality characteristics tell how close to the actual temperature the measurements are (e.g. ±0.1 °C), we say that a measurement is correct if its error is smaller than that.
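The correctness check against a quality characteristic, and an empirical estimate of t(i) derived from it, can be sketched as follows. This is our own illustrative reading of Definition 1, not the method of [7]; the function names and the tolerance value are assumptions:

```python
def is_correct(measured, actual, tolerance=0.1):
    """A data item is correct if its error stays within the stated
    quality characteristic (here: +/- tolerance, e.g. 0.1 degrees C)."""
    return abs(measured - actual) <= tolerance

def trust_in_data(measurements, actuals, tolerance=0.1):
    """Empirical estimate of t(i): the fraction of items that are correct."""
    hits = sum(is_correct(m, a, tolerance) for m, a in zip(measurements, actuals))
    return hits / len(measurements)

# Two of three temperature readings fall within +/- 0.1 of the true value:
print(round(trust_in_data([20.05, 20.3, 19.95], [20.0, 20.0, 20.0]), 3))  # 0.667
```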
It should be noted that t(i) is functionally similar to b, or belief, in the work of Jøsang et al. [13]. Similarly, using this definition, disbelief d in data is the probability of i being incorrect, while uncertainty u covers cases where it is not possible to say either. Uncertainty may occur, for example, in cases where the quality characteristics are such that we cannot clearly define what is correct or incorrect. For example, if the quality characteristics define that temperature measurements are normally distributed around the actual value with a standard deviation of 0.1 °C, we might need to assign some measurements a trust value of uncertain.
Definition 2. Trust in an agent a, denoted as t(a), is the average of the trustworthiness of the data items provided by agent a in a specific context.
Like Definition 1, this definition also follows [7]. The trust in an agent, whether a source of measurements or other input data, or an agent that aggregates, analyzes, or modifies the data in some other manner, is defined through the trustworthiness of the data it provides. We have modified this definition by adding the clause "in a specific context". This is meant to explicitly allow us to take into account only the data provided by an agent that are relevant for the task for which the trust is evaluated. For example, the trustworthiness of an agent may change with time. Thus, if we have a previously trusted agent that starts sending incorrect data, we can make a new trust assessment for the agent without taking into account all the data the agent has provided over time. Similarly, if an analysis agent is used in a new analysis process, trust in the agent's work in this process can be evaluated without taking into account the agent's work in other analyses. The different contexts can, after all, affect how much we can trust the agent.
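Definition 2 can be sketched as a context-filtered average. The representation of an agent's history as (trust value, context label) pairs is our own assumption for illustration:

```python
def trust_in_agent(provided_items, context=None):
    """t(a): the average trustworthiness of the data items an agent has
    provided, optionally restricted to a specific context (Definition 2).
    provided_items: iterable of (trust_value, context_label) pairs."""
    relevant = [t for t, c in provided_items if context is None or c == context]
    return sum(relevant) / len(relevant)

# The agent was reliable in one analysis process but not in another:
history = [(0.9, "process-A"), (0.8, "process-A"), (0.2, "process-B")]
print(round(trust_in_agent(history, context="process-A"), 2))  # 0.85
```

Restricting the average to one context keeps, e.g., an agent's recent bad data in "process-B" from dragging down its assessment for "process-A".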
Again, t(a) is functionally equivalent to b, or belief, in [13]. However, disbelief d and uncertainty u are harder to separate from each other. For data, we can say that an item is either correct, incorrect, or cannot be categorized either way. For an agent, however, trust is the average of the trust in its data, and we cannot simply say that disbelief is the complement of that. Thus, for agents we will, for now, settle for working with belief b only.
4 Introducing Trust & Certainty (Precision) into
Knowledge Processing Systems
4.1 Recent research
As mentioned earlier, we developed an approach in our recent research [10, 11]
for processing gathered values through multi-step knowledge processing systems.
We considered the following subjects:
– any Source (S), which provides information in the environment; there can be multiple sources in an environment.
– any Data (D)¹, which is provided by one Source; in our model, every source usually provides one or more data (elements).
– any Knowledge Processing System (KPS), which processes data from one or more sources; each KPS itself produces new data as output; in our model, every KPS produces only one output.
The source provides data in an abstract manner: it is not important which type of data it is – in our approach it can be a whole database as well as a single text file or a single data value. A knowledge processing system is any system using the data provided by the existing sources, processing it, and providing new data as output. To have computable and usable values in our approach, these values must be computed from the existing input data. We considered the following main values:
– Trust value (T) of source (S), which defines how trustable the source is. The system (sources / data / knowledge processing systems) has to be seen as a whole environment, hence the trust level for one source should always be the same.
– Certainty value (C) of data (D), which describes how reliable, confident or steady the provided data is. In the literature and research work, many definitions of believability and certainty in knowledge-based systems exist.²
– Importance value (I) of one input data item (D), decided by the current knowledge processing system (KPS) for the current step of computation.
For the continuation of the values of trust and certainty, the arithmetic mean was chosen:

    T_new | C_new = (1/n) · Σ_{i=1}^{n} (T_i | C_i × I_i)    (1)
¹ In our work we combine the data and information layers, referring to the Data-Information-Knowledge-Wisdom (DIKW) architecture in [1] from Russell Lincoln Ackoff; i.e., data takes the role of information and belongs to the information layer.
² Note that the term "certainty" is very vague and can be substituted with terms like "precision", "accuracy", and other related terms. In this context, the usage of "precision" is more meaningful due to the reference to sensor networks and sensor precision.
Formula 1: Calculating T_new|C_new over all T_1..n|C_1..n related to I_1..n.
Our approach was initialized with the following constraints on the values:
– Trust T of source S, for each S, has to be greater than 0 and less than or equal to 1, where the value of T for each S has to be the same (if used multiple times) – a higher value represents higher trust:

    0 < T ≤ 1    (2)

– Certainty C of data D, for each D, has to be greater than 0 and less than or equal to 1, where the value of C for each D has to be the same (if used multiple times) – a higher value represents higher certainty:

    0 < C ≤ 1    (3)

– Importance I of data D, decided by the KPS, is staggered:
  • 0.5 for values which are not very important
  • 1.0 for regular values, where no special impact on importance is given
  • 1.5 for very important values, concerning the current data processing

    I = 0.5 | 1 | 1.5    (4)
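Formula 1 together with constraints (2)–(4) can be sketched in a few lines; the transcription is direct, but the function name and the example values are our own:

```python
def propagate(values, importances):
    """Formula 1: T_new | C_new as the arithmetic mean over the n inputs,
    each trust/certainty value weighted by its importance."""
    assert all(0.0 < v <= 1.0 for v in values)               # constraints (2) / (3)
    assert all(i in (0.5, 1.0, 1.5) for i in importances)    # constraint (4)
    n = len(values)
    return sum(v * i for v, i in zip(values, importances)) / n

# Two regular inputs and one very important input:
print(round(propagate([0.8, 0.6, 0.9], [1.0, 1.0, 1.5]), 3))  # 0.917
```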
We applied the approach to several fictitious and real-world scenarios, which showed promising results. Nevertheless, there are several open questions that have to be answered, listed in the next subsection.
4.2 Evaluation of this Approach
We addressed the question of how to determine the trust and certainty values of a KPS output when different trust and certainty values are given for the input data, and we applied the approach to several scenarios. Expert feedback assessed the results as realistic, and the computed values are promising. Further steps, such as analyzing runtime complexity, a proof of non-convergence, an evaluation of the usage of the approach, and experiments testing the approach on several more realistic multi-step scenarios together with their evaluation, will be done in further work. Additionally, we will evaluate more complex aggregation functions, thereby incorporating statistical distributions of trust and certainty values. Moreover, we will consider recursion in our approach and deal with questions like "Is the staggering of Importance (I) needed?" and "Are T and C (in)dependent?".
A philosophical element has to be discussed too: "Are we allowed to alter a trust value according to its importance?" Interpreting and calling it an influence would probably be less controversial; however, that does not eliminate the underlying question and the much-needed discussion. To the best of our knowledge, no other approaches have been developed for the processing of trust and certainty, or for their aggregation, so we expect this work to inform further research in the area.
Our aim is to develop a complete model for calculating representative values in knowledge processing systems. This approach can then be applied to other processing systems as well. Such a model, applicable to a variety of applications, would be highly useful in practice.
5 Propagation of Trust and Precision
In an application scenario in which several processing steps are passed, the question arises of how the initial trust and precision values from the inputs can be propagated through these processing steps, and how multiple input values are considered.
In the simplest model, where all input values must be trusted, we face the problem that with (too) many inputs the result converges to "not trustable" very fast. Suppose, for example, that trust is boolean and can only be 0 (not trustable) or 1 (fully trustable). If all input values must be trustable to obtain a trustable output, then a single untrustable input makes the whole outcome untrustable. An example of this scenario is a surgery, where the doctors have to rely on all given information, such as results from laboratories, information from the heart rate monitor, and other sensors. If the doctors decide not to conduct the surgery whenever one of the input data items is possibly not trustable, then the chance is very high that many surgeries will no longer take place.
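The convergence problem can be shown with a two-line calculation; the 95% figure and the number of inputs are made-up illustrative values:

```python
# Boolean model: the output is trustable only if *every* input is.
# Even with individually reliable inputs, the probability of a trustable
# outcome shrinks exponentially with the number of inputs.
p_single = 0.95    # assumed probability that one input is trustable
n_inputs = 20
p_all = p_single ** n_inputs
print(round(p_all, 3))  # 0.358
```

With 20 inputs that are each trustable 95% of the time, the all-or-nothing model declares roughly two out of three outcomes untrustable.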
This shows the need for a propagation model that does not rely only on a boolean model of trust values and their aggregation/propagation. Possible propagation models have been developed in our recent research, as published in [10, 11] and outlined in section 4.
In this domain, the philosophical questions "Is it allowed to alter trust over time?" and "Can aggregated low trust result in higher/lower trust?" must be answered, which is a rather long-lasting process in which many research domains can be considered.
The same question arises with regard to the propagation of precision values from the input data. Precision values need not be propagated in exactly the same way as trust values, but their propagation does need to be considered. One possible model is to handle and propagate trust and precision values in the same way through the processing steps of your system, as we did in our recent research.
Mathematical approaches to the propagation of trust and precision values can be found, e.g., in [28]. The usage of Markov chains would be one possible way of calculation and propagation.
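A Markov-chain style propagation could look like the following toy sketch of our own (not a method from [28]): trust is a distribution over two states, and each processing step multiplies it by a transition matrix. The matrix entries are purely hypothetical:

```python
def markov_step(dist, P):
    """One propagation step: next distribution = dist * P."""
    return [sum(dist[i] * P[i][j] for i in range(len(dist)))
            for j in range(len(P[0]))]

# Hypothetical two-state chain over {trustable, not trustable}:
# a trusted value stays trusted with probability 0.9 per processing
# step, and "not trustable" is absorbing.
P = [[0.9, 0.1],
     [0.0, 1.0]]
dist = [1.0, 0.0]           # initially fully trusted
for _ in range(3):          # three processing steps
    dist = markov_step(dist, P)
print(round(dist[0], 3))    # 0.729
```

Unlike the boolean model, trust here decays gradually (0.9 per step) instead of collapsing to zero at the first untrusted input.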
6 Debate on Trust in Knowledge Processing Systems
Knowledge processing means some kind of reasoning activity on knowledge stored in a knowledge base. Trust definitely touches several aspects of modern knowledge processing, and this topic has already been investigated in the literature.
The results of knowledge processing benefit from the introduction of a trust value in that the resulting new knowledge/insights gain credibility, and the manual evaluation needed to introduce the new insights in a company can be reduced. Hajidimitriou et al. [9] discuss the importance of trust for this aspect (even though in a slightly different context).
Especially for knowledge processing, a formal representation of trust-related values has to be defined so that it can be used in combination with existing knowledge representation forms (e.g., simple rules or ontologies). Schenk et al. [17] define such values as meta knowledge. They concentrate especially on the combination of meta knowledge and OWL (Web Ontology Language).
Most existing reasoning algorithms are currently unaware of a trust concept. Maia and Alcântara [16] introduce a reasoning process between agents that are aware of trust. Dividino et al. [8] also discuss an approach for reasoning over meta knowledge in RDF (Resource Description Framework).
Nevertheless, for the practical side of knowledge processing, there are additional topics that have to be handled. The introduction of trust must not degrade the performance of the inference process, or of such a system in general. Algorithms will have to be adapted to support the additional processing of meta knowledge.
Moreover, failure handling in knowledge processing systems has to be improved. Trust is relevant only in systems using distributed knowledge sources, and communication over a network always poses a risk of failure. The calculation and transport of trust values therefore have to be secured.
7 Conclusion
We showed the problems of creating and propagating trust and precision values, and introduced a recently developed approach which tries to address some of these problems. We surveyed related work and examined important terms like "trust", "provenance", "precision", "accuracy", and "certainty". We also created working definitions for trust in data and agents for further research.
Thinking about scenarios of trust and precision aggregation, fusion, and propagation made it obvious that a simple boolean model of 0 (not trustable) and 1 (fully trustable) is not applicable in most cases. We made clear that another calculation model for propagating these values is needed, especially in multi-step processing applications. Our approach from recent research is a first step towards solving and answering these questions.
The meaning of trust in knowledge processing systems in general was also considered – this will come into play especially in our further research on trust propagation in such systems.
Acknowledgements The research leading to these results was partly funded by
the federal county of Upper Austria.
References
1. Russell L. Ackoff. From Data to Wisdom. In Journal of Applied Systems Analysis, 16:3–9, 1989.
2. Maryam Alavi and Dorothy E. Leidner. Review: Knowledge management and knowl-
edge management systems: Conceptual foundations and research issues. MIS quar-
terly (2001): 107-136.
3. Gene Bellinger, Durval Castro, and Anthony Mills. Data, information, knowledge,
and wisdom. (2004).
4. Peter Buneman, Sanjeev Khanna, and Wang-Chiew Tan. Data Provenance: Some
Basic Issues. In FST TCS 2000: Foundations of Software Technology and Theoretical
Computer Science, pages 87–93. Springer LNCS Berlin Heidelberg, 2000.
5. Peter Buneman and Susan B. Davidson. Data provenance – the foundation of data quality. In http://www.sei.cmu.edu/measurement/research/upload/Davidson.pdf, 2010.
6. Gianpaolo Cugola, Alessandro Margara, Matteo Matteucci, and Giordano Tambur-
relli. Introducing uncertainty in complex event processing: model, implementation,
and validation. In Journal Computing, Volume 97, pages 103–144, 2015.
7. Chenyun Dai, Dan Lin, Elisa Bertino, and Murat Kantarcioglu. An Approach to
Evaluate Data Trustworthiness Based on Data Provenance. In Proceedings of the 5th
VLDB Workshop on Secure Data Management (SDM ’08), pages 82–98. Springer
Berlin Heidelberg, 2008.
8. Renata Dividino, Sergej Sizov, Steffen Staab, and Bernhard Schueler. Querying for
provenance, trust, uncertainty and other meta knowledge in rdf. Web Semantics:
Science, Services and Agents on the World Wide Web, 7(3):204–219, 2009.
9. Yannis A Hajidimitriou, Nikolaos S Sklavounos, Konstantinos P Rotsios, et al. The
impact of trust on knowledge transfer in international business systems. Scientific
Bulletin–Economic Sciences, 11(2):39–49, 2012.
10. Markus Jäger, Trong Nhan Phan, Christian Huber, and Josef Küng. Incorporating
Trust, Certainty and Importance of Information into Knowledge Processing Systems
- An Approach. In Future Data and Security Engineering: Third International Con-
ference, FDSE 2016, Can Tho City, Vietnam, November 23-25, Proceedings, pages
3–19, Springer International Publishing, 2016.
11. Markus Jäger and Josef Küng. Introducing the Factor Importance to Trust of Sources and Certainty of Data in Knowledge Processing Systems - A new Approach for Incorporation and Processing. In Proceedings of the 50th Hawaii International
Conference on System Sciences, pages 4298–4307, IEEE, 2017.
12. Audun Jøsang and S.J. Knapskog. A Metric for Trusted Systems (full paper).
In Proceedings of the 21st National Information Systems Security Conference, NSA,
1998.
13. Audun Jøsang, Stephen Marsh, and Simon Pope. Exploring Different Types of
Trust Propagation. In Trust Management: 4th International Conference, iTrust 2006,
Pisa, Italy, May 16-19, 2006. Proceedings, pages 179–192. Springer Berlin Heidelberg,
2006.
14. Alexander Karlsson, Björn Hammarfelt, H. Joe Steinhauer, Göran Falkman, Nasrine Olson, Gustaf Nelhans, and Jan Nolin. Modeling uncertainty in bibliometrics and information retrieval: an information fusion approach. In Journal Scientometrics, Volume 102, pages 225–2274, 2015.
15. D. Harrison McKnight. Trust in Information Technology. In The Blackwell Ency-
clopedia of Management: Operations management, Blackwell Pub. 2005.
16. Gabriel Maia and João Alcântara. Reasoning about trust and belief in possibilistic answer set programming. In Intelligent Systems (BRACIS), 2016 5th Brazilian Conference on, pages 217–222. IEEE, 2016.
17. Simon Schenk, Renata Dividino, and Steffen Staab. Reasoning with provenance,
trust and all that other meta knowledge in owl. In Proceedings of the First Interna-
tional Conference on Semantic Web in Provenance Management-Volume 526, pages
11–16. CEUR-WS. org, 2009.
18. David L. Streiner and Geoffrey R. Norman. ”Precision” and ”Accuracy”: Two
Terms That Are Neither. In Journal of Clinical Epidemiology, Volume 59, pages
327–330, ELSEVIER, 2006.
19. Muhammad Usman. Design and Implementation of an iPad Web Application for
Indoor-Outdoor Navigation and Tracking Locations. Master’s thesis, Aalto Univer-
sity, 2012.
20. Wang-Chiew Tan. Research problems in data provenance. In IEEE Data Engi-
neering Bulletin, Volume 27, pages 45–52, 2004.
21. Wilfried Elmenreich. An Introduction to Sensor Fusion. In Research Report 47/2001, Institut für Technische Informatik, Vienna University of Technology, Austria, 2001.
22. Chen Z., Tian L. and Lin C. Trust Model of Wireless Sensor Networks and Its Application in Data Fusion. In Sensors, 17(4), p. 703, 2017.
23. Cho J.H., Chan K. and Mikulski D. Trust-based information and decision fusion
for military convoy operations. In Military Communications Conference (MILCOM),
pp. 1387-1392, 2014.
24. Ganeriwal S., Balzano L.K. and Srivastava M.B. Reputation-based framework for
high integrity sensor networks. In ACM Transactions on Sensor Networks (TOSN).
4(3), pp. 1-37, 2008.
25. Dhulipala V.R.S., Karthik N. and Chandrasekaran R. A Novel Heuristic Approach
Based Trust Worthy Architecture for Wireless Sensor Networks. In Wireless personal
communications. pp. 1-17, 2013.
26. Fan C.Q., Wang S.G., Sun Q.B., Wang H.M., Zhang G.W. and Yang F.C. A Trust Evaluation Method of Sensors Based on Energy Monitoring. In Acta Electronica Sinica, 41(4), pp. 646-651, 2013.
27. Sahoo S.S., Sardar A.R., Singh M., Ray S. and Sarkar S.K. A Bio-Inspired and
Trust Based Approach for Clustering in WSN. In Natural Computing: an interna-
tional journal, 15(3), 423-434, 2016.
28. Dirk Draheim. Semantics of the Probabilistic Typed Lambda Calculus: Markov Chain Semantics, Termination Behavior, and Denotational Semantics. Springer Berlin Heidelberg, ISBN 978-3-64-255198-7, 2017.