A Classification of Trust Systems
Sebastian Ries, Jussi Kangasharju, and Max Mühlhäuser
Department of Computer Science
Darmstadt University of Technology
Hochschulstrasse 10
64289 Darmstadt, Germany
{ries, jussi, max}@tk.informatik.tu-darmstadt.de
Abstract. Trust is a promising research topic for social networks, since it is a basic component of our real-world social life. Yet, the transfer of the multi-faceted concept of trust to virtual social networks is an open challenge. In this paper we provide a survey and classification of established and upcoming trust systems, focusing on trust models. We introduce a set of criteria as the basis of our analysis and show strengths and shortcomings of the different approaches.
1 Introduction
Trust is a well-known concept in everyday life that simplifies many complex processes. Some processes are enabled only by trust, since they would not be operable otherwise. On the one hand, trust in our social environment allows us to delegate tasks and decisions to an appropriate person. On the other hand, trust facilitates the efficient rating of information presented by a trusted party. Computer scientists from many areas, e.g., security, ubiquitous computing, the semantic web, and electronic commerce, are still working on the transfer of this concept to their domains. In Sect. 2 we introduce the main properties of social trust, in Sect. 3 we provide our own set of criteria and the analysis of a selected set of trust systems from different areas, and in Sect. 4 we give a short summary and derive ideas for our future work.
2 Properties of Trust
There is much work on trust by sociologists, social psychologists, economists, and, in recent years, also by computer scientists. In general, trust can be said to be based on personal experience with the interaction partner in the context of concern, on his reputation, or on recommendations. Furthermore, trust is connected to the presence of a notion of uncertainty, and trust depends on the expected risk associated with an interaction [1,2,3,4,5,16].
The author's work was supported by the Deutsche Forschungsgemeinschaft (DFG) as part of the PhD program "Enabling Technologies for Electronic Commerce" at Darmstadt University of Technology.
The following properties are regularly assigned to trust and are relevant when transferring the concept to computer science. Trust is subjective and therefore asymmetric. It is context dependent, and it is dynamic, meaning it can increase with positive experience and decrease with negative experience or over time without any experience. This also makes clear that trust is non-monotonic and that there are several levels of trust, including distrust. A sensitive aspect is the transitivity of trust. Assuming Alice trusts Bob and Bob trusts Charlie, what can be said about Alice's trust in Charlie? In [2], Marsh points out that trust is not transitive; at least, it is not transitive over arbitrarily long chains, since this would lead to conflicts regarding distrust. Yet recommendation and reputation are important factors for trust establishment.
McKnight and Chervany state in [1] that there are three principal categories of trust: personal/interpersonal trust, impersonal/structural trust, and dispositional trust. Interpersonal trust describes trust between people or groups. It is closely related to the experiences people have had with each other. Structural trust is not bound to a person but arises from the social or organizational situation. Dispositional trust can be explained as a person's general attitude towards the world. As shown in [6], much work has been done on transferring interpersonal trust to computer science, whereas there is little work supporting the other categories.
Although trust is a well-known concept, and despite the set of properties on which most researchers agree, it is hard to define trust. A number of definitions have been provided by several scientific areas with different focuses and goals (cf. [2, 6]). A definition which is shared, or at least adopted, by some researchers [3, 7, 8, 9] is the one provided by the sociologist Diego Gambetta:

"... trust (or, symmetrically, distrust) is a particular level of the subjective probability with which an agent will perform a particular action, both before [we] can monitor such action (or independently of his capacity ever to be able to monitor it) and in a context in which it affects [our] own action." [16]
3 Classification Criteria and Analysis
Having introduced the general aspects of trust, we now give a survey of how the concept of trust is realized in different areas of computer science. We derive our coarse-grained classification from the work provided in [2,4,5,10,11]. As main categories we see trust modeling, trust management, and decision making [12]. In this classification, trust modeling deals with the representational and computational aspects of trust values. Trust management focuses on the collection of evidence and risk evaluation. Although decision making is actually a part of trust management, we treat it separately, since it is such an important aspect. Due to the limitations of this paper and our own research interests, for the more fine-grained classification we focus only on trust modeling, especially on the aspects of domain, dimension, and semantics of trust values.
Trust values are usually expressed as numbers or labels; thus their domain can be binary, discrete, or continuous. A binary representation of trust allows one to express only the two states "trusted" and "untrusted". This is close to certificate- or credential-based access control approaches, where access is granted if and only if the user presents the necessary credentials. But since most researchers agree that trust has several levels, binary models are considered insufficient. Trust can also be represented using more than two discrete values, either by labels or by a set of natural numbers. The advantage of this approach is that trust values can be easily assigned and understood by human users [3, 13]. Continuous trust values are supported by well-known mathematical theories, depending on the semantics of the trust values.
The dimension of trust values can be either one- or multi-dimensional. In one-dimensional approaches, this value usually describes the degree of trust an agent assigns to another one, possibly bound to a specific context. Multi-dimensional approaches make it possible to introduce a notion of uncertainty of the trust value.
The semantics of trust values can be one of the following: rating, ranking, probability, belief, or fuzzy value. As ratings we interpret values which are directly linked with a trust-related semantics, e.g., on a scale of natural numbers in the interval [1,4], 1 can be linked to "very untrusted", ..., and 4 to "very trusted". In contrast, the trust values computed in ranking-based models, e.g., [14], are not directly associated with a meaningful semantics, but only a relative one, i.e., a higher value means higher trustworthiness. Therefore, it is only possible to assign an absolute meaning to a value if it can be compared to a large enough set of trust values of other users. Furthermore, trust can be modeled as a probability. In this case, the trust value expresses the probability that an agent will behave as expected. The details of belief and fuzzy semantics are explained together with "Subjective Logic" and ReGreT (see below). A summary of our classification is presented in Table 1.
Table 1. Classification of trust models

| Model | Domain | Dimension | Semantics | Trust management | Decision making |
|---|---|---|---|---|---|
| Marsh | cont. in [-1,1) | 1 (situational trust) | rating | – (but risk evaluation) | threshold-based |
| TidalTrust | disc. in [1,10] | 1 (rating) | rating | global policy (no risk evaluation) | – |
| Abdul-Rahman & Hailes | disc. labels | 1 (trust value) | rating | – | – |
| SECURE Project (exemplary) | disc. in [0,∞] | 2 (evid.-based) | prob. | local policies (incl. risk evaluation) | threshold-based |
| | cont. in [0,1] | 3 (bel., disbel., uncert.) | belief | | |
| Subjective Logic | disc. in [0,∞] | 2 (evid.-based) | prob. | not directly part of SL | not directly part of SL |
| | cont. in [0,1] | 3 (b, d, u) | belief | | |
| ReGreT | disc. fuzzy values | 2 (trust, confidence) | fuzzy values | local policies (fuzzy rules) | – |
3.1 Model Proposed by Marsh
The work of Marsh [2] is said to be the seminal work on trust in computer science.
Marsh concentrates on modeling trust between only two agents. He introduces
knowledge, utility, importance, risk, and perceived competence as important
aspects related to trust. The trust model should be able to answer the questions:
With whom should an agent cooperate, when, and to which extend? The trust
modelusesrealnumbersin[1; 1) as trust values. He defined three types of
trust for his model. Dispositional trust Txis trust of an agent xindependent
from the possible cooperation partner and the situation. The general trust Tx(y)
describes the trust of xin y, but is not situation specific. At last, there is the
situational trust Tx(y, a), which describes the trust of agent xin agent yin
situation a. The situational trust is computed by the following linear equation:
$$T_x(y, a) = U_x(a) \times I_x(a) \times \widehat{T}_x(y), \qquad (1)$$

where U_x(a) represents the utility and I_x(a) the importance which x assigns to the trust decision in situation a. Furthermore, \widehat{T}_x(y) represents the estimated general trust of x in y.
The trust management provided by Marsh does not treat the collection of recommendations provided by other agents; he only models direct trust between two agents. The aspect of risk is dealt with explicitly, based on the costs and benefits of the considered engagement.

The decision making is threshold-based. Among other parameters, the cooperation threshold depends on the perceived risk and competence of the possible interaction partner. If the situational trust is above the calculated cooperation threshold, cooperation will take place; otherwise it will not. Furthermore, the decision making can be extended by the concept of "reciprocity", i.e., if one agent does another a favor, the favor is expected to be returned at some time.
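The following minimal Python sketch illustrates Eq. (1) and the threshold-based decision. The concrete numbers and the scalar threshold are our own illustrative assumptions; in Marsh's model the cooperation threshold is itself derived from perceived risk and competence.

```python
def situational_trust(utility: float, importance: float, general_trust: float) -> float:
    """Situational trust T_x(y, a) = U_x(a) * I_x(a) * estimated general trust, cf. Eq. (1)."""
    return utility * importance * general_trust


def cooperate(trust_value: float, cooperation_threshold: float) -> bool:
    """Threshold-based decision: cooperate iff situational trust exceeds the threshold.

    In Marsh's model the threshold depends on perceived risk and competence;
    here it is passed in as a plain number for illustration.
    """
    return trust_value > cooperation_threshold


# Illustrative values (not from the paper): agent x values an interaction
# with y at utility 0.8 and importance 0.5, and generally trusts y at 0.6.
t = situational_trust(utility=0.8, importance=0.5, general_trust=0.6)  # 0.24
print(cooperate(t, cooperation_threshold=0.2))  # True: cooperate
```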
3.2 TidalTrust
In [13] Golbeck provides a trust model which is based on 10 discrete trust values in the interval [1,10]. Golbeck claims that humans are better at rating on a discrete scale than on a continuous one, e.g., the real numbers in [0,1], and that 10 discrete trust values should be enough to approximate continuous trust values. The trust model is evaluated in a social network called FilmTrust [15] with about 400 users. In this network the users rate movies. Furthermore, one can rate friends in the sense of "[...] if the person were to have rented a movie to watch, how likely it is that you would want to see that film" [13].

Recursive trust or rating propagation allows the ratings of movies to be inferred from the ratings provided by friends. For a source s in a set of nodes S, the rating r_sm inferred by s for the movie m is defined as
$$r_{sm} = \frac{\sum_{i \in S} t_{si} \cdot r_{im}}{\sum_{i \in S} t_{si}}, \qquad (2)$$
where the intermediate nodes are denoted by i, t_si describes the trust of s in i, and r_im is the rating of movie m assigned by i. To prevent arbitrarily long recommendation chains, the maximal chain length or recursion depth can be limited. Based on the assumption that the opinions of the most trusted friends are the most similar to the opinion of the source, it is also possible to restrict the set of considered ratings to those provided by the most trusted friends.
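A minimal Python sketch of the propagation rule in Eq. (2), where the inferred rating is the trust-weighted average of the friends' ratings. The function name and the flat dictionary inputs are our own simplification; the recursive search with depth and trust limits described above is omitted.

```python
def infer_rating(trust: dict[str, float], ratings: dict[str, float]) -> float:
    """Trust-weighted average of neighbor ratings, cf. Eq. (2).

    trust:   trust t_si of the source s in each neighbor i
    ratings: rating r_im each neighbor i assigned to the movie m
    """
    neighbors = [i for i in ratings if i in trust]
    weighted_sum = sum(trust[i] * ratings[i] for i in neighbors)
    total_weight = sum(trust[i] for i in neighbors)
    return weighted_sum / total_weight


# Illustrative example: the two highly trusted friends dominate the result.
trust = {"alice": 9.0, "bob": 8.0, "carol": 2.0}
ratings = {"alice": 8.0, "bob": 7.0, "carol": 1.0}
print(round(infer_rating(trust, ratings), 2))  # 6.84
```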
Although the recommendation propagation is simple, the evaluation in [13] shows that it produces relatively high accuracy, i.e., the ratings based on recommendations are close to the real ratings of the user. Since this approach does not deal with uncertainty, the calculated trust values cannot benefit from the existence of multiple paths with similar ratings: the trust value is calculated as a weighted sum. For the same reason, the path length does not influence the trust value; the values for trust in other agents on the path are only used for multiplication and division in each step. Since each node aggregates its collected ratings and passes only a single value to its ancestor in the recursion, the source cannot evaluate which nodes provided their ratings. The approach does not deal with any form of risk or decision making.
3.3 Model Proposed by Abdul-Rahman and Hailes
The trust model presented by Abdul-Rahman and Hailes [7] was developed for use in virtual communities, with respect to electronic commerce and artificial autonomous agents. It deals with a human notion of trust as it is common in real-world societies. The formal definition of trust is based on Gambetta [16].

The model deals with direct trust and recommender trust. Direct trust is the trust of an agent in another one based on direct experience, whereas recommender trust is the trust of an agent in the ability of another agent to provide good recommendations. The trust values are represented by discrete labeled trust levels, namely "Very Trustworthy", "Trustworthy", "Untrustworthy", and "Very Untrustworthy" for direct trust, and "Very good", "Good", "Bad", and "Very bad" for recommender trust.
A main aspect of this trust model is to overcome the problem that different agents may use the same label with a different subjective semantics. For example, agent a may label an agent c as "Trustworthy" based on personal experience, while a knows that agent b labels the same agent c as "Very Trustworthy". The difference between these two labels can be computed as a "semantic distance", which can be used to adjust further recommendations of b.
Furthermore, the model deals with uncertainty. Uncertainty is introduced if an agent is not able to determine the direct trust in another agent uniquely, i.e., if an agent has, e.g., as many "good" as "very good" experiences with another agent. However, it seems unclear how the further trust computation process benefits from this introduction of uncertainty. The combination of recommendations is done as a weighted summation; the weights depend on the recommender trust and are assigned in an ad-hoc manner.
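Since [7] does not spell the adjustment out in closed form here, the following Python sketch only illustrates the general idea under our own assumptions: labels are mapped to indices, the offset between one's own label and the recommender's label is recorded, and the recommender's future recommendations are shifted by that offset.

```python
LEVELS = ["Very Untrustworthy", "Untrustworthy", "Trustworthy", "Very Trustworthy"]


def semantic_distance(own_label: str, recommended_label: str) -> int:
    """Offset between our own experience-based label and the recommender's label."""
    return LEVELS.index(own_label) - LEVELS.index(recommended_label)


def adjust(recommended_label: str, distance: int) -> str:
    """Shift a future recommendation by the recorded semantic distance,
    clamped to the available trust levels."""
    idx = LEVELS.index(recommended_label) + distance
    return LEVELS[max(0, min(len(LEVELS) - 1, idx))]


# a rates c "Trustworthy" while b calls the same agent c "Very Trustworthy":
# b appears one level more generous, so b's future recommendations are
# shifted down by one level.
d = semantic_distance("Trustworthy", "Very Trustworthy")  # -1
print(adjust("Very Trustworthy", d))  # "Trustworthy"
```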
Although the model drops recommendations of unknown agents for the calculation of the recommended trust value, those agents become known by providing recommendations, and their future recommendations will be used as part of the calculation.
It is important to mention that the direct trust values are only used to calculate the semantic distance to other agents; they are not used as evidence which could be combined with the recommendations.

Trust management aspects are not considered. The collection of evidence is only stated for recommendations of agents which have direct experience with the target agent; it is not explicitly described how to incorporate recommendations of recommendations. Furthermore, the system does not deal with risk. Decision making seems to be threshold-based, but is not explicitly treated.
3.4 SECURE Project Trust Model
The trust model and trust management in the SECURE project [5, 17] aim to transfer a human notion of trust to ubiquitous computing.

A main aspect of the trust model is to distinguish between situations in which a principal b is "unknown" to a principal a, and situations in which a principal b is "untrusted" or "distrusted". The principal b is unknown to a if a cannot collect any information about b, whereas b is "untrusted" if a has information, based on direct interaction or recommendations, stating that b is an "untrustworthy" principal.
This leads to the definition of two orderings on a set of trust values T, denoted ⪯ and ⊑. The first ordering (T, ⪯) is a complete lattice. For X, Y ∈ T, the relation X ⪯ Y can be interpreted as "Y is more trustworthy than X". The second ordering (T, ⊑) is a complete partial order with a bottom element. The relation X ⊑ Y can be interpreted as "the trust value Y is based on more information than X".
The set of trust values can be chosen from different domains as long as the orderings have the properties described above. It is possible to use intervals over the real numbers in [0,1] [17]. This allows an interval [d_0, d_1] to carry the semantics of belief theory by defining d_0 as belief and 1 - d_1 as disbelief; uncertainty can then be defined as d_1 - d_0. Another possibility is to define the trust values as pairs of non-negative integers (m, n), where m represents the number of positive outcomes of an interaction and n the number of negative ones. These approaches seem similar to the trust model provided by Jøsang, but a mapping between the two representations is not provided. It is also possible to define other trust values, e.g., discrete labels.
The trust propagation is based on policies. This allows users to explicitly express whose recommendations are considered in a trust decision. Let P be the set of principals; the policy of a principal a ∈ P is π_a. The local policy allows trust values to be assigned to other agents directly, the assignment to be delegated to another agent, or a combination of both. Since it is possible to delegate the calculation of trust values, the policies can be mutually recursive. The collection of all local policies can be seen as a function Π = λp ∈ P. π_p, and the global trust function m can be calculated as the least fixpoint of Π.
The trust management also deals with the evaluation of risk. Risk is modeled
based on general cost probability density functions, which can be parameterized
by the estimated trustworthiness of the possible interaction partner. The evaluation of risk can be based on different risk policies, which, e.g., describe whether the risk is independent of the costs associated with an interaction or increases with increasing costs.
The decision making is threshold-based. For the application in an electronic purse [5], two thresholds are defined by the parameters x, y (x ≤ y). If the situation-specific risk value (parameterized by the trust value corresponding to the interaction partner) is below x, the interaction will be performed (money will be paid); if it is above y, the interaction will be declined. In case the risk value is between x and y, the decision is passed to the user.
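A minimal Python sketch of this three-way decision; the risk value is assumed to be already computed, and all names are our own.

```python
from enum import Enum


class Decision(Enum):
    PERFORM = "perform interaction"        # e.g., pay the requested amount
    ASK_USER = "pass decision to the user"
    DECLINE = "decline interaction"


def decide(risk: float, x: float, y: float) -> Decision:
    """Two-threshold decision of the SECURE e-purse example (x <= y):
    below x the interaction is performed, above y it is declined,
    and in between the user decides."""
    assert x <= y
    if risk < x:
        return Decision.PERFORM
    if risk > y:
        return Decision.DECLINE
    return Decision.ASK_USER


print(decide(risk=0.15, x=0.2, y=0.6))  # Decision.PERFORM
print(decide(risk=0.40, x=0.2, y=0.6))  # Decision.ASK_USER
```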
3.5 Subjective Logic
The trust model presented by Jøsang [10], named "subjective logic", combines elements of Bayesian probability theory with belief theory. The Bayesian approach is based on the beta probability density function (pdf), which allows the calculation of posterior probability estimates of binary events based on a priori collected evidence. For simplification, we do not explain the concept of atomicity, which Jøsang introduces to apply his model to non-binary events as well.
The beta probability density function f of a probability variable p can be described using the two parameters α, β as:

$$f(p \mid \alpha, \beta) = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\,\Gamma(\beta)}\, p^{\alpha - 1} (1 - p)^{\beta - 1}, \quad \text{where } 0 \le p \le 1,\ \alpha > 0,\ \beta > 0. \qquad (3)$$
By defining α = r + 1 and β = s + 1, it is possible to relate the pdf directly to the a priori collected evidence, where r and s represent the number of positive and negative pieces of evidence, respectively. In this model, trust is represented by opinions, which can be used to express the subjective probability that an agent will behave as expected in the next encounter. It is possible to express opinions about other agents and about the truth of arbitrary propositions. The advantage of this model is that opinions can easily be derived from the collected evidence.
Belief theory is an approach to dealing with uncertainty that attempts to model a human notion of belief. In belief theory as introduced in [10], an opinion can be expressed as a triple (b, d, u), where b represents the belief, d the disbelief, and u the uncertainty about a certain statement. The three parameters are interrelated by the equation b + d + u = 1. Jøsang provides a mapping between the Bayesian approach and the belief approach by defining the following equations:
equations:
b=r
r+s+2,d=s
r+s+2,u=2
r+s+2 where u=0 .(4)
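The mapping in Eq. (4) makes opinions directly computable from evidence counts, as the following minimal Python sketch shows; note that with no evidence at all the opinion is maximally uncertain, and uncertainty shrinks as evidence accumulates.

```python
def opinion_from_evidence(r: int, s: int) -> tuple[float, float, float]:
    """Map r positive and s negative pieces of evidence to an opinion
    (b, d, u) with b + d + u = 1, cf. Eq. (4)."""
    n = r + s + 2
    return r / n, s / n, 2 / n


print(opinion_from_evidence(0, 0))  # (0.0, 0.0, 1.0): no evidence, full uncertainty
print(opinion_from_evidence(8, 2))  # (0.667, 0.167, 0.167) approximately
```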
Furthermore, he defines operators for combining (consensus) and recommending (discounting) opinions. In contrast to the belief model presented in [18], the consensus operator is not based on Dempster's rule. Moreover, the model also supports operators for propositional conjunction, disjunction, and negation.
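The operator definitions are not reproduced in this survey; the sketch below follows the forms given in Jøsang's papers (our transcription, with corner cases such as two fully certain opinions in the consensus left unhandled).

```python
Opinion = tuple[float, float, float]  # (belief, disbelief, uncertainty), summing to 1


def discount(w_ab: Opinion, w_bx: Opinion) -> Opinion:
    """A's opinion about x derived from B's recommendation: B's opinion
    about x is discounted by A's belief in B."""
    b1, d1, u1 = w_ab
    b2, d2, u2 = w_bx
    return b1 * b2, b1 * d2, d1 + u1 + b1 * u2


def consensus(w1: Opinion, w2: Opinion) -> Opinion:
    """Combine two independent opinions about the same statement.
    Division by zero occurs if both opinions are fully certain (u1 = u2 = 0)."""
    b1, d1, u1 = w1
    b2, d2, u2 = w2
    k = u1 + u2 - u1 * u2
    return (b1 * u2 + b2 * u1) / k, (d1 * u2 + d2 * u1) / k, (u1 * u2) / k


# A trusts B at (0.8, 0.1, 0.1); B holds opinion (0.6, 0.2, 0.2) about x.
print(discount((0.8, 0.1, 0.1), (0.6, 0.2, 0.2)))  # (0.48, 0.16, 0.36)
```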
In [19] it is shown how subjective logic can be used to model trust in the binding between keys and their owners in public key infrastructures. Other papers show how to use subjective logic for trust-based decision making in electronic commerce [20], and how the approach can be integrated into policy-based trust management [21].

Another approach modeling trust based on Bayesian probability theory is presented by Mui et al. in [8]; an approach based on belief theory is presented by Yu and Singh in [18].
3.6 ReGreT
ReGreT tries to model trust for small and mid-sized environments in electronic commerce [22]; the system is described in detail in [23, 24]. A main aspect of ReGreT is to include information which is available from the social relations between the interacting parties and their environments. In the considered environment, the relation between agents can be described as competitive (comp), cooperative (coop), or trading (trd).
The model deals with three dimensions of trust or reputation. The individual dimension is based on an agent's own experiences; the resulting trust values are called direct trust or outcome reputation. The social dimension is based on third-party information (witness reputation), the social relationships between agents (neighborhood reputation), and the social role of the agents (system reputation). The ontological dimension helps to transfer trust information between related contexts. For all trust values, a measure of reliability is introduced, which depends on the number of past experiences relative to the expected number of experiences (the intimate level of interaction) and the variability of the ratings.
The trust model uses trust or reputation values in the range of real numbers in [-1, 1]. Overlapping subintervals are mapped by membership functions to fuzzy set values, like "very good", which implicitly introduces semantics to the trust values. In contrast to the probabilistic and belief models, trust is formally not treated as the subjective probability that an agent will behave as expected in the next encounter; the interpretation of a fuzzy value like "very good" is up to the user or agent.

Since the fuzzy values are allowed to overlap, this also introduces a notion of uncertainty, because an agent can be, e.g., "good" and "very good" at the same time, each to a certain degree.
The inference of trustworthiness is based on intuitively interpretable fuzzy rules. For example, the trustworthiness agent a assigns to agent b with respect to providing information about agent c can depend on the relation between agents b and c: the social trust of a in information of b about c is "very bad" if the cooperation between b and c is high. As a fuzzy rule:

IF coop(b, c) IS high
THEN socialTrust(a, b, c) IS very bad.
Further information concerning risk evaluation and decision making is not
given.
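The following minimal Python sketch shows how overlapping membership functions on [-1, 1] can make a single trust value belong to several fuzzy values at once; the triangular shapes and breakpoints are our own illustration, not ReGreT's actual definitions.

```python
def triangular(x: float, left: float, peak: float, right: float) -> float:
    """Triangular membership function."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)


# Overlapping fuzzy values: a trust value can be "good" and "very good"
# at the same time, each to a certain degree.
MEMBERSHIP = {
    "good":      lambda x: triangular(x, 0.0, 0.5, 1.0),
    "very good": lambda x: triangular(x, 0.5, 1.0, 1.5),  # peak at the upper end
}

value = 0.8
for label, mu in MEMBERSHIP.items():
    print(label, round(mu(value), 2))  # good 0.4, very good 0.6
```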
4 Conclusion
In this paper we have provided a short survey of trust systems based on different approaches. Furthermore, we provided a set of criteria to analyze systems dealing with trust: at the top level by distinguishing between trust model, trust management, and decision making, and in detail for the main aspects of trust modeling. As our survey shows, it is possible to reason about trust models without especially addressing aspects of trust management, and the other way around. The comparison of trust models is still difficult, since they are often developed for different purposes and use different semantics for modeling trust. Furthermore, most authors define their own form of trust management to evaluate their trust models. The trust propagation chosen by Golbeck seems to be a simple yet accurate way to evaluate recommendations in social networks.
By analyzing the trust models, we came to the conclusion that models need to be able to represent a notion of uncertainty or confidence, since it is a main aspect of trust. The approach taken in ReGreT allows the definition of a subjective component for confidence, but it seems to be done in an ad-hoc manner. The approach taken by belief models binds uncertainty to belief and disbelief. In conjunction with the Bayesian approach, uncertainty depends directly on the amount of collected evidence, but it is not related to a subjective and context-dependent measure. For our future work we favor the Bayesian approach, since it allows the collected evidence to be integrated easily. Based on this approach, we will try to find a new way to derive uncertainty from the relation between the amount of collected evidence and the amount of expected evidence. By giving the user the opportunity to define an expected amount of evidence, uncertainty acquires a subjective and, most notably, a context-dependent notion.
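One possible reading of this idea, as a minimal sketch under our own assumptions (a linear form that is not part of any model surveyed here): uncertainty starts at 1 with no evidence and decreases as the collected evidence approaches the user-defined expected amount.

```python
def uncertainty(collected: int, expected: int) -> float:
    """Uncertainty derived from collected vs. expected evidence:
    1.0 with no evidence, 0.0 once the expected amount is reached.
    The linear form is an illustrative assumption."""
    if expected <= 0:
        return 0.0
    return max(0.0, 1.0 - collected / expected)


# In a context where the user expects 10 interactions before feeling
# confident, 4 collected interactions still leave high uncertainty.
print(uncertainty(collected=4, expected=10))  # 0.6
```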
References
1. McKnight, D.H., Chervany, N.L.: The meanings of trust. Technical report, Management Information Systems Research Center, University of Minnesota (1996)
2. Marsh, S.: Formalising Trust as a Computational Concept. PhD thesis, University
of Stirling (1994)
3. Jøsang, A., Ismail, R., Boyd, C.: A survey of trust and reputation systems for online service provision. Decision Support Systems (2005)
4. Grandison, T., Sloman, M.: A survey of trust in internet applications. IEEE
Communications Surveys and Tutorials 3(4) (2000)
5. Cahill, V., et al.: Using trust for secure collaboration in uncertain environments. IEEE Pervasive Computing 2(3) (2003) 52–61
6. Abdul-Rahman, A.: A Framework for Decentralised Trust Reasoning. PhD thesis,
University College London (2004)
7. Abdul-Rahman, A., Hailes, S.: Supporting trust in virtual communities. In: Proc.
of Hawaii International Conference on System Sciences. (2000)
8. Mui, L., Mohtashemi, M., Halberstadt, A.: A computational model of trust and
reputation for e-businesses. In: Proc. of the 35th Annual HICSS - Volume 7,
Washington, DC, USA, IEEE Computer Society (2002)
9. Teacy, W.T., et al.: Travos: Trust and reputation in the context of inaccurate
information sources. Autonomous Agents and Multi-Agent Systems 12(2) (2006)
10. Jøsang, A.: A logic for uncertain probabilities. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 9(3) (2001) 279–311
11. Grandison, T., Sloman, M.: Specifying and analysing trust for internet applications. In: I3E '02: Proc. of the IFIP Conference on Towards The Knowledge Society, Deventer, The Netherlands, Kluwer, B.V. (2002) 145–157
12. Ries, S.: Engineering Trust in Ubiquitous Computing. In: Proc. of Workshop on
Software Engineering Challenges for Ubiquitous Computing, Lancaster, UK (2006)
13. Golbeck, J.: Computing and Applying Trust in Web-Based Social Networks. PhD
thesis, University of Maryland, College Park (2005)
14. Kamvar, S.D., Schlosser, M.T., Garcia-Molina, H.: The EigenTrust algorithm for reputation management in P2P networks. In: Proc. of the 12th International Conference on World Wide Web, New York, USA, ACM Press (2003) 640–651
15. Golbeck, J., Hendler, J.: FilmTrust: Movie recommendations using trust in web-based social networks. In: Proc. of the Consumer Communications and Networking Conference (2006)
16. Gambetta, D.: Can we trust trust? In Gambetta, D., ed.: Trust: Making and
Breaking Cooperative Relations. Basil Blackwell, New York (1990) 213–237
17. Carbone, M., Nielsen, M., Sassone, V.: A formal model for trust in dynamic
networks. In: Proc. of IEEE International Conference on Software Engineering
and Formal Methods, Brisbane, Australia, IEEE Computer Society (2003)
18. Yu, B., Singh, M.P.: An evidential model of distributed reputation management.
In: Proc. of the 1st International Joint Conference on Autonomous Agents and
Multiagent Systems, New York, NY, USA, ACM Press (2002) 294–301
19. Jøsang, A.: An algebra for assessing trust in certification chains. In: Proc. of the Network and Distributed System Security Symposium, San Diego, USA (1999)
20. Jøsang, A.: Trust-based decision making for electronic transactions. In: Proc. of
the 4th Nordic Workshop on Secure IT Systems, Stockholm, Sweden (1999)
21. Jøsang, A., Gollmann, D., Au, R.: A method for access authorisation through delegation networks. In: 4th Australasian Information Security Workshop (Network Security) (AISW 2006). Volume 54 of CRPIT., Hobart, Australia, ACS (2006)
22. Sabater, J., Sierra, C.: Review on computational trust and reputation models.
Artificial Intelligence Review 24(1) (2005) 33–60
23. Sabater, J., Sierra, C.: Reputation and social network analysis in multi-agent
systems. In: Proc. of the 1st International Joint Conference on Autonomous Agents
and Multiagent Systems, New York, NY, USA, ACM Press (2002) 475–482
24. Sabater, J.: Trust and reputation for agent societies. PhD thesis, Institut d'Investigació en Intel·ligència Artificial (IIIA), Spain (2003)