9498 IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, VOL. 66, NO. 10, OCTOBER 2017
Novel Trust Framework for Vehicular Networks
Saneeha Ahmed, Student Member, IEEE, Sarab Al-Rubeaai, and Kemal Tepe , Senior Member, IEEE
Abstract—Dedicated short range communication is proposed for
vehicle to vehicle communications to learn about significant events
in the network from neighboring vehicles. However, these neigh-
bors may be malicious and report incorrect events in order to take
advantage of the system. The malicious nodes may also provide
incorrect recommendations about their peers in order to exert a
stronger influence on the receiver’s decision. Incorrect information
and malicious nodes render the system unreliable for safety and
emergency applications. In order to correctly identify the events
as well as malicious nodes, a novel trust framework is proposed in
this paper that studies all aspects of the trust in connected vehi-
cle (CV) to CV communications. The nodes iteratively learn about
the environment from received messages and then update the trust
values of their neighbors. Nodes are classified on the basis of their
trust values and reported events are also classified as true and false.
Nodes advertise their recommendation about trusted and malicious
neighbors. The proposed framework allows nodes to identify and
filter recommendations from malicious nodes, and to discern true
events. The performance of the proposed framework is evaluated
experimentally using false and true positive rates, event detection
probability and trust computation error. The proposed framework
identifies malicious nodes and true events with high probability of
more than 0.92 while keeping the trust computation error below
0.03.
Index Terms—Connected vehicles, event, history, honest, mali-
cious, recommendation, trust.
I. INTRODUCTION
The Dedicated Short Range Communication (DSRC) sys-
tem has been accepted as the default standard for Vehicle
to Vehicle (V2V) and Vehicle to Infrastructure (V2I) communi-
cations to realize Connected Vehicles (CVs) [1]. Several pilot
deployments of DSRC involving autonomous driving have also
been tested in the US and Europe [2]. One of the open issues
of DSRC is its security aspect including privacy, authentica-
tion and misbehavior detection. DSRC uses digital certificates
issued by a certificate authority in order to provide authentica-
Manuscript received May 26, 2016; revised October 21, 2016 and January
17, 2017; accepted April 28, 2017. Date of publication May 31, 2017; date
of current version October 13, 2017. This work was supported in part by the
Natural Sciences and Engineering Research Council of Canada through the
Discovery Grant program. The review of this paper was coordinated by Dr. Y.
Song. (Corresponding author: Kemal Tepe.)
S. Ahmed is with the Department of Electrical and Computer Engineer-
ing, University of Windsor, Windsor, ON N9B 3P4, Canada, on leave from
the NED University Karachi, Karachi 75270, Pakistan (e-mail: ahmed13m@
uwindsor.ca).
S. Al-Rubeaai is with the University of Windsor, Windsor, ON N9B 3P4,
Canada (e-mail: alrube@uwindsor.ca).
K. Tepe is with the Department of Electrical and Computer Engineering, University of Windsor, Windsor, ON N9B 3P4, Canada (e-mail: ktepe@uwindsor.).
Color versions of one or more of the figures in this paper are available online
at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TVT.2017.2710124
tion [3]. Since DSRC devices have limited processing power
[4]–[6], lightweight authentication algorithms have been pro-
posed. These algorithms are not sufficient to protect the CV
network from authenticated nodes which have been compro-
mised [7]. It is for this reason that complementary schemes to
enhance existing security features of CVs are being proposed
such as the trust management scheme in [8]. Trust management
enables nodes to learn about their surroundings from neighbor-
ing nodes and helps them to identify the veracity of reported
events and the honesty of nodes. However, trust-based systems
have their own vulnerabilities such as false recommendations
which can be exploited to manipulate the entire network for the
benefit of malicious nodes. That is why a comprehensive trust
framework must accurately identify the true nature of the nodes
and recognize the credibility of senders. Although trust models
for CVs have been proposed in [9]–[13], they do not address
a number of fundamental problems due to the distributed and
dynamic environment of CVs such as sparseness, lack of infor-
mation about events, large volumes of data, low tolerance for
overhead, short-lived connections, and lack of access to a cen-
tral authority. Having no prior knowledge about the events and
neighbors makes it extremely difficult for communicating nodes
to establish trust. Moreover, since nodes may not always behave
in a predictable manner, recent observations must be given more
importance in the trust estimation than historical observations.
Trust needs to be updated regularly and must include neighbors' opinions in order to compensate for the lack of evidence, which is the case in CV communications. In order to include neighbors' opinions or recommendations, a trust-based system must identify how credible those recommendations are before utilizing them.
The impact of incorrect recommendations has not been studied
in the perspective of CV communications. Hence, a more com-
prehensive and complete trust framework is necessary for the
viability and effectiveness of CVs to achieve their fundamental
objective of providing active safety and emergency messaging.
Trust proposals for CVs have been broadly classified into two
main categories: data centric and entity centric. Data Centric
Trust (DCT) is defined as the trust in the information received
from various sources and it is established on the basis of ev-
idence retrieved from these sources [9], [13]–[15]. Bayesian
Theorem (BT), Belief Propagation (BP), Dempster-Shafer The-
ory (DST) and weighted voting are commonly used techniques
to establish DCT [9], [16], [17]. DCT depends on Entity Cen-
tric Trust (ECT) which is derived from the past behavior of the
sender and its reputation among its neighbors [12], [13]. ECT is
the trust that one node places in another with respect to a certain
action such as routing or service provisioning. ECT is a combi-
nation of direct and indirect trust [10], [18]. Direct Trust (DT)
is based on a node’s own experience about another node. Indi-
rect Trust (IDT) is obtained from recommendations for a given
node. IDT is highly sensitive to the credibility of neighbors
who may have various motives to send incorrect recommen-
dations [19], [20], which is why a scheme which incorporates
recommendations solely from the trustworthy neighbors should
be developed. This scheme is called Recommendation Trust
(RT) and it is commonly used in establishing the trust in social
networks and e-commerce [21]–[23]. RT should be included in
CVs to have a better estimation of ECT.
The contribution of this paper is threefold: (1) a framework
that combines DCT, ECT and RT is proposed to complement the
existing DSRC security standard; (2) interaction of these three
trust mechanisms namely ECT, DCT and RT is investigated;
and (3) a comparison is done with a baseline trust mechanism
to demonstrate and verify the effectiveness of the proposed
framework. The innovation of this framework is to combine
event trust, effective node trust and recommendation trust. The
proposed framework provides a methodology by which to in-
vestigate the interactions of these three modules, which are in
turn used to establish trust and to identify true events. A node
identifies malicious senders using its own experience and rec-
ommendations generated by neighboring nodes. The superiority
of the framework stems from how it uses RT to identify and iso-
late malicious nodes. The framework is designed to maintain
a low communication overhead since it does not require addi-
tional messages for the trust computations. The only additional overhead is the recommendation data field, which appends a maximum of (N − 1) bits per message, where N is the number of neighboring nodes.
The performance of the complete trust framework has been
tested and is measured by using True Positive Rate (TPR) and
False Positive Rate (FPR), Event Detection Probability (EDP)
and Trust Computation Error (TCE).
In the remainder of this paper, the system model is provided in
Section II, the trust framework in Section III, network setup and
simulation results in Section IV, explanation of related works
in Section V, and Section VI provides conclusions and recom-
mendations for future studies. Pseudo-code of algorithms and
the baseline trust model are provided in the appendices.
II. SYSTEM MODEL
A. Network Model
The network model is designed to test the framework in sce-
narios with malicious nodes in which a number of nodes travel
together on a given route. On this route, a significant event such as an accident is generated. Nodes experiencing this event have to change their behavior to adapt to the consequences of the event, such as rapid deceleration and hard braking. At any
given time, there is only one true event in the network, but it may
change its location during the simulation. Initially, none of the
nodes has any knowledge about this event. Nodes experiencing
the event report the changes in their messages. Nodes that do
not experience the event do not alter their driving behavior but
may learn of it through the received messages.
In simulation studies, it is assumed that all nodes are equipped
with GPS as well as DSRC and have the same capabilities
in terms of memory and processing power. A node typically
generates 10 packets per second, but this rate can be manipulated
by the malicious nodes.
B. Attack Model
In order to simulate the malicious nodes, two types of attacks,
namely attack on event information and recommendation attack,
will be generated in an on-off manner.
1) Attack on Event Information: A malicious node may re-
port an arbitrary event and may modify its messages accordingly.
This attack model is similar to [24] in which the malicious node
injects a small amount of false information corresponding to ar-
bitrary events. In some scenarios, malicious nodes switch their
behavior from being honest to malicious in order to remain un-
detected. This type of malicious behavior is known as an on-off
attack [25]. Hence, distinguishing a malicious node from an
honest one, which may sporadically send erroneous messages
due to sensor faults, can be difficult. Malicious nodes may co-
ordinate and report a common false event. In order to have a
greater impact in the network, malicious nodes may not report
a true event when it is being experienced.
It is assumed that at any given time there is only one type of
event reported in messages. However, if there is more than one
event, the proposed scheme can be expanded for each event as
if they are independent.
2) Recommendation Attack: The malicious nodes can col-
lude by providing wrong recommendations about other nodes.
If the recommendation is represented as a binary value, the
malicious nodes flip the value in order to improve their own
reputation or to support other malicious nodes by ballot stuffing
and badmouthing honest nodes. This kind of attack is not lim-
ited to nodes reporting the false events. In fact, nodes reporting
true events may send false recommendations about other nodes
in order to have a stronger influence on the network. However,
in this study, it is assumed that only the malicious nodes send
false information as well as false recommendations.
III. THE PROPOSED TRUST FRAMEWORK
The objectives of the proposed trust framework are to determine true events and to identify false recommendations and malicious senders from the received messages, which contain (a) the events being experienced by the nodes and (b) the recommendations about other nodes. The event-related information is already included in DSRC messages. However, the existing message
format needs to be modified to include recommendations. The
proposed trust framework provides these modified messages to
the following modules: Event Trust (EvT), Effective Node Trust
(ENT) and Recommendation Trust (RT). The proposed frame-
work provides a methodology to investigate the interactions of
these three modules, which are used to establish trust and to
identify true events.
Modules of the framework are illustrated in Fig. 1. The RT
module in the framework utilizes experiences of nodes about
their neighbors. With that, the node can learn the behavior of
Fig. 1. The trust framework.
senders and identify malicious nodes accurately. The RT module
determines the credibility of recommendations by using simi-
larity and consistency of information received in consecutive
messages from the same sender. The ENT module establishes
the trust in senders which report the correct events. The ENT
module updates the trust value of nodes based on the receiver’s
own experience as well as recommendations of its neighbors.
The updated trust is used to determine whether reported events
are true or not. The EvT module determines, in its decision logic, whether a reported event is actually occurring. The correct events may help drivers to take necessary action.
These three modules naturally depend on each other. For
example, the nodes sending correct information are trusted, and
the information coming from these nodes is likely to be true,
therefore can be trusted. However, if the nodes are defamed,
or their trustworthiness is misjudged, then their trust value may
decrease. The detailed working of each module will be discussed
in the following subsections.
A. Recommendation Trust
RT is derived from similarity of recommendations received
from nodes in the same vicinity as well as consistency of a
sender’s recommendations. RT is similar to feedback credibility
proposed by [19], [26]; however, RT in our framework considers
the consistency of information.
Once a node forms an opinion about its neighbors, it includes this opinion in its broadcast messages as binary recommendations, where a malicious node is represented as "0" and an honest node is represented as "1". The evaluating node, E, computes the Jaccard similarity [27] between its own opinions and received recommendations if its own opinions have been established. Otherwise, E computes the similarity between the recommendation of the node X and recommendations from nodes which are geographically closer to X. Since geographically closer nodes are likely to have interacted with similar sets of nodes, they may have similar information [4], [28].
Fig. 2. List of recommendations from neighbors sent to evaluator E.
Fig. 2 is provided to illustrate RT calculations. In this figure, E is new and has not established its own opinion. Then, E receives a set of recommendations from its neighboring nodes. Now E wants to establish RT about the node X. Therefore, E looks up the list of its own neighbors and identifies a subset of its neighbors, denoted as Y = {y_1, y_2, ..., y_d}, which are geographically closer to X. E then computes the Jaccard similarity S(X, y_c) between the set of recommendations received from X, denoted as M_X, and the sets of recommendations received from {y_1, y_2, ..., y_d}, denoted as M_{y_1}, ..., M_{y_d}. Thus, the similarity score between X and y_c ∈ Y is given by

    S(X, y_c) = \frac{|M_X \cap M_{y_c}|}{|M_X| + |M_{y_c}| - |M_X \cap M_{y_c}|}.   (1)

The average similarity of X with its d neighbors at time n is taken as

    S_n(X) = \frac{\sum_{c=1}^{d} S(X, y_c)}{d}.   (2)

In a case where E has its own opinion, only the similarity between E and X is computed and S_n(X) = S(X, E). The similarity score S_n(X) identifies the credibility of the node X at time instant n. The overall similarity score of X until n is given by

    Sim_n(X) = \lambda \cdot S_n(X) + (1 - \lambda) \cdot Sim_{n-1}(X),   (3)

where 0 < λ < 1.
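To make the similarity computation concrete, eqs. (1)–(3) can be sketched as follows. This is a minimal illustration, not the authors' implementation; in particular, representing a recommendation set M_X as a mapping from neighbor id to binary opinion, and counting agreement on (node, opinion) pairs, is our assumption about the structure of M_X.

```python
def jaccard(rec_a, rec_b):
    """Jaccard similarity, eq. (1), between two recommendation sets.

    rec_a, rec_b: dicts mapping a neighbor id to a binary opinion
    (1 = honest, 0 = malicious).  Treating M_X as a set of
    (node, opinion) pairs is an assumed representation.
    """
    a, b = set(rec_a.items()), set(rec_b.items())
    inter = len(a & b)
    union = len(a) + len(b) - inter
    return inter / union if union else 0.0

def avg_similarity(rec_x, neighbor_recs):
    """S_n(X), eq. (2): mean Jaccard similarity of X's recommendations
    with those of the d geographically closer neighbors y_1..y_d."""
    return sum(jaccard(rec_x, r) for r in neighbor_recs) / len(neighbor_recs)

def update_similarity(s_n, sim_prev, lam=0.6):
    """Overall similarity score Sim_n(X), eq. (3): an exponentially
    weighted average, with lambda = 0.6 as in Table I."""
    return lam * s_n + (1 - lam) * sim_prev
```

A node X that agrees with its neighbors' opinions keeps a similarity near 1, while a colluding node that badmouths honest neighbors drifts toward 0.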
Although the similarity score provides a reasonable measure to judge the trustworthiness of the recommendations, the consistency of X must also be considered in RT, if X is E's only neighbor. The consistency η_n(E, X) is the ratio of the recommendations which have changed between time n−1 and n (say ρ) to the total number of recommendations of X at time n (say |M_X|). Let the values of the recommendations from X at time n−1 be u_{n−1} and the values at time n be u_n; then ρ can be computed as the XOR between u_n and u_{n−1}, hence η_n(E, X) is given by

    \eta_n(E, X) = \frac{\sum_{c=1}^{|M_X|} u_{n-1}(c) \oplus u_n(c)}{|M_X|} = \frac{\rho}{|M_X|}.   (4)

While computing the consistency, it is considered that X may always make new observations and these new observations do not affect η_n(E, X). However, if X stops providing recommendations for a node, this is counted as a change. The node E combines the previous consistency and the newly calculated value as

    \bar{\eta}_n(E, X) = \phi \cdot \eta_n(E, X) + (1 - \phi) \cdot \bar{\eta}_{n-1}(E, X),   (5)

where 0 < φ < 1.
The total RT of X at the node E is labeled as T^r_n(E, X), which combines the similarity and consistency of information from X. T^r_n(E, X) indicates how much E trusts X and is given by

    T^r_n(E, X) = \theta_1 \cdot Sim_n(X) - \theta_2 \cdot \bar{\eta}_n(E, X) + \theta_3 \cdot T^r_{n-1}(E, X),   (6)

where θ_1, θ_2, θ_3 ∈ [0, 1] and θ_1 + θ_2 + θ_3 = 1. θ_1, θ_2 and θ_3 are selected to reflect the relative significance of the similarity, consistency and previous trust, respectively. For example, if the similarity is more important, then θ_1 should be greater. The value of T^r_n(E, X) determines whether X is malicious or not. Information from malicious recommenders is excluded from ENT calculations; with this, the network is protected from malicious recommendations. The algorithm used for the implementation of RT is provided in Appendix B. In the following subsection, ENT is discussed.
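The consistency check of eqs. (4)–(6) can be sketched in the same style. This is a hedged illustration: the dict-of-opinions representation and the default weights (taken from Table I) are assumptions, while the treatment of new versus dropped recommendations follows the rules stated above.

```python
def consistency(u_prev, u_curr):
    """eta_n(E, X), eq. (4): fraction of X's recommendations that
    flipped (XOR) between consecutive messages.

    Per the text, a brand-new observation in u_curr is not a change,
    but a recommendation that X stops providing is counted as one.
    """
    total = len(u_curr) or 1  # |M_X| at time n
    flips = sum(1 for node, v in u_prev.items()
                if u_curr.get(node, 1 - v) != v)  # missing => change
    return flips / total

def update_consistency(eta_n, eta_prev, phi=0.8):
    """bar-eta_n(E, X), eq. (5): EWMA of consistency (phi from Table I)."""
    return phi * eta_n + (1 - phi) * eta_prev

def total_rt(sim_n, eta_bar_n, rt_prev, th1=0.2, th2=0.4, th3=0.4):
    """T^r_n(E, X), eq. (6): similarity rewards trust, inconsistency
    penalizes it, and previous trust carries over (weights sum to 1)."""
    assert abs(th1 + th2 + th3 - 1.0) < 1e-9
    return th1 * sim_n - th2 * eta_bar_n + th3 * rt_prev
```

Note the minus sign on the consistency term: a recommender that keeps flipping its opinions loses RT even if its recommendations currently resemble its neighbors'.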
B. Effective Node Trust
ENT is established by the evaluating node E about the sender k using direct and indirect trust of k. Direct trust is derived from E's own experience with k. Indirect trust is derived from the recommendations about the node k. In establishing ENT, a model called Logistic Trust (LT) [29] is used. LT relies on a combination of E's experience and neighbors' recommendations to establish ENT for k. In the following subsection, LT will be discussed in detail.
Logistic Trust (LT): This model uses three parameters, namely dissatisfaction, flag, and expectation, to update ENT for the sender k. Dissatisfaction, δ_n(E, k), measures the contradictory information provided by k. Flag, f_n(E, k), indicates whether the sender has recently been identified as malicious. Expectation indicates the trust of the neighbors regarding this sender.
1) Dissatisfaction: Let the node k send a_n(k) incorrect messages out of a total of v_n(k) messages. The current dissatisfaction δ_n(E, k) in time slot n is given by

    \delta_n(E, k) = \frac{a_n(k)}{v_n(k)}.

The total dissatisfaction in time slot n is labeled as \bar{\delta}_n(E, k) and is given by

    \bar{\delta}_n(E, k) = \frac{\sum_{i=1}^{n} \delta_i(E, k)}{n}.   (7)
The dissatisfaction given in (7) is compared against a threshold β to declare whether a node is malicious based on the receiver's own experience. If the node is malicious, a flag, denoted by f_n(E, k), is raised for this node. The flag can be lowered after observing consistent honest behavior from this sender. However, the threshold to raise the flag is smaller than the one to lower it. These unequal thresholds allow a node to remember malicious behavior for a longer time. The threshold to raise the flag, β, is chosen adaptively to adjust the weight of the receiver's experience in ENT.
2) Expectation: Binary recommendations received from E's neighbors indicate whether they trust k. A binary "0" indicates that the node is not trusted. The expectation ET^{LT}_n(E, k) is the mean of the recommendations about k in the time slot n, which is given by

    ET^{LT}_n(E, k) = \frac{r_n(k) + 1}{r_n(k) + s_n(k) + 2},   (8)

where s_n(k) represents the number of good recommendations (1's) received for node k and r_n(k) is the number of bad recommendations (0's) received for node k. In order to compensate for the uncertainty in the received information, the expectation in (8) adds 1 to each of r_n(k) and s_n(k).
ENT is calculated using the LT [29] function, which captures all the components needed to obtain the trust measure and is given by

    t_n(E, k) = \frac{1}{1 + e^{-(\vec{X} \cdot \vec{B} + B_0)}},   (9)

where

    \vec{X} = \{\bar{\delta}_n(E, k), ET^{LT}_n(E, k), f_n(E, k), t_{n-1}(E, k)\}.

The weight vector \vec{B} is calculated using the linear least squares method. The previous trust of node k is t_{n−1}(E, k) and its default value is 0.3, which is assigned to new nodes whose trustworthiness is not known. The trust value of 0.3 is selected to allow the framework to function even with a dishonest majority and provides the best compromise in detecting malicious and honest nodes. This was carefully tested in our previous work [29]. If the ENT of k from (9) is high (i.e., 0.6 or greater), it is used to accept future information from k. The algorithm for updating ENT under LT is provided in Appendix B.
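Under the stated definitions, the LT update of eqs. (7)–(9) can be sketched as below. The weight vector B is fit by least squares in the paper and is therefore just an input here; the feature ordering and the numerator of eq. (8) follow the text as printed, and the bias B_0 = −0.85 comes from Table I.

```python
import math

def dissatisfaction(a_n, v_n):
    """delta_n(E, k): share of incorrect messages among v_n(k) received."""
    return a_n / v_n if v_n else 0.0

def expectation(r_bad, s_good):
    """ET^LT_n(E, k), eq. (8), with the +1/+2 smoothing that accounts
    for uncertainty when few recommendations have been received."""
    return (r_bad + 1) / (r_bad + s_good + 2)

def logistic_trust(features, weights, b0=-0.85):
    """t_n(E, k), eq. (9): logistic function of the feature vector
    X = (mean dissatisfaction, expectation, flag, previous trust).
    The weight vector B is assumed to be supplied after offline
    least-squares fitting, as in the paper."""
    z = sum(x * w for x, w in zip(features, weights)) + b0
    return 1.0 / (1.0 + math.exp(-z))
```

With all features and weights at zero, the sigmoid evaluates to roughly 0.3, matching the default trust the paper assigns to unknown nodes.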
C. Event Trust
EvT allows the node to assess whether the reported event
is true or not. Nodes receive event reports from other nodes.
Each message from a node contains information that indicates whether it is experiencing an event or no event. Upon receiving information about an event, the evaluating node creates two bins, namely bin-1 and bin-0. Bin-1 accumulates trust values of nodes reporting the event, whereas bin-0 accumulates trust values of nodes reporting no event. Each node's report is recorded once regardless of the number of received messages from this node.
Let us assume that node E has P senders in bin-1 and Q senders in bin-0, with the trust value of each sender obtained from (9) as t_n(E, k) at time n. The total numbers of senders in bin-1 and bin-0 are |P| and |Q|, respectively. The average trust of the bins is given by

    T^1_n(E) = \frac{\sum_{i \in P} t_n(E, i)}{|P|}; \quad T^0_n(E) = \frac{\sum_{j \in Q} t_n(E, j)}{|Q|}.   (10)
The weight, w, of the ith sender in P, and that of sender j in Q, is calculated as

    w_n(E, i) = \frac{t_n(E, i)}{T^1_n(E)}; \quad w_n(E, j) = \frac{t_n(E, j)}{T^0_n(E)}.   (11)

EvT is the weighted mean of the reports in bin-1 and is calculated by

    T^{avg1}_n(E) = \frac{\sum_{i \in P} w_n(E, i) \cdot t_n(E, i)}{|P|}.   (12)

Similarly, the total trust for the no-event bin-0 is given by

    T^{avg0}_n(E) = \frac{\sum_{j \in Q} w_n(E, j) \cdot t_n(E, j)}{|Q|}.
The event is considered to have occurred if the following decision rule holds:

    T^{avg1}_n(E) > T^{avg0}_n(E) \quad \text{if } |Q| > 0, \qquad T^{avg1}_n(E) > minWt \quad \text{if } |Q| = 0,   (13)

where 0 ≤ T^{avg} ≤ 1 and minWt is a threshold. This threshold is selected on the basis of the trust values of the majority of nodes. For instance, if the majority has trust values around 0.3, then the threshold is 0.3. The threshold minWt can be selected adaptively based on the current trust of nodes; however, in this study a static value of 0.3 is used.
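The bin logic of eqs. (10)–(13) amounts to a trust-weighted vote; a compact sketch follows, with our reading of the partially extracted comparison in (13) as "bin-1 outweighs bin-0".

```python
def event_decision(bin1_trusts, bin0_trusts, min_wt=0.3):
    """Decision rule (13): accept the event if the trust-weighted mean
    of event reporters (bin-1) exceeds that of no-event reporters
    (bin-0), or exceeds the minWt threshold when bin-0 is empty.

    bin1_trusts / bin0_trusts: lists of t_n(E, k) values, one per
    distinct sender, as accumulated by the EvT module.
    """
    def weighted_mean(trusts):
        avg = sum(trusts) / len(trusts)        # bin average, eq. (10)
        weights = [t / avg for t in trusts]    # per-sender weight, eq. (11)
        return sum(w * t for w, t in zip(weights, trusts)) / len(trusts)  # eq. (12)

    if not bin1_trusts:
        return False            # nobody reported the event
    t1 = weighted_mean(bin1_trusts)
    if not bin0_trusts:
        return t1 > min_wt
    return t1 > weighted_mean(bin0_trusts)
```

A few highly trusted reporters can therefore outvote many low-trust deniers, which is the intended effect of weighting each report by its sender's trust.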
In order to test the effectiveness of the proposed framework, a CV scenario under the system model described in Section II is simulated.
Performance Parameters
The performance of the complete trust framework has been
studied using EDP, TPR, FPR and TCE. Some of these parame-
ters have been discussed in the literature while others have been
derived by us. For this reason, definitions of the performance
parameters are given as follows:
Event Detection Probability (EDP): EDP is defined as the ratio of the number of true events identified by the decision logic to the total number of events learned by the decision logic.
True Positive Rate (TPR): TPR is defined as

    TPR = \frac{P_{Mal|Mal}}{P_{Mal|Mal} + P_{Hon|Mal}},   (14)
TABLE I
SIMULATION PARAMETERS

Parameter        Description                                          Default Value
α                Weight of Current Satisfaction                       0.6
γ                Weight of Direct Trust                               0.6
λ                Weight of Current Similarity                         0.6
β                Threshold to Declare Malicious                       0.04
θ_1              Weight of Total Similarity                           0.2
θ_2              Weight of Total Consistency                          0.4
θ_3              Weight of Previous Trust                             0.4
φ                Weight of Current Consistency                        0.8
d                Number of Neighbors to Compare Recommendation        4
–                Trust Threshold to Declare Malicious                 0.2
B_0              Bias for ENT in LT                                   −0.85
MaliciousNodes   Percentage of Malicious Nodes in the System          80%
p                Percentage of Messages Carrying False Information    40%
eventDuration    Time for which the Event is Valid                    75 s
simulationTime   Total Time for the Experiment                        200 s
where P_{Mal|Mal} is the probability of detecting a node as malicious given the node is malicious, and P_{Hon|Mal} is the probability of detecting a node as honest given the node is malicious.
False Positive Rate (FPR): FPR is defined as

    FPR = \frac{P_{Mal|Hon}}{P_{Mal|Hon} + P_{Hon|Hon}},   (15)

where P_{Mal|Hon} is the probability of detecting a node as malicious given the node is honest and P_{Hon|Hon} is the probability of detecting a node as honest given the node is honest.
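In a simulation, these rates are naturally estimated from counts of classification outcomes; a small helper illustrates this (the count-based estimation is our assumption, as the paper states the definitions in terms of probabilities):

```python
def classification_rates(mal_as_mal, mal_as_hon, hon_as_mal, hon_as_hon):
    """TPR and FPR from eqs. (14) and (15), estimated from the four
    possible outcomes of classifying malicious and honest nodes."""
    tpr = mal_as_mal / (mal_as_mal + mal_as_hon)   # eq. (14)
    fpr = hon_as_mal / (hon_as_mal + hon_as_hon)   # eq. (15)
    return tpr, fpr
```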
Two metrics, TPR and FPR, are computed for the on-off
attack on event-information as well as on recommendations.
Trust Computation Error (TCE): TCE is computed as mean
squared difference in the computed trust value and the expected
(or true) trust value of the node. In the simulation studies, this
parameter is calculated for the on-off attack on the event infor-
mation.
The performance of the proposed framework with the LT model is tested and compared against the Satisfaction-based Trust (SatT) model in the ENT module. SatT is derived from techniques proposed in the literature [19] which average the successful interactions in order to predict the future behavior of the node.
A description of SatT is given in Appendix A in order to illustrate
our implementation for readers who would like to understand
this model.
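For orientation, a minimal reading of the SatT baseline can be sketched as follows, assuming the satisfaction weight α and the direct-trust weight γ from Table I play the roles their names suggest; the full model is specified in Appendix A.

```python
def satisfaction(sat_curr, sat_prev, alpha=0.6):
    """Running satisfaction: blend of the current slot's share of
    satisfying interactions with the accumulated value (alpha from
    Table I; the exact recursion is our assumed reading of SatT)."""
    return alpha * sat_curr + (1 - alpha) * sat_prev

def satt_trust(direct, indirect, gamma=0.6):
    """SatT node trust: weighted combination of direct experience and
    neighbors' recommendations (gamma from Table I)."""
    return gamma * direct + (1 - gamma) * indirect
```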
IV. SIMULATION PARAMETERS AND RESULTS
Simulation studies are undertaken to verify validity of the
trust framework. In order to create scenarios of nodes traveling
together for a period of time, a circular road with a radius of
about 475 meters is simulated. While vehicles are traveling, an
event, typically an accident or a road hazard, is generated at a
random location. Nodes closer to the event experience it first
and others may eventually observe it later. As soon as the event
is experienced by nodes, they start including parameters with
this event in their broadcast messages. Nodes that are not close
to the event receive such information through messages. Events
Fig. 3. Trust evolution with respect to percentage anomaly for a 100-node network. (a) Trust evolution of LT. (b) Trust evolution of SatT using the same legends as (a).
Fig. 4. Interaction between RT, ENT and EvT modules. (a) Impact of recom-
mendations on event detection probability of LT model. (b) Impact of recom-
mendations on trust computation error in LT model.
last for 75 simulation seconds (s) and after this time period,
a new event is generated at a randomly selected location. The
nodes normally travel at a speed of 24 to 29 meters per second (i.e., 87 to 104 kilometers per hour). When a node experiences the event, it decelerates at 5 m/s^2 at that location. Based on the model described, if there is an accident, malicious nodes keep transmitting regular traveling speeds in their messages even though they have slowed down. However, these nodes send tampered messages at a certain rate, called the percentage anomaly, denoted as p.
Another parameter to investigate in the trust framework is
the impact of the number of nodes on the road (i.e., in the
network). In the simulations, a network of 20, 30, 50, 80 and
100 nodes is used. Nodes only receive messages and do not
update the trust values of their neighbors for the first 10 seconds
of the simulation. When this initialization period is over, the
nodes process the messages using the EvT and ENT modules
of the framework and formulate their recommendations. This
simulation model was created using OMNET++ discrete event
network simulator [30]. Important simulation parameters are
provided in Table I.
Simulation Assumptions
In addition to the simulation parameters, some assumptions made in this work are discussed as follows. In order to test the framework, it is assumed that there is only one true event in the network at any given time. This event must be observed by at least one node, which makes its observation based on its direct experience. Nodes observing the event can only inform their neighbors within a 300 m range, which is the standard DSRC range. Nodes move on a predefined route. The identification of a node does not change during the simulation time.
Thus our framework does not necessarily provide privacy and
is not designed for anonymous messages. In order to update
the trust of the senders based on the event report, both direct
and indirect experiences are used. However, the framework is
capable of updating trust on the basis of only direct experience
or only indirect experience. In this paper, it is assumed that the
malicious nodes send false information with a fixed probability
p. There are three types of misbehavior that a malicious node can exhibit: (a) false event information, (b) false recommendation (both badmouthing and ballot stuffing), and (c) the on-off attack.
Moreover, this paper only presents a defense strategy in order
to protect a node from specific attacks.
A. Impact of Percentage Anomaly on Trust
First, the impact of percentage anomaly, p, is studied in the
network of 100 nodes with 80 malicious nodes. The percentage
anomaly of these 80 nodes varied between 20–100%. The evo-
lution of trust values at one of the receiving nodes is plotted at
intervals of reception of 50 packets, which is called a time slot.
The initial trust value for all the nodes is 0.3, except for the receiving node, which assigns a value of 1.0 to itself.
The evolution of trust in both LT and SatT models is plotted
in Fig. 3. In these plots, nodes always had their own expe-
rience with the sender and may have received recommenda-
tions. The evolution of trust values under LT are consistent and
monotonically increasing for honest nodes and monotonically
decreasing for malicious nodes. Although the trust values of
Fig. 5. Average trust with respect to percentage anomaly for 100 nodes network, at 75s and 150s intervals. (a) Average trust of honest nodes at 75s. (b) Average
trust of honest nodes at 150s. (c) Average trust of malicious nodes at 75s. (d) Average trust of malicious nodes at 150s.
honest nodes are increasing gradually, those of malicious nodes
are decreasing rapidly. Separation between the trust value of ma-
licious and honest nodes is considerable under LT. Therefore,
the probability of misclassification is low. This feature allows
the framework to identify malicious nodes earlier than honest
nodes and this is a desirable feature for a trust-based system.
However, the same evolution is not as consistent in SatT, which makes it less suitable for malicious node classification. Another observation is that when p is small, SatT has less evidence to identify nodes as malicious, which is why its convergence is slightly slower than that of LT when p is large. In order to evaluate the impact of the recommendation trust module, the framework has been tested with the LT model under the following three scenarios:
(1) All nodes send their honest opinions about their neighbors.
This means that although malicious nodes report false events,
they do not modify their recommendations hence no recommen-
dation attacks are launched. (2) Malicious nodes not only report
false event but also send false recommendations. However, in
order to demonstrate the impact of recommendation attacks, the
framework does not filter those recommendations. (3) The sce-
nario 2 is repeated but false recommendations are filtered by the
framework.
In Fig. 4, the event detection probability decreases in the case
of recommendation attacks; when the recommendation trust module
is used to filter the malicious recommenders, EDP improves
significantly. Fig. 4 also shows that the trust computation error
increases with increasing percentage anomaly in the case of
recommendation attacks; when the malicious recommenders are
filtered, the error falls back to the level of the
no-recommendation-attack scenario. By using the recommendation
trust module and filtering the malicious recommendations, both
EDP and TCE improve and approach the no-recommendation-attack
scenario.
Another set of experiments was carried out in order to demonstrate
how accurate the trust estimation is during the simulation. To do
so, the average trust at the end of each event was calculated for
the framework under LT and SatT. Results of these experiments are
provided in Fig. 5. During the first event, nodes have no prior
history with neighboring nodes, and the framework under both LT
and SatT starts its evaluation of trust values and events from no
prior experience. In the second event, however, the framework
utilizes the trust values obtained in the first event. For
example, if a node is identified as malicious during the first
event, its trust value is already low at the beginning of the
second event. Fig. 5(a) and (b) provide the average trust values
of honest nodes with increasing p at 75 s and 150 s, respectively.
In both figures, the average trust values obtained under LT are
reflective of the nodes' actual behavior. LT can successfully
identify honest nodes and assign them a high trust value even in
the first event, and can further increase their trust in the
second event. SatT, however, maintained a lower trust value for
honest nodes than LT. In Fig. 5(c) and (d), the malicious nodes
are assigned a very low trust value under LT in the first event,
which is maintained throughout the simulation regardless of p.
SatT, however, calculated high trust values for malicious nodes
when p is small, because it works by averaging the messages deemed
to be correct. This averaging does not capture the true nature of
the nodes, since a node can hide its true intentions by only
occasionally sending wrong messages. For example, when p is 20%,
trust values of malicious nodes vary between 0.6 and 0.8 under
SatT.
Fig. 6 provides a comparison of the TPR and FPR of LT and SatT.
When p is low, LT performs significantly better than SatT. The TPR
Fig. 6. True and false positive rates for event reporting task with respect to
percentage anomaly. (a) True positive rates for ENT. (b) False positive rates for
ENT.
Fig. 7. EDP and TCE against percentage anomaly. (a) EDP with respect to
percentage anomaly. (b) TCE with respect to percentage anomaly.
of LT is very close to 1.0, since LT identifies malicious nodes
very effectively by combining recommendation and event trust
information in the decision-making process. When the malicious
activity gets higher, the framework under both LT and SatT
exhibits similar performance, since increased malicious activity
helps SatT in decision making, while LT is not much affected
by reduced malicious activity. The FPR of LT is maintained at a
Fig. 8. True and false positive rates for RT with respect to percentage anomaly.
(a) True positive rates for RT. (b) False positive rates for RT.
low value regardless of p. However, SatT generates a high FPR
when p is low.
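As a concrete reading of these metrics, the TPR and FPR for the malicious-node detection task can be computed as follows. This is an illustrative sketch, not the paper's implementation: the helper name and the per-node boolean lists are assumptions, with "positive" meaning "classified as malicious".

```python
def rates(predicted, actual):
    """Compute (TPR, FPR) for malicious-node detection.

    predicted[i] / actual[i]: True if node i is classified / truly malicious.
    TPR = TP / (TP + FN), FPR = FP / (FP + TN).
    """
    tp = sum(1 for p, a in zip(predicted, actual) if p and a)
    fp = sum(1 for p, a in zip(predicted, actual) if p and not a)
    fn = sum(1 for p, a in zip(predicted, actual) if not p and a)
    tn = sum(1 for p, a in zip(predicted, actual) if not p and not a)
    tpr = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return tpr, fpr
```

A perfect detector yields TPR = 1.0 and FPR = 0.0, which is the behavior LT approaches in Fig. 6.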
Fig. 7(a) and (b) show EDP and TCE for both LT and SatT.
These figures illustrate whether the framework can identify
events correctly from the received messages. For example, if
a node receives a message indicating an event (e.g., an accident),
it determines whether the reported event is true or false.
The EDP of LT is always better than that of SatT, because the trust
values in the ENT module are reliable in identifying the true
events in the network. Fig. 7(b) shows the TCE of LT and SatT.
TCE reflects the error between the trust value computed by the
framework and the actual trust value of the node. For example,
if the trust value of an honest node is 1.0 and the framework
evaluates it as 0.9, then the TCE is 0.1. A smaller TCE indicates
that the framework determines the true nature of the node more
accurately. The TCE of LT is low and does not vary with p, which
indicates that even at low malicious activity LT can identify the
true nature of the nodes accurately. SatT, however, improves with
increasing p, since it collects more evidence about the malicious
activity and performs better in identifying the true nature of
nodes.
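The TCE example above can be sketched as follows. The mean-absolute aggregation over nodes is our assumption; the paper only defines the per-node error, as in the 1.0-versus-0.9 example.

```python
def trust_computation_error(computed, actual):
    """Average absolute gap between computed and ground-truth trust values.

    A single honest node with actual trust 1.0 evaluated at 0.9
    contributes an error of 0.1, matching the paper's example.
    """
    assert len(computed) == len(actual) and computed
    return sum(abs(c - a) for c, a in zip(computed, actual)) / len(computed)
```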
Fig. 8 provides results of simulations in which malicious nodes
manipulate the recommendations about other nodes in their
messages. For example, as p increases, malicious nodes hide
the true nature of other nodes, recommending an honest node
as malicious or a malicious node as honest. In the framework,
these recommendations are processed in the RT module, and the
output of this module is given to the ENT module. The TPR of both
LT and SatT is high when the malicious activity is high, because
the misbehavior becomes more obvious. Under LT, TPR is low at low
percentage anomaly, but honest nodes are not misclassified as
malicious, since
Fig. 9. Standard deviations in trust of honest and malicious nodes at 75 s and 150 s. (a) Deviations in trust of honest nodes at 75 s. (b) Deviations in trust of honest nodes at 150 s. (c) Deviations in trust of malicious nodes at 75 s. (d) Deviations in trust of malicious nodes at 150 s.
FPR is close to zero. The RT module does not perform well
with SatT, since SatT does not clearly distinguish between the
malicious and honest nodes in the ENT module, which in turn
affects the recommendations.
B. Impact of Number of Nodes
In this section, the impact of the number of nodes on the
performance of LT as well as SatT is examined. In this part of the
experimentation, 80% of the nodes were malicious, and they were
always malicious (i.e., p = 100%). This parameter is selected
to test the framework in one of the worst-case scenarios. The
number of nodes varies from 20 to 100; based on this parameter,
in a 20-node scenario there were 16 malicious nodes and 4
honest nodes.
In order to study the accuracy of the trust assignment under
LT and SatT for a given network size, the deviations of the trust
values were captured after the first and the second event (i.e., at
75 s and 150 s). Fig. 9 illustrates the trust deviations for the honest
and malicious nodes. The deviations in trust of the honest nodes
after events 1 and 2 are shown in Fig. 9(a) and (b), respectively.
The honest nodes under LT have smaller deviations, which are further
reduced in the second event; SatT, on the other hand, has higher
deviations in the trust values of the honest nodes. Trust deviations
for the malicious nodes are illustrated in Fig. 9(c) and (d). LT shows
minimal deviations even in the first event, whereas the deviations
under SatT are higher. This is because the trust in SatT does not
decrease as rapidly as in LT, which is why the deviations in trust
of the malicious nodes are still considerable under SatT even in
the second event.
In order to understand how the framework is affected by
false recommendations, two scenarios are simulated. In the first
scenario, all recommendations are true but all events are
falsely reported by the malicious nodes. In the second scenario,
both the recommendations and the events are falsely reported
by the malicious nodes.

Fig. 10. EDP and TCE when nodes launch an on-off attack with false information and recommendations. (a) EDP with respect to number of nodes. (b) TCE with respect to number of nodes.

Fig. 10(a) and (b) provide the EDP and TCE of LT and SatT in these
scenarios. The performance of LT under the recommendation-attack
and no-recommendation-attack scenarios is very similar; the impact
of the recommendation attack is visible only when the number of
nodes is higher.

Fig. 11. True and false positive rates with respect to number of nodes for RT. (a) True positive rate for RT. (b) False positive rate for RT.

This finding indicates that the framework using LT can handle the
recommendation attack very well. The
framework using SatT performs significantly worse in this attack
scenario. The TCE results shown in Fig. 10(b) point to
a similar conclusion: the error in the trust values of nodes
under LT is similar in both the attack and no-attack scenarios, which
indicates that LT can accurately determine the true nature of
nodes.
Fig. 11 shows the TPR and FPR performance of the RT module
with LT and SatT in the recommendation-attack scenarios. In
these scenarios the number of nodes varied, while p remained
constant at 100%. The TPR of LT is high under the attack
and does not change with the number of nodes in the network,
whereas the TPR of SatT improves gradually with an increasing
number of nodes. The FPR of LT is low and consistent with an
increasing number of nodes; the FPR of SatT decreases as the
number of nodes increases.
V. RELATED WORKS
DSRC [1] is considered one of the biggest advancements to
provide active safety applications for vehicles by using V2V
and V2I communications. These safety applications effectively
prevent casualties and improve traffic management, but V2V
and V2I communications must provide privacy, authentication
and information integrity [31]. The IEEE 1609.2 standard,
Security Services for Applications and Management Messages [3],
defines a certificate-based authentication scheme to restrict
unauthorized nodes from entering the network. However, the
standard does not provide any mechanism to identify malicious
nodes once they are authorized. Such malicious nodes can launch
attacks including dissemination of false information, on-off
attacks, and recommendation attacks [32]–[34]. Receiving nodes
using simple authentication and access control mechanisms are
likely to become victims of these attacks, which must be averted
by complementary schemes. Trust
has been proposed for other applications in e-commerce, social
networks and Mobile Ad hoc Networks (MANETs) as a comple-
mentary scheme to authentication mechanisms [16], [21], [35],
[36]. Some of these applications are routing [18], [37], event
reporting [9], [10], [24], [38], service provisioning [39], and
recommendations [21], [26]. In this paper, trust is used to iden-
tify true events and to detect false recommendations and mali-
cious nodes.
The prime objective of DSRC is to exchange safety messages
to identify significant events related to active safety. In order
to determine whether an event is occurring, the characteristics
of the event and the reaction of the network to this event
can be used. This concept has been successfully demonstrated
in [40] where a node finds inconsistencies in data and corrects
them by using known events and available hypotheses. This
concept was further extended in [41] which enlists a set of usual
reactions corresponding to the significant events and effectively
distinguishes between the true and false events. These concepts
were utilized in our proposed framework where a set of possible
events and reactions of the sender are provided by the DSRC
messages.
In DSRC, collective reactions of nodes can indicate possible
events. When a large number of participants is available, voting
schemes can be useful to identify correct events. In the voting
scheme proposed in [24], a receiving node checks the reporting
frequency and the number of affirming reports to identify a valid
event. The scheme assumes an honest majority, and it may not
converge if the number of malicious nodes exceeds the number of
honest nodes. The voting scheme proposed in [38] determines the
trustworthiness of the event using the time of the message, the
confidence of the sender about the event, and the number of
messages corresponding to the event. The scheme assumes an honest
majority and a priori event probabilities; therefore, if the event
is noticed by a small number of senders, or several malicious
nodes collaboratively send false information, the scheme may fail.
In our proposed framework, an honest majority is not assumed.
If the frequency of false messages is high and the behavior
of nodes changes with time, a normal voting system can be
defeated. Therefore, a weighted voting scheme is proposed as
an alternative for packet routing in CVs in [10]. That scheme
uses a sender's own trustworthiness and its confidence about the
event, as well as the opinions of its neighbors, to make a
forwarding decision. The scheme is tested with an honest majority,
and the trust values of the neighbors are not updated until the
node itself becomes a victim of the misbehavior. Another
assumption in that scheme is the availability of trusted
authorities such as the police. In our proposed framework, trusted
authorities are not available and the trust values of nodes are
updated based on experience and recommendations. Another weighted
voting scheme has been proposed in [12], which considers the
trustworthiness of a source and compares the weighted sums of
messages reporting the event or no-event. Weighted voting has also
been studied and
compared with DST and BT in [9]. Both weighted voting and
DST provided better results than BT, even when the evidence is
less substantial and the average trust of nodes is low. That is
why a modified weighted voting scheme is used in our proposed
framework.
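The basic idea behind these weighted voting schemes can be sketched as follows. This is a minimal illustration of comparing trust-weighted event and no-event bins, not the paper's exact rule, which additionally uses average bin trust and sender weights (Eqs. (10)-(13)); the input format is an assumption.

```python
def weighted_vote(reports):
    """Decide whether an event occurred from (trust, says_event) pairs.

    Each sender's vote is weighted by its trust value; the bin with
    the larger weighted sum wins. A highly trusted sender can thus
    outvote several low-trust (e.g., malicious) senders.
    """
    event_weight = sum(trust for trust, says in reports if says)
    no_event_weight = sum(trust for trust, says in reports if not says)
    return event_weight > no_event_weight
```

For example, one sender with trust 0.9 reporting an event outweighs two senders with trusts 0.2 and 0.3 denying it, which is why such schemes do not require an honest majority.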
Establishing trust is an important step in determining true
events and identifying malicious nodes. In [42], the trust value
of a node is updated using a weighted sum of the trust of its
neighbors as well as the prior trust of the node. Nodes use
reputation assistants to obtain the trust, and these assistants
are assumed to be honest in their recommendations; misbehavior of
the reputation assistants is not considered in that work. Several
other schemes, such as DST and BT, have been used in the
literature to update the trust [43]–[47]. The trust obtained by
these methods can be a linear or nonlinear function of the
receiver's own experiences (i.e., DT) and the recommendations of
neighbors (i.e., IDT). DT can be obtained from conditional entropy
as in [48], and IDT can be obtained from a Beta reputation or DST
[49], [50]. However, if the trust uses a linear function, it may
be more steady and less sensitive towards bad behavior. Therefore,
a logistic function is used in [39] to update the trust in a
service provider in a MANET. The logistic function uses the
history of service provisioning and the environment attributes.
Weights of the environmental attributes are calculated using an
expectation-maximization algorithm, and the weighted sum of
attributes is fed to the logistic function. Although the system
produces fewer false positives, a large number of observations is
required to obtain these weights, which can make the system slow.
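The logistic mapping used by such models can be sketched as follows. The feature and weight vectors here are illustrative placeholders; in [39] the weights come from an expectation-maximization algorithm, which is not reproduced here.

```python
import math

def logistic_trust(features, weights, bias=0.0):
    """Map a weighted sum of behavior attributes to a trust score in (0, 1)
    via the logistic function, as in logistic-regression-based trust models.

    Unlike a linear combination, the logistic curve is steep around its
    midpoint, so trust reacts sharply once evidence of bad behavior
    accumulates.
    """
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))
```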
Trust is vulnerable to recommendation attacks, such as
ballot stuffing and badmouthing. Interestingly, both of these
attacks can be handled by similar techniques. For instance, in
[51] a modified BT reputation (Beta distribution) is used to
calculate the reputation of a node and the trustworthiness of its
recommendation. The trust and reputation are represented as
(α, β), where α and β include past good and bad experiences and
the associated uncertainty; however, a mechanism to identify false
recommendations is not illustrated. False recommendations can be
identified using similarity measures, as done in e-commerce
systems such as [21] and other multi-agent networks as in [19].
The trust placed on a node by another is communicated to other
nodes, and the receiving node then computes the similarity between
its own opinion and the received recommendations. In CVs, nodes do
not always know all of their neighbors, and an evaluator may not
have its own opinion to compare with. Therefore, recommendations
from other sources can be compared to determine whether an
incoming recommendation is correct or not [20]. However, nodes
have deviations in their recommendations; knowing such deviations
precisely is not necessary in CV communications, and relying on
them makes the system more vulnerable to selective misbehaviors.
That is why, in this paper, the neighbor recommendations are
classified only as good or bad (i.e., binary).
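A binary good/bad classification of an incoming recommendation against recommendations from other sources can be sketched as follows. The consensus average and the agreement tolerance `tol` are assumptions for illustration; the paper's RT module instead uses the similarity and history measures of Eqs. (2)-(6).

```python
def classify_recommendation(incoming, peer_recs, tol=0.2):
    """Label an incoming recommendation about a node as good (True)
    or bad (False) by comparing it with recommendations about the
    same node received from other sources.

    incoming:  trust value recommended by the sender under evaluation.
    peer_recs: trust values recommended by other sources.
    tol:       assumed agreement threshold (illustrative).
    """
    if not peer_recs:
        # No evidence either way; accept by default (assumption).
        return True
    consensus = sum(peer_recs) / len(peer_recs)
    return abs(incoming - consensus) <= tol
```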
VI. CONCLUSION
Trust can be an indispensable tool in CV communication
to decrease the vulnerability of the network. Therefore, a
complete trust framework has been proposed in this paper. The
framework utilizes Event Trust, Effective Node Trust, and
Recommendation Trust modules in order to identify whether events
are true and whether nodes are malicious. The performance of the
framework is tested under a baseline model as well as our proposed
logistic trust model to demonstrate its effectiveness. The
proposed framework is designed for DSRC V2V communications, but it
can easily be extended to other applications. The framework
currently utilizes LT, but it can integrate other machine learning
algorithms for trust estimation.
In this work, selective or conflicting behavior and collabora-
tive attacks have not been studied. Behavior of the framework
under these attacks will be studied in our future work.
APPENDIX
A. Satisfaction-Based Trust (SatT)
The SatT model uses three parameters: satisfaction, indirect
trust, and ENT. The satisfaction measures how reliable the
information from k is, and is labeled Sat(E, k). The indirect
trust measures how much trust the neighbors N place in k on
average. ENT measures the total trust placed by E in k. Let us
assume E receives v messages from k in time slot n. Based on
this, the current satisfaction SatCur_n(E, k) is computed by
SatCur_n(E, k) = { 1,               if all reports are correct
                   0,               if all reports are incorrect
                   g_n(k) / v_n(k), otherwise,                       (16)
where g_n(k) is the number of correct messages received from
node k in time slot n. The satisfaction is a weighted average of
the current satisfaction and the previous satisfaction, given by

Sat_n(E, k) = α · SatCur_n(E, k) + (1 − α) · Sat_{n−1}(E, k)         (17)

where 0 < α < 1.
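The update of (16) and (17) can be sketched as follows. The ratio branch of (16) subsumes the two boundary cases (all correct gives 1, all incorrect gives 0); α = 0.5 and the zero-message fallback are illustrative assumptions.

```python
def satisfaction(correct, total, prev_sat, alpha=0.5):
    """Satisfaction update of the SatT baseline, Eqs. (16)-(17).

    correct = g_n(k), total = v_n(k); alpha (0 < alpha < 1) weighs
    the current time slot against the accumulated history.
    """
    if total == 0:
        return prev_sat  # no reports this slot; carry history (assumption)
    cur = correct / total                     # Eq. (16)
    return alpha * cur + (1 - alpha) * prev_sat  # Eq. (17)
```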
Direct Trust (DT): The direct trust is the computed satisfaction
and is labeled DT_n(E, k). The direct trust of a node E in
node k is given by

DT_n(E, k) = Sat_n(E, k)                                             (18)

The direct trust can be used when node E has a prior history
with k. The direct and indirect trust, when available, are used to
provide the ENT.
Indirect Trust (IDT): The indirect trust is computed using
the information provided by the neighbors x ∈ N about k. Node E
collects the trust from neighbors x ∈ N who have provided correct
recommendations in the past, and computes the indirect trust as

IDT_n(E, k) = { (Σ_{x∈N} t_n(x, k)) / |N|, if |N| > 0 and t_n(x, k) ≠ default
                0,                          otherwise                (19)
where t_n(x, k) is the trust that node x has for node k at
time n. The indirect trust in SatT can be used if node E has
no experience with the sender k.
The indirect trust IDT_n(E, k) computed in (19) and the direct
trust from (18) are used to give the ENT t_n(E, k), based on the
correct reports sent by node k in n time slots, as
t_n(E, k) = { γ · (Σ_{i=1}^{n−1} t_i(E, k)) / (n − 1) + (1 − γ) · DT_n(E, k), if v_n(k) > 0
              IDT_n(E, k),                                                     otherwise     (20)
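Equations (19) and (20) can be sketched together as follows. Per the text above, the indirect trust of (19) is assumed to apply when E has received no messages from k in the current slot; γ = 0.5 is an illustrative value.

```python
def effective_node_trust(history, direct, indirect, v_n, gamma=0.5):
    """ENT of the SatT baseline, Eq. (20).

    history:  past trust values t_1(E,k)..t_{n-1}(E,k).
    direct:   DT_n(E,k) from Eq. (18).
    indirect: IDT_n(E,k) from Eq. (19), used when v_n(k) = 0
              (our reading of the no-experience case).
    v_n:      number of messages received from k in slot n.
    """
    if v_n == 0:
        return indirect
    past = sum(history) / len(history) if history else 0.0
    return gamma * past + (1 - gamma) * direct
```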
Algorithm 1: Event Trust Algorithm.
Record the first event from this sender in the event_list.
Count the vote in the event bin and the trust value.
For each event there are P senders in the event bin and Q senders
in the no-event bin.
for all events in event_list do
  Calculate the average trust of the events as (10)
  Calculate the weight of this sender as (11)
  Then calculate Tavg for the bins as (13)
  if Tavg^1_n(E) > Tavg^0_n(E), or Tavg^1_n(E) > minWt when the
  no-event bin = ∅, then
    Record EVENT = event_list entry
  end if
end for
Algorithm 2: Satisfaction Update Algorithm.
LT:
Input: v_n(k), a_n(k)
Output: δ̄_n(E, k)
Calculate the current dissatisfaction as
  δ_n(E, k) = a_n(k) / v_n(k)
Calculate the dissatisfaction of this sender up to n as
  δ̄_n(E, k) = (Σ_{i=1}^{n} δ_i(E, k)) / n
SatT:
Input: v_n(k), g_n(k)
Output: Sat_n(E, k)
Calculate the current satisfaction of this sender as
  SatCur_n(E, k) = g_n(k) / v_n(k)
Calculate the satisfaction of this sender up to n as
  Sat_n(E, k) = α · SatCur_n(E, k) + (1 − α) · Sat_{n−1}(E, k)
where
  Sat_{n−1}(E, k) = { IT_n(E, k), if IT_n(E, k) ≠ default
                      1/|N|,      if N ≠ ∅
                      0.3,        otherwise
Algorithm 3: Effective Node Trust Update Algorithm.
LT:
Input: recommendation list from the nodes with Recommendation
Trust > threshold, t_{n−1}(E, k), ζ_1, ζ_2
Output: t_n(E, k), honestMalicious(E, k)
Calculate the dissatisfaction as (7)
Calculate the expectation as (8)
Calculate the ENT as (9)
SatT:
Input: t_{n−1}(E, ·) from neighbors with T^r_n(E, X) > threshold,
Sat_n(E, k), v_n(k)
Output: t_n(E, k), honestMalicious(E, k)
DT_n(E, k) = Sat_n(E, k)
Calculate the indirect trust from the neighbors with
Recommendation Trust > threshold, if v_n(k) = 0, as (19)
Calculate the ENT as (20)
For both models, determine whether the node is honest or
malicious based on the thresholds ζ_1 and ζ_2 as
if t(E, k) < ζ_1 then
  honestMalicious(E, k) = 0
else if t(E, k) > ζ_2 then
  honestMalicious(E, k) = 1
end if
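The threshold rule that closes Algorithm 3 can be sketched as follows. The numeric values of ζ_1 and ζ_2 are illustrative; the paper leaves them as parameters, and trust values between the two thresholds yield no classification.

```python
def classify_node(trust, zeta1=0.3, zeta2=0.7):
    """Final step of Algorithm 3: classify a node from its ENT value.

    Returns 0 (malicious) below zeta1, 1 (honest) above zeta2,
    and None when the trust value is between the two thresholds.
    Threshold values are illustrative assumptions.
    """
    if trust < zeta1:
        return 0
    if trust > zeta2:
        return 1
    return None
```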
Algorithm 4: Recommendation Trust Update Algorithm.
1: Input: Recommendation_list, honestMalicious_list of the
   receiver, T^r_{n−1}(E, k)
2: Output: T^r_n(E, k)
3: Calculate the similarity between this node and the
   neighbors, if any, as (2)
4: Compare the current recommendations with the historic
   recommendations as (4)
5: Compute the average similarity and historic
   behavior as (5) and (3)
6: Calculate the recommendation trust T^r_n(E, k) as (6)
B. Algorithms for Trust Framework
The working of the modules shown in Fig. 1 is illustrated in the
algorithms provided in this appendix. In the Event Trust module
in Fig. 1, the decision logic uses ENT to determine the
true events using the Event Trust Algorithm. Once the true
event is known, the information in the incoming messages
is compared against the true event to determine whether each
message is true or false. The percentage of correct and wrong
messages is used to update the satisfaction or dissatisfaction in
the node as per the Satisfaction Update Algorithm. The ENT module
is illustrated in the Effective Node Trust Update Algorithm, which
takes the satisfaction or dissatisfaction from the Event Trust
module, the neighbors' recommendations and trust values from the
RT module (the Recommendation Trust Update Algorithm), and the
node's previous trust value, in order to update the ENT. The
node considers only those neighbors whose RT is high. This
RT is calculated on the basis of the similarity that the neighbor
has been able to maintain in its recommendations, as well as the
fluctuations in those recommendations over time, as explained in
the Recommendation Trust Update Algorithm.
REFERENCES
[1] J. B. Kenney, “Dedicated short-range communications (DSRC) standards
in the United States,” Proc. IEEE, vol. 99, no. 7, pp. 1162–1182, Jul.
2011.
[2] S. R. Narla, “The evolution of connected vehicle technology: From smart
drivers to smart cars to...self-driving cars,” J. Inst. Transp. Eng., vol. 83,
no. 7, pp. 22–26, 2013.
[3] “IEEE Approved Draft Standard for Wireless Access in Vehicular
Environments—Security Services for Applications and Management Mes-
sages,” IEEE P1609.2/D12, pp. 1–241, Jan. 2016.
[4] K. Sampigethaya, L. Huang, M. Li, R. Poovendran, K. Matsuura, and K.
Sezaki, “Caravan: Providing location privacy for vanet,” in Proc. Embed-
ded Security Cars, 2005, pp. 1–15.
[5] A. Studer, E. Shi, F. Bai, and A. Perrig, “Tacking together efficient au-
thentication, revocation, and privacy in vanets,” in Proc. IEEE 6th Annu.
Commun. Soc. Conf. Sensor, Mesh Ad Hoc Commun. Netw., 2009, pp. 1–9.
[6] C. Lyu, D. Gu, X. Zhang, S. Sun, and Y. Tang, “Efficient, fast and scalable
authentication for vanets,” in Proc. IEEE Wireless Commun. Netw. Conf.,
Apr. 2013, pp. 1768–1773.
[7] M. Razzaque, A. Salehi, and S. M. Cheraghi, “Security and privacy in ve-
hicular ad-hoc networks: Survey and the road ahead,” in Wireless Networks
and Security. New York, NY, USA: Springer-Verlag, 2013, pp. 107–132.
[8] Y. Malhotra, “Quantitative modeling of trust and trust management
protocols in next generation social networks based wireless mobile
ad hoc networks,” Dec. 4, 2014. [Online]. Available: https://ssrn.com/
abstract=2539180. Accessed on: Jun. 3, 2017.
[9] J.-P. Hubaux, M. Raya, P. Papadimitratos, and V. D. Gligor, “On data-
centric trust establishment in ephemeral ad hoc networks,” in Proc. IEEE
Infocom, 2008, pp. 1912–1920.
[10] J. Zhang, C. Chen, and R. Cohen, “A scalable and effective trust-based
framework for vehicular ad-hoc networks,” J. Wireless Mobile Netw. Ubiq-
uitous Comput. Dependable Appl., vol. 1, no. 4, pp. 3–15, 2010.
[11] D. A. Rivas, J. M. Barceló-Ordinas, M. G. Zapata, and J. D.
Morillo-Pozo, “Security on VANETs: Privacy, misbehaving nodes, false
information and secure data aggregation,” J. Netw. Comput. Appl., vol. 34,
no. 6, pp. 1942–1955, 2011.
[12] K. Sha, S. Wang, and W. Shi, “Rd4: Role-differentiated cooperative decep-
tive data detection and filtering in VANETs,” IEEE Trans. Veh. Technol.,
vol. 59, no. 3, pp. 1183–1190, Mar. 2010.
[13] S. Ruj, M. Cavenaghi, Z. Huang, A. Nayak, and I. Stojmenovic, “On data-
centric misbehavior detection in VANETs,” in Proc. IEEE Veh. Technol.
Conf., 2011, pp. 1–5.
[14] S. Mazilu, M. Teler, and C. Dobre, “Securing vehicular networks based
on data-trust computation,” in Proc. IEEE Int. Conf. P2P, Parallel, Grid,
Cloud Internet Comput., 2011, pp. 51–58.
[15] M. Raya, R. Shokri, and J.-P. Hubaux, “On the tradeoff between trust and
privacy in wireless ad hoc networks,” in Proc. 3rd ACM Conf. Wireless
Netw. Security, 2010, pp. 75–80.
[16] Y. A. Kim and M. A. Ahmad, “Trust, distrust and lack of confidence of
users in online social media-sharing communities,” Knowl.-Based Syst.,
vol. 37, pp. 438–450, 2013.
[17] W. Bamberger, J. Schlittenlacher, and K. Diepold, “A trust model for
intervehicular communication based on belief theory,” in Proc. IEEE 2nd
Int. Conf. Social Comput. 2010, pp. 73–80.
[18] M. J. Probst and S. K. Kasera, “Statistical trust establishment in wireless
sensor networks,” in Proc. IEEE Int. Conf. Parallel Distrib. Syst., 2007,
vol. 2, pp. 1–8.
[19] A. Das and M. M. Islam, “Securedtrust: A dynamic trust computation
model for secured communication in multiagent systems,” IEEE Trans.
Dependable Secure Comput., vol. 9, no. 2, pp. 261–274, Mar./Apr. 2012.
[20] J. Luo, X. Liu, Y. Zhang, D. Ye, and Z. Xu, “Fuzzy trust recommendation
based on collaborative filtering for mobile ad-hoc networks,” in Proc.
IEEE 33rd Conf. Local Comput. Netw., 2008, pp. 305–311.
[21] L. Xiong and L. Liu, “Peertrust: Supporting reputation-based trust for
peer-to-peer electronic communities,” IEEE Trans. Knowl. Data Eng.,
vol. 16, no. 7, pp. 843–857, Jul. 2004.
[22] Y. Lu, K.-H. Su, J.-T. Weng, and M. Gerla, “Mobile social network based
trust authentication,” in Proc. IEEE 11th Annual Mediterranean Ad Hoc
Netw. Workshop, 2012, pp. 106–112.
[23] J.-H. Cho, A. Swami, and R. Chen, “Modeling and analysis of trust man-
agement with trust chain optimization in mobile ad hoc networks,”J. Netw.
Comput. Appl., vol. 35, no. 3, pp. 1001–1012, 2012.
[24] N.-W. Lo and H.-C. Tsai, “A reputation system for traffic safety event
on vehicular ad hoc networks,” EURASIP J. Wireless Commun. Netw.,
vol. 2009, 2009, Art. no. 9.
[25] Y. Sun, Z. Han, and K. J. R. Liu, “Defense of trust management vulner-
abilities in distributed networks,” IEEE Commun. Mag., vol. 46, no. 2,
pp. 112–119, Feb. 2008.
[26] M. Srivatsa, L. Xiong, and L. Liu, “Trustguard: Countering vulnerabilities
in reputation management for decentralized overlay networks,” in Proc.
14th Int. Conf. World Wide Web, 2005, pp. 422–431.
[27] P.-N. Tan, Introduction to Data Mining, vol. 1. Boston, MA, USA: Pearson,
2006.
[28] C. L. Robinson, L. Caminiti, D. Caveney, and K. Laberteaux, “Efficient
coordination and transmission of data for cooperative vehicular safety
applications,” in Proc. 3rd Int. Workshop Veh. Ad Hoc Netw., 2006,
pp. 10–19.
[29] S. Ahmed and K. Tepe, “Misbehaviour detection in vehicular networks
using logistic trust,” in Proc. IEEE Wireless Commun. Netw. Conf., Doha,
Qatar, Apr. 2016, pp. 1–6.
[30] A. Varga and R. Hornig, “An overview of the OMNeT++ simulation
environment,” in Proc. 1st Simutools, 2008, pp. 1–10.
[31] K. Rostamzadeh, H. Nicanfar, N. Torabi, S. Gopalakrishnan, and V. Leung,
“A context-aware trust-based information dissemination framework for
vehicular networks,” IEEE Internet Things J., vol. 2, no. 2, pp. 121–132,
Apr. 2015.
[32] D. A. Rivas, J. M. Barceló-Ordinas, M. G. Zapata, and J. D. Morillo-Pozo,
“Security on VANETs: Privacy, misbehaving nodes, false information
and secure data aggregation,” J. Netw. Comput. Appl., vol. 34, no. 6,
pp. 1942–1955, 2011.
[33] D. He, C. Chen, S. Chan, J. Bu, and A. V. Vasilakos, “Retrust attack-
resistant and lightweight trust management for medical sensor networks,”
IEEE Trans. Inform. Technol. Biomed., vol. 16, no. 4, pp. 623–632,
Jul. 2012.
[34] J. Petit and S. E. Shladover, “Potential cyber attacks on automated ve-
hicles,” IEEE Trans. Intell. Transp. Syst., vol. 16, no. 2, pp. 546–556,
Apr. 2015.
[35] K. Govindan and P. Mohapatra, “Trust computations and trust dynamics in
mobile ad hoc networks: A survey,” IEEE Commun. Surveys Tuts., vol. 14,
no. 2, pp. 279–298, Apr. 2012.
[36] X. Liu, A. Datta, and E.-P. Lim, Computational Trust Models and Machine
Learning. Boca Raton, FL, USA: CRC Press, 2014.
[37] H. Sedjelmaci and S. M. Senouci, “A new intrusion detection frame-
work for vehicular networks,” in Proc. IEEE Int. Conf. Commun., 2014,
pp. 538–543.
[38] S. Mazilu, M. Teler, and C. Dobre, “Securing vehicular networks based
on data-trust computation,” in Proc. IEEE Int. Conf. P2P, Parallel, Grid,
Cloud Internet Comput., 2011, pp. 51–58.
[39] Y. Wang, Y.-C. Lu, I.-R. Chen, J.-H. Cho, A. Swami, and C.-T. Lu,
“Logittrust: A logit regression-based trust model for mobile ad hoc net-
works,” in Proc. 6th ASE Int. Conf. Privacy, Security, Risk Trust, 2014,
pp. 1–10.
[40] P. Golle, D. Greene, and J. Staddon, “Detecting and correcting malicious
data in VANETs,” in Proc. 1st ACM Int. Workshop Veh. ad hoc Netw.,
2004, pp. 29–37.
[41] S. Ruj, M. Cavenaghi, Z. Huang, A. Nayak, and I. Stojmenovic, “On data-
centric misbehavior detection in VANETs,” in Proc. IEEE Veh. Technol.
Conf., Sep. 2011, pp. 1–5.
[42] A. Boukerche and Y. Ren, “A security management scheme using a novel
computational reputation model for wireless and mobile ad hoc networks,”
in Proc. 5th ACM Symp. Perform. Eval. Wireless ad hoc, Sensor, Ubiqui-
tous Netw., 2008, pp. 88–95.
[43] R. Li, J. Li, P. Liu, and H.-H. Chen, “An objective trust management
framework for mobile ad hoc networks,” in Proc. IEEE 65th Veh. Technol.
Conf., 2007, pp. 56–60.
[44] R. Wu, X. Deng, R. Lu, and X. Shen, “Trust-based anomaly detection in
wireless sensor networks,” in Proc. 1st IEEE Int. Conf. Commun. China,
2012, pp. 203–207.
[45] H. Sedjelmaci and S. M. Senouci, “An accurate and efficient collaborative
intrusion detection framework to secure vehicular networks,” Comput.
Elect. Eng., vol. 43, pp. 33–47, 2015.
AHMED et al.: NOVEL TRUST FRAMEWORK FOR VEHICULAR NETWORKS 9511
[46] A. Whitby, A. Jøsang, and J. Indulska, “Filtering out unfair ratings in
Bayesian reputation systems,” in Proc. 7th Int. Workshop Trust Agent
Soc., vol. 6, 2004, pp. 106–117.
[47] K. Thirunarayan, P. Anantharam, C. Henson, and A. Sheth, “Compara-
tive trust management with applications: Bayesian approaches emphasis,”
Future Gener. Comput. Syst., vol. 31, pp. 182–199, 2014.
[48] Y. L. Sun, W. Yu, Z. Han, and K. Liu, “Information theoretic framework
of trust modeling and evaluation for ad hoc networks,” IEEE J. Sel. Areas
Commun., vol. 24, no. 2, pp. 305–317, Feb. 2006.
[49] Z. Wei, H. Tang, F. R. Yu, M. Wang, and P. Mason, “Security enhance-
ments for mobile ad hoc networks with trust management using uncertain
reasoning,” IEEE Trans. Veh. Technol., vol. 63, no. 9, pp. 4647–4658,
Nov. 2014.
[50] Z. Wei, F. R. Yu, and A. Boukerche, “Trust based security enhancements
for vehicular ad hoc networks,” in Proc. 4th ACM Int. Symp. Develop.
Anal. Intell. Veh. Netw. Appl., 2014, pp. 103–109.
[51] S. Buchegger and J.-Y. Le Boudec, “A robust reputation system for peer-to-peer and mobile ad-hoc networks,” in Proc. P2PEcon, 2004, Paper LCA-CONF-2004-009.
Saneeha Ahmed received the B.Eng. and M.Eng. degrees in computer and information systems engineering from NED University of Engineering and Technology, Karachi, Pakistan, in 2004 and 2007, respectively, and the Ph.D. degree in electrical and computer engineering from the University of Windsor, Windsor, ON, Canada, in 2016. She joined NED University of Engineering and Technology as a Lecturer in 2005. She is currently an Assistant Professor in the Department of Computer and Information Systems Engineering, NED University. Her research interests include quantum cryptography and privacy in PKI systems, high-performance computing, machine learning, cloud and cluster computing, network security, and trust and privacy in ad hoc network applications.
Sarab Al-Rubeaai received the B.Sc. and M.Sc. degrees in mathematics from the University of Baghdad, Baghdad, Iraq, in 1992 and 1997, respectively, the M.Sc. degree in mathematics from the University of Western Ontario, London, ON, Canada, in 2009, and the Ph.D. degree in electrical and computer engineering from the University of Windsor, Windsor, ON, Canada. Her research interests include vehicle-to-vehicle communication, computer vision, image processing, deep learning, machine learning, and big data.
Kemal Tepe received the B.Sc. degree from Hacettepe University, Ankara, Turkey, in 1992, and the M.Sc. and Ph.D. degrees from Rensselaer Polytechnic Institute, Troy, NY, USA, in 1996 and 2001, respectively, all in electrical engineering. He worked as a Research Scientist at Telcordia Technologies, Red Bank, NJ, USA, and as a Postdoctoral Researcher at Rutgers University between 2001 and 2002. He joined the University of Windsor in 2004 and founded the Wireless Communication and Information Processing Research Laboratory. His research interests include wireless communication systems and networks, wireless sensor networking, and vehicular networks. He is particularly interested in real-time wireless communication protocols in sensor networks, applications of sensor networking in smart grids, vehicular ad hoc networks for safety/emergency messaging, and vehicular Internet access protocols. His research projects are sponsored by the Canada Foundation for Innovation, the Natural Sciences and Engineering Research Council of Canada, the Canada Federal Development Research Fund, and the Communication Research Centre of Canada. He has served as publication, tutorial, and financial chair, and as a member of the technical program committee of IEEE conferences. He is an Area Editor of the Elsevier Ad Hoc Networks journal.