Towards Multi-metric Cache Replacement
Policies in Vehicular Named Data Networks
Svetlana Ostrovskaya, Oleg Surnin, Rasheed Hussain, Safdar Hussain Bouk,
JooYoung Lee, Narges Mehran, Syed Hassan Ahmed, and Abderrahim Benslimane
Institute of Information Systems, Innopolis University, Innopolis, Russia.
Email: {s.ostrovskaya, o.surnin, r.hussain, j.lee}@innopolis.ru
Department of Information and Communication Engineering, DGIST, Daegu 42988, Korea.
Email: bouk@dgist.ac.kr
Institute of Information Technology, Alpen-Adria-Universität Klagenfurt, Klagenfurt, Austria.
Email: narges@itec.aau.at
Department of Computer Science, Georgia Southern University, Statesboro, GA 90458, USA.
Department of Computer Science, University of Avignon, France.
Email: abderrahim.benslimane@univ-avignon.fr
Abstract—Vehicular Named Data Network (VNDN) uses NDN as the underlying communication paradigm to realize intelligent transportation system applications. Content communication is the essence of NDN and is primarily carried out through content naming, forwarding, intrinsic content security, and, most importantly, in-network caching. In vehicular networks, vehicles on the road communicate with other vehicles and/or infrastructure network elements to provide passengers with a reliable, efficient, and infotainment-rich commute experience. Recently, different aspects of NDN have been investigated in vehicular networks and in vehicular social networks (VSN); in this paper, we investigate in-network caching, realized in NDN through the content store (CS) data structure. Because stale contents in the CS not only occupy cache space but also degrade the overall performance of NDN-driven VANET and VSN applications, the size of the CS and the content lifetime in the CS are primary issues in VNDN communications. To address these issues, we propose a simple yet efficient multi-metric CS management mechanism through cache replacement (M2CRP). We consider content popularity, relevance, freshness, and the distance traveled by a node to devise a set of algorithms for selecting the content to be replaced in the CS when replacement is required. Simulation results show that our multi-metric strategy outperforms existing cache replacement mechanisms in terms of hit ratio.
Index Terms—Vehicular Networks, Named Data Net-
working, Content Replacement, Content Store Manage-
ment.
I. INTRODUCTION
Advancements in computation and communication, as well as in the quality and speed of the Internet, over the last couple of decades have resulted in the realization of many new emerging technologies such as ad hoc networks, cloud computing, the Internet of Things, and social networks, to name a few. These technologies both directly and indirectly contribute to the betterment of human lives. The Intelligent Transportation System (ITS) is realized through one such emerging technology, i.e., the Vehicular Ad hoc NETwork (VANET). In VANET, vehicles on the road are employed as mobile nodes; however, the movement of these nodes is restricted by the road topology. To date, promising research results have been achieved in the field of VANET, covering a diverse range of areas such as applications, services, security, privacy, and quality of service from both theoretical and implementation standpoints [14]. These encouraging research outcomes have also motivated and resulted in the standardization of VANET protocols. To this end, Dedicated Short Range Communication (DSRC) and Wireless Access in Vehicular Environments (WAVE; IEEE 802.11p, IEEE P1609.x) are considered among the most promising vehicular communication standards [8].
VANET and its variants such as vehicular clouds [6], [7] and vehicular social networks [17] offer a plethora of applications and services to consumers, ranging from safety to infotainment (information and entertainment) applications [7]. The former class of applications and services is of primary concern to consumers, whereas infotainment-related features are categorized as value-added services. Safety-related applications exhibit different requirements than infotainment applications: the former are delay-sensitive, whereas the latter are usually delay-tolerant. The common characteristic of these services and applications in VANET is that the whole communication revolves around content and data. That is why, in essence, vehicular nodes running VANET applications are interested in the content itself rather than in the content source. VANET communication is mostly used for information and content exchange and therefore calls for content-driven approaches to vehicular communication [3], [20].
Recently, a new communication paradigm called named data networking (NDN) has been developed that treats data/content as the focal point of the whole communication process. NDN-based communication differs from traditional TCP/IP-based communication, in which routing is based on node addresses and security is provided for the communication link rather than for the content itself [3], [20]. NDN, on the other hand, uses the content name for routing, which drastically decreases the routing overhead. Furthermore, other limitations of traditional TCP/IP-based communication, such as security, mobility, availability, and performance for content-related applications, are also addressed in NDN. For instance, NDN provides intrinsic security by applying security techniques to the content rather than to the communication path the content takes from source to destination. Content availability and network performance are increased through in-network caching, where data is cached locally whenever it is received in response to a query or forwarded by nodes on the active communication path.
The rationale for using NDN-based communication in
VANET is several-fold. For instance, VANET commu-
nication is mostly broadcast-based and therefore NDN
architecture can be a good choice to distribute differ-
ent contents in VANET [18]. In-network caching and
intrinsic data security mechanisms are other advantages
of NDN that can be utilized by VANET to improve the
performance, efficiency, and security of the applications
running in the VANET environment. It is also worth noting that, until recently, VANET nodes were equipped with special hardware called an on-board unit (OBU) that complies with the DSRC standard (IEEE 802.11p). The DSRC/WAVE standard mandates certain communication parameters such as a transmission range of up to 1000 meters, bandwidth, frequency, and so forth. However, today's
high-end cars are also equipped with multiple wireless
communication interfaces in addition to IEEE 802.11p
such as WiFi, WiMax, Bluetooth, 3G/LTE, and so forth.
These multiple interfaces also make the vehicles perfect
candidates for using NDN communication technology.
In other words, vehicles can use NDN-based communi-
cation to utilize all of these interfaces for different types
of communications [4].
VNDN leverages a simple pull-based communication model in which an interest message is broadcast to the neighbors by the vehicle that requires the content. The neighbors first check whether they have the desired content in their Content Store (CS), the module responsible for caching contents that pass through the node. An intermediate vehicle with matching content in its CS replies with a data message once it receives the interest message; otherwise, the interest is forwarded further into the network until the content is located. Upon reception of the data message, the pending interest is removed and the data of the corresponding interest is stored or discarded and forwarded further downstream in the network. In parallel with forwarding the content, a node may store that content in its CS depending on the implemented caching policy. For this caching strategy, a special subset of nodes can be selected to cache the content using an algorithm such as the one proposed in [10], so it is not necessary to store every content at every caching node on the path. However, if the CS of a VNDN node reaches its capacity, stale, unpopular, or old content in the CS must be replaced with the new content, which requires an efficient cache replacement mechanism.
Current CS implementations support several cache replacement policies, including Priority First In First Out (PFIFO), Least Recently Used (LRU), and Least Frequently Used (LFU) [15]. These policies have their merits and demerits; however, in vehicular communication environments they may not work efficiently because of mobility, intermittent connectivity, data relevance, and other factors. Furthermore, a policy based on a single parameter does not work well for VANET. For instance, LRU is not an appropriate choice for vehicular networks in certain scenarios: a content that has not been requested recently but is relatively fresh, has a high frequency of requests, and is likely to be requested by nodes in the area toward which the current node is heading may not be a suitable candidate for replacement. Similarly, other policies individually cannot guarantee the performance required by VANET applications. Therefore, new multi-metric policies are essential for NDN-driven VANET applications. The selection of metrics is an important aspect of CS management and should address the dynamics of the VANET architecture. Furthermore, new policies must also take into account the mobility, location, and movement time of vehicular nodes, as well as the priority of the content based on its retrieval history.
To meet the aforementioned requirements, we propose multi-metric content replacement policies for the CS in NDN-driven VANET. Our proposed methods take three metrics into account: the freshness of the content, the frequency of retrieval (popularity), and the distance between the location where the content was received/saved in the CS and the current location of the caching node. These three metrics collectively encompass the requirements for improved performance of VANET applications. NDN caching involves three components: a) the forwarding strategy, b) the cache decision policy, and c) the cache replacement (eviction) policy; our research focuses on the replacement policy. Our simulation results show that the proposed scheme performs better than existing cache replacement policies in terms of cache hit ratio.
The contributions of this paper are as follows:
• Multi-metric cache replacement policies (M2CRP) for NDN-driven vehicular networks.
• Comparison of the proposed policies with existing cache replacement mechanisms through simulations.
• Implementation of the proposed policies in ndnSIM for vehicular networks.
• Cache-specific future research directions and open challenges in named data vehicular networks.
The rest of the paper is organized as follows: Section II discusses the background and related work, whereas Section III outlines our proposed mechanism for cache management. In Section IV, we discuss the simulation results, followed by concluding remarks in Section V.
II. BACKGROUND AND RELATED WORK
As stated earlier, NDN uses a pull-based strategy for communication: an interest message is generated by the node that requires the content. A node with matching content in its CS replies with a data message once it receives the interest message; otherwise, the interest is forwarded further into the network according to the name prefix matched in the FIB and the interest forwarding strategy in use. Upon reception of a data message, a node first checks whether the corresponding interest is still pending in its Pending Interest Table (PIT). If the interest is still pending, the data message is forwarded further downstream in the network; otherwise, it is discarded. In parallel with forwarding the data, a node may store the data message in its CS depending on the caching decision scheme in place. If the CS reaches its capacity, the incoming content replaces stale, unpopular, or old content in the CS. A generalized working principle of NDN-based communication is depicted in Fig. 1.
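To make this processing concrete, the following Python sketch mirrors the Interest and Data planes of Fig. 1. It is a minimal, illustrative simplification under our own assumptions (dictionary-based CS, PIT, and FIB, and a send callback standing in for face I/O); the names are ours and are not ndnSIM APIs.

```python
class NdnNode:
    """Minimal sketch of vanilla NDN Interest/Data handling (illustrative only)."""

    def __init__(self, cs_capacity=100):
        self.cs = {}                 # Content Store: content name -> data
        self.pit = {}                # Pending Interest Table: name -> set of requesting faces
        self.fib = {}                # Forwarding Information Base: name prefix -> upstream face
        self.cs_capacity = cs_capacity

    def on_interest(self, name, in_face, send):
        if name in self.cs:                          # CS hit: reply from the cache
            send(in_face, ("data", name, self.cs[name]))
        elif name in self.pit:                       # PIT hit: aggregate the interest
            self.pit[name].add(in_face)
        else:                                        # FIB lookup and forward upstream
            self.pit[name] = {in_face}
            out_face = next((f for prefix, f in self.fib.items()
                             if name.startswith(prefix)), None)
            if out_face is not None:
                send(out_face, ("interest", name))

    def on_data(self, name, data, send):
        faces = self.pit.pop(name, None)
        if faces is None:                            # no pending interest -> discard
            return
        if len(self.cs) >= self.cs_capacity:         # CS full -> cache replacement policy
            self.cs.pop(next(iter(self.cs)))         # placeholder eviction (replaced by M2CRP later)
        self.cs[name] = data                         # simplified "cache everything" decision
        for face in faces:                           # satisfy all downstream requesters
            send(face, ("data", name, data))
```

For example, `NdnNode().on_interest("/traffic/jam", in_face=0, send=print)` either answers from the CS or creates a PIT entry and, if a FIB prefix matches, forwards the interest upstream.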
Fiore et al. [5] assessed a distributed caching strategy (and the time for which each chunk is cached at a node); in their approach, ad hoc nodes act independently. However, as mentioned in [13], this approach would probably violate NDN's line-speed requirement. Without loss
of generality, this paper focuses on the applicability of
NDN in vehicular networks and more precisely on the
cache content replacement strategies in CS. Although
there are still many unsolved issues in NDN-driven ve-
hicular networks [16], we consider only CS management.
In this section, we outline the existing mechanisms of
cache replacement policies in NDN.
In NDN, every node implements a cache replacement
policy to keep required contents in CS. The common
policies implemented in NDN include random replace-
ment, LRU, LFU, and PFIFO. However, the afore-
mentioned replacement policies do not scale well in vehicular networks because of their unique characteristics such as directed mobility, short interconnection times among nodes, and high speeds.

Fig. 1: Interest and Data planes in vanilla NDN; (a) Interest Plane, (b) Data Plane.

For instance, random
replacement may lead to the replacement of an important
content that has many cache hits at that time. In [1], the authors provided a survey of caching mechanisms in information-centric networks (ICNs). They outlined different parameters, such as the caching time, the content itself, its relevance, and its lifetime in the cache, that affect cache performance in ICN. In another work, Lal et al. [9] proposed a content replacement strategy for ICN in which the popularity of the content is considered as the metric for the replacement decision; they also consider global popularity in the network. However, this scheme is not suitable for VANET for three reasons: 1) global content popularity does not apply in VANET and would add more complexity to CS management, 2) it would increase cache retrieval time, and 3) it would decrease the speed of the CS management process. Furthermore, popularity alone is not a sufficient metric for replacing a content in the CS. Another popularity-based cache replacement strategy, named Fine-Grained Popularity-based Caching (FGPC), is proposed in [11]. FGPC uses only frequency information to decide on the replacement of a content. Furthermore, these schemes are proposed for wired networks and may not work efficiently in wireless networks. A cooperative cache management scheme for generic NDN is proposed in [2], where the authors use the buffer capacities of content routers to keep useful copies of contents for future use, with the aim of increasing the hit ratio.
An effective cache replacement policy can improve cache hit rates and thus content distribution performance. Most existing cache replacement policies for mobile networks do not take into account parameters associated with nodes, such as speed and location. The existing replacement policies mainly
focus on the usage frequency of the content. In mobile
networks in general, and in VANET in particular, the
direction, speed, and location of the vehicular node are
of prime importance to decide on the replacement of a
particular content. Therefore, these parameters can be
used to improve performance of the content store and
optimize its functionality. Furthermore, efficient cache
replacement policies will not only decrease the interest
satisfaction delay but also improve overall network per-
formance.
In summary, all of the aforementioned schemes are designed for NDN-based wired networks. A comparative study of different cache replacement strategies in wireless networks was carried out by Shailendra et al. [15], comparing the performance of LRU, FIFO, and universal caching across different cellular service providers in the United States. The existing cache replacement policies are either based on a single parameter or not appropriate for wireless, mobile, and ad hoc networks. Therefore, to fill this void, we propose multi-metric cache replacement policies to increase the efficiency of NDN-driven VANET applications.
III. PROPOSED MULTI-METRIC CACHE REPLACEMENT (M2CRP)
In this section, we outline the proposed M2CRP mechanism for vehicular networks. The general principle of our scheme is to consider the following parameters and, based on their values, decide whether to replace a certain content or preserve it in the cache.
1) Freshness: The freshness of the content represents the amount of time for which the content can remain stored in a cache. As mentioned in [12], NDN exploits a freshness metric (FreshnessSeconds) in every Data packet that specifies how long a certain packet may be stored in the network's Content Stores, thus allowing producers to control the removal of packets from the network.
2) Frequency: The frequency metric is the number of times a content has been requested while it has been in the current CS.
3) Distance: The distance metric is the distance between the location of the node when it received a content and the node's current location. (A possible per-content data structure capturing these metrics is sketched below.)
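To make these metrics concrete, each CS entry can carry the state needed to evaluate them. The sketch below is our own illustration (hypothetical field names; planar coordinates and Euclidean distance are simplifying assumptions). In line with the candidate-selection step described later, freshness is measured here as the time elapsed since the content was cached.

```python
import math
import time
from dataclasses import dataclass, field

@dataclass
class CsEntry:
    """Per-content metadata used by the proposed replacement policies (illustrative)."""
    name: str
    data: bytes
    cached_at: float = field(default_factory=time.time)  # when the content entered this CS
    cached_pos: tuple = (0.0, 0.0)                        # node position at caching time (x, y)
    hits: int = 0                                         # frequency F: requests served from this CS

    def freshness(self, now=None):
        """T: time elapsed since the content was stored (smaller means fresher)."""
        return (time.time() if now is None else now) - self.cached_at

    def distance(self, current_pos):
        """D: distance travelled by the node since the content was cached."""
        return math.hypot(current_pos[0] - self.cached_pos[0],
                          current_pos[1] - self.cached_pos[1])
```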
Whenever a new content is received by a node, it is stored in the CS of that node and of all the intermediate nodes along the path. The CS must be updated so that the required replacement can be accommodated when necessary. There are several ways to update the status of the CS, for instance at constant intervals, at random intervals, or in a trigger-based manner. For a constant-interval CS update, the status of the contents in the CS is updated after a certain time interval $t_{update}$. This update includes the current number of hits for each content, its level of freshness, and the distance that the node has covered since the point where it received the content. Updating the CS after every $t_{update}$ interval is easy to implement; however, it does not work efficiently in vehicular networks because the underlying replacement policy might replace a content before its important parameters have been updated. The same argument holds for random-interval updates as well. Therefore, in our proposed scheme, we update the parameters of the CS every time a new content is received. In this way, up-to-date parameters are used for replacement candidate selection.
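A minimal sketch of this trigger-based update, assuming the CsEntry structure sketched above: each time a new content arrives, the three parameters are recomputed for every cached content before the replacement candidate is chosen.

```python
def refresh_metrics(cs, current_pos, now):
    """Recompute F, T, and D for all cached contents (trigger-based update).

    cs          -- dict mapping content name -> CsEntry
    current_pos -- (x, y) position of the caching node at update time
    now         -- current timestamp in seconds
    """
    return {
        name: {
            "F": entry.hits,                   # frequency of cache hits
            "T": entry.freshness(now),         # time since the content was cached
            "D": entry.distance(current_pos),  # distance covered since caching
        }
        for name, entry in cs.items()
    }
```

The resulting snapshot feeds the candidate-selection step of M2CRP1 or M2CRP2, after which the evicted entry is replaced by the newly received content.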
To choose which content to replace with a newly arriving content, we calculate a candidacy score for each content in the CS based on the aforementioned parameters. The candidate selection mechanism for a particular content ($c_{cur}$) works as follows: first, the frequency of cache hits for the individual contents is updated, and then the freshness is calculated for $c_{cur}$. In the next step, the distance of the node holding $c_{cur}$ from the location where it received $c_{cur}$ is calculated. Frequency has the highest priority in our mechanism: the higher the frequency, the lower the probability that $c_{cur}$ will be replaced. Similarly, fresh content in the CS has an edge over old content, and old content has a higher probability of being replaced. It is worth noting that this policy differs from LRU: LRU takes into account when a content was last requested, whereas we consider the difference between the time the content was added to the CS and the current time. Distance plays the same role as freshness: the farther the node is from the location where $c_{cur}$ was added to the CS, the higher the probability that the content will be replaced. However, these parameters individually do not work well for vehicular networks and may sometimes conflict. For instance, if $c_{cur}$ is old but has many cache hits (high frequency), and the node that cached $c_{cur}$ is in a geographical region where $c_{cur}$ is popular, then freshness- or distance-based replacement alone will not work.
The major point of our policy is that these parameters alone cannot define the true candidacy of $c_{cur}$ for replacement. For instance, if a content is non-fresh, it does not necessarily mean that it is an appropriate candidate for replacement; other factors should also be examined. Note that the term non-fresh depends on the size of the CS, the application scenario, and the time for which $c_{cur}$ has been cached. Nevertheless, there can be scenarios where non-fresh content should stay in the CS because it is accessed frequently. Similarly, distance alone does not determine the candidacy of a content for replacement; for instance, if a node has traveled a certain distance after the content was cached in its CS, but the content is popular and requested many times in the region where the vehicle is currently present, the content must not be replaced. Therefore, we need to take all of these parameters into account. To this end, we propose two methods to calculate the candidacy of a content for replacement.
In the first method, we define a base lifetime ($t_{base}$) of a content in the CS. The purpose of this base lifetime is to let the content stay in the CS for the time $t_{base}$ without applying the replacement policy to it, because it is too fresh and we expect further requests for it. In the current settings, we use a constant value of 2 seconds for $t_{base}$. Afterwards, we apply the policy considering frequency (hereafter denoted by $F$), freshness (denoted by $T$), and distance (denoted by $D$). According to the explanations given above, the final score $\delta_{c_i}$ is calculated as follows:

$$\delta_{c_i} = \frac{F_i}{T_i + D_i}$$

Once $\delta_{c_i}$ is calculated for every content in the CS, where $n$ is the number of contents in the CS and we have $\{\delta_{c_i} \mid 1 \le i \le n\}$, the content with the minimum value of $\delta$ is selected as the candidate for replacement:

$$c_{replacement} = \arg\min\,\{\delta_{c_1}, \delta_{c_2}, \ldots, \delta_{c_n}\}$$
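The first method can be sketched as follows, assuming the CsEntry structure above, a base lifetime of 2 seconds, and the score $\delta_{c_i} = F_i/(T_i + D_i)$. Contents still within $t_{base}$ are skipped; the function returns None when every entry is protected, a case the paper does not prescribe, so it is left as an implementation choice (e.g., fall back to LRU or do not cache the new content).

```python
def m2crp1_candidate(cs, current_pos, now, t_base=2.0):
    """Return the name of the content to evict under M2CRP1, or None if all
    entries are still within their base lifetime t_base."""
    best_name, best_score = None, float("inf")
    for name, entry in cs.items():
        t = entry.freshness(now)
        if t <= t_base:                  # too fresh: protected from replacement
            continue
        d = entry.distance(current_pos)
        score = entry.hits / (t + d)     # delta_ci = F_i / (T_i + D_i)
        if score < best_score:
            best_name, best_score = name, score
    return best_name                     # content with the minimum score
```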
In the second approach, we take the same parameters but, instead of introducing $t_{base}$, we use normalized parameters. Among the three parameters, two are considered cost criteria, i.e., freshness and distance, whereas frequency is considered a benefit criterion. In other words, an increase in frequency raises the importance of a content and its priority in the CS, whereas increases in time and distance do the opposite. After normalizing these parameters, every parameter has a normalized weight that is combined with the weights of the other parameters. Once the average of these weights is computed, the content with the minimum weight is the candidate for replacement. It is also worth mentioning that there may be a tie among several contents; in that case, a conventional replacement strategy or random selection can be used. The overall weight $\delta_{c_i}$ for each content $c_i$ is calculated as follows:

$$\delta_{c_i} = \frac{F_{i_{norm}} + T_{i_{norm}} + D_{i_{norm}}}{3}$$

where $F_{i_{norm}}$, $T_{i_{norm}}$, and $D_{i_{norm}}$ are the normalized parameters. Once a weight is computed for every content, the content with the minimum weight is selected as the candidate for replacement, just like in the previous mechanism.
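The second method can be sketched as follows, assuming the common min-max normalization for benefit and cost criteria; the paper specifies which parameters are benefit and cost criteria but not the exact normalization formula, so that part is our assumption. The content with the minimum averaged weight is returned as the replacement candidate.

```python
def m2crp2_candidate(cs, current_pos, now):
    """Return the name of the content to evict under M2CRP2 (normalized weights)."""
    if not cs:
        return None
    names = list(cs)
    F = [cs[n].hits for n in names]
    T = [cs[n].freshness(now) for n in names]
    D = [cs[n].distance(current_pos) for n in names]

    def benefit(xs):   # larger raw value -> larger normalized weight (frequency)
        lo, hi = min(xs), max(xs)
        return [0.5 if hi == lo else (x - lo) / (hi - lo) for x in xs]

    def cost(xs):      # larger raw value -> smaller normalized weight (freshness, distance)
        lo, hi = min(xs), max(xs)
        return [0.5 if hi == lo else (hi - x) / (hi - lo) for x in xs]

    Fn, Tn, Dn = benefit(F), cost(T), cost(D)
    weights = [(f + t + d) / 3.0 for f, t, d in zip(Fn, Tn, Dn)]
    # Ties can be broken by a conventional policy (e.g., LRU) or at random, as noted above.
    return names[weights.index(min(weights))]
```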
The step-by-step process of the proposed cache replacement policies is given in Fig. 2. In the next section, we discuss our simulation results for the proposed mechanisms and compare them with two existing mechanisms, i.e., LRU and PFIFO.
IV. PERFORMANCE EVALUATION
In this section, we present the performance evaluation of our proposed scheme, contrasted with existing cache replacement schemes through simulations. We compared our proposed scheme with two existing cache replacement schemes, PFIFO and LRU. The simulations are carried out in the NS-3-based NDN simulator ndnSIM 2.3 (http://ndnsim.net/2.3/index.html). The vehicular mobility model with random trips and variable vehicle speeds is generated using SUMO (http://sumo.dlr.de/index.html). We considered a 4 km² map of downtown Manhattan (converted from OpenStreetMap) with a constant 108 vehicles, variable CS sizes (50, 75, 100, 125, 150), a variable number of producers (10%, 20%, 30%, 40%), and a variable number of consumers (25%, 30%, 35%, 40%). The consumers generated interests at a constant rate of 100 interests per second. As in [10], the forwarding strategy of [19] was selected in ndnSIM; it defines how Interest and Data packets are forwarded as they pass through NDN routers.
In the preliminary stage of our investigation, we consider the cache hit ratio (%), interest satisfaction delay, and interest satisfaction ratio (%) as performance metrics. Figure 3 shows the cache hit ratio in vehicular networks with different CS sizes. As observed, increasing the CS size also increases the hit ratio because there is more room for caching contents in the CS. Our proposed M2CRP1 outperforms the existing mechanisms, and M2CRP2 is slightly better than the LRU and FIFO approaches. It is also worth noting that as the CS size increases, the hit ratio rises until the CS size reaches 100; after that, the influence of the CS size diminishes. In other words, the cache hit ratios approximately converge at or near single
points when the CS sizes are 100 and 150.

Fig. 2: Proposed multi-metric cache replacement policies; (a) M2CRP1, (b) M2CRP2.

Fig. 3: Measurement of cache hit ratio (%) with respect to different Content Store sizes (50, 75, 100, 125, 150), with 10 static producers and 30 consumers, comparing FIFO, LRU, M2CRP1, and M2CRP2.

By varying
the number of producers in the network and keeping the consumers constant, the second experiment is conducted. Vehicles move with variable speeds and 10% of the vehicles are consumers. For the producers, 10% are static, whereas the rest (10%, 20%, 30%) are mobile. Therefore, the number of producers in the second scenario is (10%, 20%, 30%, 40%) of the total nodes. The results are shown in Fig. 4. It can be observed that M2CRP1 has a better cache hit ratio than LRU and FIFO, but in the case of varying consumers, LRU performs better. It is also worth noting that increasing the number of producers has no significant effect on the hit ratio until the producers reach 25%. In addition, in the current simulation setup, the hit ratio is maximized when there are 30% producers in the network. Nonetheless, our proposed schemes achieve a better hit ratio than LRU and FIFO. Increasing the number of producers beyond 30% has an adverse effect on the hit ratio because there may
not be enough consumers.

Fig. 4: Measurement of cache hit ratio (%) with respect to different numbers of producers (10S+0M to 10S+30M, in %, where S=static and M=mobile) and constant consumers (30), with CS size 100, comparing FIFO, LRU, M2CRP1, and M2CRP2.

In the third scenario, we vary
the number of consumers in the network while keeping the interest rate, the number of vehicles, and the number of producers constant. It can be seen in Fig. 5 that increasing the number of consumers has a positive effect on the cache hit ratio because the probability of content availability increases. In this scenario, again, our proposed M2CRP1 achieves a better cache hit ratio than LRU and FIFO. The hit ratio increases until the consumers reach 30% of the total vehicles and gradually decreases after that. Furthermore, we also examined the interest satisfaction delay and interest satisfaction ratio for these scenarios; however, in the VANET environment and in our simulation settings, there was no significant difference among the various policies. Although our proposed schemes perform better than LRU and FIFO in the current setup in terms of the cache hit ratio, we need to consider other comparison metrics in future work and investigate other cache replacement policies in mobile vehicular networks.
Fig. 5: Measurement of cache hit ratio (%) with respect to different numbers of consumers (25-40%) and fixed static producers (10), with CS size 100, comparing FIFO, LRU, M2CRP1, and M2CRP2.
V. CONCLUSION
In vehicular network environments, traditional cache replacement policies such as priority FIFO and LRU do not work efficiently due to unique characteristics such as high speed, mobility, and ephemerality. Therefore, in this work, multi-metric cache replacement policies for NDN-driven vehicular networks are proposed. We considered three parameters, namely the frequency of content retrieval (popularity), freshness, and the distance traveled since the content was stored in the CS. Based on these parameters, we devised two mechanisms, namely M2CRP1 (without parameter normalization) and M2CRP2 (with parameter normalization). The simulation results demonstrate that, on average, M2CRP1 and M2CRP2 show 94% and 2.58% better hit ratios than FIFO, respectively, with variable CS sizes. Furthermore, M2CRP1 achieves, on average, a 71% higher hit ratio than LRU across different CS sizes. In the case of a variable number of producers, M2CRP1 and M2CRP2 achieve 25.36% and 2.4% better hit ratios than FIFO, respectively, and M2CRP1 shows a 9% better hit ratio than LRU. Finally, in the case of variable consumers, M2CRP1 and M2CRP2 achieve 79% and 7% better hit ratios than FIFO, respectively, and M2CRP1 achieves, on average, a 62% better hit ratio than LRU.
ACKNOWLEDGMENT
This work was partially supported by the Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No. 2014-0-00065, Resilient Cyber-Physical Systems Research) and also supported by the Global Research Laboratory Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (NRF-2013K1A1A2A02078326).
REFERENCES
[1] Ibrahim Abdullahi, Arif Suki, and Suhaidi Hassan. Survey on caching approaches in information centric networking. Journal of Network and Computer Applications, 56:48–59, 2015.
[2] Miho Aoki and Tetsuya Shigeyasu. Effective content manage-
ment technique based on cooperation cache among neighboring
routers in content-centric networking. In 2017 31st International
Conference on Advanced Information Networking and Applica-
tions Workshops (WAINA), pages 335–340, March 2017.
[3] Safdar Hussain Bouk, Syed Hassan Ahmed, Dongkyun Kim, and Houbing Song. Named-data-networking-based ITS for smart cities. IEEE Communications Magazine, 55(1):105–111, January 2017.
[4] Min Chen, Dung Ong Mau, Yin Zhang, Tarik Taleb, and Victor C. M. Leung. Vendnet: Vehicular named data network. Vehicular Communications, 1(4):208–213, 2014.
[5] Marco Fiore, Francesco Mininni, Claudio Casetti, and C-F Chi-
asserini. To cache or not to cache? In INFOCOM 2009, IEEE,
pages 235–243. IEEE, 2009.
[6] Rasheed Hussain, Zeinab Rezaeifar, Yong-Hwan Lee, and
Heekuck Oh. Secure and privacy-aware traffic information
as a service in vanet-based clouds. Pervasive Mob. Comput.,
24(C):194–209, December 2015.
[7] Rasheed Hussain, Zeinab Rezaeifar, and Heekuck Oh. A
paradigm shift from vehicular ad hoc networks to vanet-based
clouds. Wireless Personal Communications, 83(2):1131–1158,
Jul 2015.
[8] John B. Kenney. Dedicated short-range communications (dsrc)
standards in the united states. Proceedings of the IEEE,
99(7):1162–1182, July 2011.
[9] Kumari Nidhi Lal and Anoj Kumar. A cache content replacement scheme for information centric network. Procedia Computer Science, 89:73–81, 2016.
[10] Narges Mehran and Naser Movahhedinia. Randomized SVD based probabilistic caching strategy in named data networks. Journal of Computing and Security, 3(4):217–231, 2018.
[11] Mau Dung Ong, Min Chen, Tarik Taleb, Xiaofei Wang, and
Victor C.M. Leung. FGPC: Fine-grained popularity-based caching
design for content centric networking. In Proceedings of the
17th ACM International Conference on Modeling, Analysis and
Simulation of Wireless and Mobile Systems, MSWiM ’14, pages
295–302, New York, NY, USA, 2014. ACM.
[12] Jose Quevedo, Daniel Corujo, and Rui Aguiar. Consumer driven
information freshness approach for content centric networking. In
Computer Communications Workshops (INFOCOM WKSHPS),
2014 IEEE Conference on, pages 482–487. IEEE, 2014.
[13] Dario Rossi and Giuseppe Rossini. Caching performance of
content centric networks under multi-path routing (and more).
Relatorio tecnico, Telecom ParisTech, pages 1–6, 2011.
[14] Mukesh Saini, Abdulhameed Alelaiwi, and Abdulmotaleb El
Saddik. How close are we to realizing a pragmatic vanet solution?
a meta-survey. ACM Comput. Surv., 48(2):29:1–29:40, November
2015.
[15] Samar Shailendra, Senthilmurugan Sengottuvelan, Hemant Kumar Rath, Bighnaraj Panigrahi, and Anantha Simha. Performance evaluation of caching policies in NDN - an ICN architecture. In 2016 IEEE Region 10 Conference (TENCON), pages 1117–1121, Nov 2016.
[16] Salvatore Signorello, Maria Rita Palattella, and Luigi Alfredo Grieco. Security challenges in future NDN-enabled VANETs. In 2016 IEEE Trustcom/BigDataSE/ISPA, pages 1771–1775, Aug 2016.
[17] Anna Maria Vegni and Valeria Loscri. A survey on vehicu-
lar social networks. IEEE Communications Surveys Tutorials,
17(4):2397–2419, Fourthquarter 2015.
[18] Zhiwei Yan, Sherali Zeadally, and Yong-Jin Park. A novel vehicular information network architecture based on named data networking (NDN). IEEE Internet of Things Journal, 1(6):525–532, Dec 2014.
[19] Cheng Yi, Alexander Afanasyev, Ilya Moiseenko, Lan Wang,
Beichuan Zhang, and Lixia Zhang. A case for stateful forwarding
plane. Computer Communications, 36(7):779–791, 2013.
[20] Lixia Zhang, Alexander Afanasyev, Jeffrey Burke, Van Jacobson,
kc claffy, Patrick Crowley, Christos Papadopoulos, Lan Wang,
and Beichuan Zhang. Named data networking. SIGCOMM
Comput. Commun. Rev., 44(3):66–73, July 2014.