M-CRP: Novel Multicast SDN based Routing Scheme in CamCube Server-only Datacenter
Roua Touihri*†, Safwan Alwan†, Abdulhalim Dandoush‡, Nadjib Aitsaadi§† and Cyril Veillon*
*Devoteam R&D, F-91300, Massy, France
†University Paris-Est, LiSSi EA 3956, UPEC, F-94400, Vitry-sur-Seine, France
‡ESME-Sudria, F-94200, Ivry-sur-Seine, France
§University Paris-Est, LIGM-CNRS UMR 8049, ESIEE Paris, F-93160, Noisy-le-Grand, France
Abstract—Multicast routing provides an efficient way to support Data Center (DC) applications (e.g., the replication process of MapReduce jobs, market-data and stock-notification applications, and IPTV servers), as it conserves network bandwidth and reduces server load. However, effective use of multicast in traditional DC networks demands high performance and capacity from the network devices, such as large forwarding tables in access and aggregation switches. In this paper, we propose and evaluate a novel multicast routing scheme, named M-CRP, for the promising CamCube server-only DC architecture under the Software-Defined Network (SDN) paradigm. To do so, we first formulate the problem as a lexicographic multi-objective optimization problem. Then, we propose a new optimized SDN application, M-CRP, based on Branch-and-Cut, which monitors the CamCube DC infrastructure through the OpenFlow southbound protocol. We evaluate the performance of our proposal on an experimental platform built with the ONOS controller and Mininet. The obtained results show that M-CRP outperforms the traditional shortest-path multicast routing protocol in terms of packet loss, latency and jitter.
Keywords—CamCube, Data-Center Networks, SDN, Multicast
Routing, Optimization.
I. INTRODUCTION
Datacenters running cloud-based applications commonly use multicast group communication, in which the source transmits a single packet to many destinations (i.e., one-to-many). This approach optimizes network bandwidth consumption by reducing the volume of data transmitted by the source. Nowadays, DataCenters (DCs) run many types of applications, such as: i) online cloud applications, ii) back-end infrastructural computations, and iii) structured storage systems and distributed execution engines.
Indeed, the increasing use of these applications and the exponential growth of data within datacenter networks push companies and cloud/network operators to optimize traffic and resource allocation inside these fabrics. According to a recent Gartner report1, worldwide public cloud services market revenue will grow from $182.4 billion in 2018 to $214.3 billion in 2019. These data center applications are executed in large-scale fabrics in which it is difficult to guarantee the requested Quality of Service (QoS) of traffic flows. For these reasons, Internet actors like Google, Microsoft and Amazon are boosting their investments in DC design; in fact, Capital Expenditure (CAPEX) increased from 9.7% in 2016 to 55% in 20182. Besides, according to the Cisco Global Cloud Index3, traffic within hyper-scale data centers will reach 55% of all data center traffic by 2021, and the overall traffic volume will quadruple. This poses new challenges for researchers and industry to design and deploy new efficient and scalable multicast routing schemes.
The classification of DC architectures depends on the role of the deployed network equipment. The authors of [1] distinguish three basic categories of datacenter architecture: i) switch-only datacenters, in which only switches forward packets between racks (e.g., Fat-Tree, PortLand, VL2), ii) server-only topologies, in which servers simultaneously handle both tasks
1https://www.gartner.com/en/newsroom/press-releases/2019-04-02-gartner-
forecasts-worldwide-public-cloud-revenue-to-g
2https://www.datacenterknowledge.com/cloud/cloud-giants-continue-
pouring-billions-data-centers
3https://www.cisco.com/c/en/us/solutions/collateral/service-
provider/global-cloud-index-gci/white-paper-c11-738085.html
of applications and networking (e.g., CamCube [2]), and iii) hybrid topologies, in which both switches and servers forward packets between racks (e.g., BCube, DCell). In this paper, we consider an SDN-based CamCube DataCenter Network (DCN) (i.e., server-only) in order to benefit from the centralized optimization of the control plane performed in the SDN controller.
Most distributed and cooperative applications communicate with many peers simultaneously, and multicast routing is considered a solution to avoid congestion in the DCN. To this end, we emulate a CamCube DCN managed by the ONOS SDN controller in order to optimize the forwarding of multicast flows.
For this purpose, we first formulate the multicast routing problem as a lexicographic multi-objective optimization. Our objectives are to maximize the residual bandwidth and to minimize the number of relay nodes in the multicast tree. Then, we propose a new SDN application named Multicast CamCube Routing Protocol (M-CRP). It reformulates the problem as a single-objective ILP. M-CRP is based on the Branch-and-Cut algorithm and takes into consideration the state of the CamCube DCN, thanks to the real-time monitoring performed by the ONOS controller over the OpenFlow southbound protocol. Next, we deploy our application within our emulated CamCube DCN controlled by ONOS. Finally, based on extensive experiments, we compare M-CRP with the shortest-path approach. The QoS results obtained are very satisfying in terms of i) packet loss, ii) latency, and iii) jitter.
The remainder of this paper is organized as follows. First, related work is summarized in Section II. Next, Section III describes the fully emulated SDN-based CamCube architecture using the ONOS controller and Mininet. Then, in Section IV, we give the formal description of our multicast routing problem for flows inside the CamCube DCN. Next, Section V details our proposed SDN application, the Multicast CamCube Routing Protocol (M-CRP). Afterwards, in Section VI, we discuss the obtained experimental results. Finally, Section VII concludes the paper.
II. RELATED WORK
Nowadays, the maturity of DCNs presents a good opportunity to make use of multicast routing to forward flows within groups. Multicast routing saves network traffic and spares the sender repeated transmissions [3]. As a result, several research works have been published in this field, and it remains a hot research topic for both academia and industry.
Traditionally, switch-only and hybrid data center network topologies deploy switches and/or routers to forward packets between active servers, and a large number of research works have addressed multicast routing within these DCNs. However, such structures suffer from several problems when it comes to routing packets efficiently: the relay nodes are bandwidth-hungry and offer limited routing space, with narrow forwarding/routing tables (e.g., fewer than 1,500 entries) [4]. Besides, in order to reduce the packet loss caused by link failures in the multicast tree, the authors of [5] propose RDCM (Reliable Data Center Multicast). They leverage the rich link resources of the DCN to minimize the impact of packet loss on the multicast throughput; RDCM increases the reliability of multicast by developing a peer-to-peer packet repair scheme. Moreover, to compress in-switch multicast routing entries, FRM (Free Riding Multicast) [6] has been proposed. This Bloom-filter-based approach decouples group membership from route discovery: to build the multicast tree, route discovery finds the unicast paths from any source to the known members of the multicast group. Also, in [7], the authors use in-switch Bloom filters to limit traffic leakage by compressing the multicast forwarding state. However, in DCNs built with high-density switches, Bloom-filter-based multicast routing remains memory-demanding. In addition, this approach suffers from traffic leakage because of significant false-positive forwarding (i.e., sending data over links not involved in the multicast tree) as the group size grows. Furthermore, the in-packet Bloom filter concept [8] has been developed to improve scalability. One of the approaches adopting the in-packet Bloom filter is LIPSIN (Line Speed Publish/Subscribe Inter-Networking) [9], which avoids installing multicast entries in network equipment by encoding the tree information into an in-packet Bloom filter; this approach, however, suffers from network bandwidth overhead. To overcome the weaknesses of LIPSIN, the authors of [4] propose a node-based Bloom filter, instead of a link-based one, for encoding the tree. The objective is to build an efficient multicast tree by eliminating the unnecessary intermediate switches used in receiver-driven multicast routing. ESM (Efficient and Scalable Multicast routing) [10] combines both in-packet and in-switch multicast in order to balance the trade-off between bandwidth overhead and the number of multicast groups; it has been proposed to generate multicast trees within the BCube and PortLand DCNs. Moreover, for multicast congestion control, the authors of [11] use the AIMD (Additive Increase and Multiplicative Decrease) concept in their proposal, named Datacast. Datacast introduces a new congestion-detection method that reduces packet loss through a simple soft-state-based congestion control algorithm. The data to transmit is divided into blocks, with different Steiner trees carrying different blocks, and Datacast deploys multiple edge-disjoint Steiner trees to reduce latency. The scheme benefits from a centralized Fabric Controller for topology management and for reporting changes to the master of each group, after which the Steiner tree is updated. Finally, Datacast ensures receiver synchronization and aims to reduce the cache size.
Furthermore, other research works address a software-defined control plane for managing multicast traffic within DCNs. In this context, the authors of [12] propose AvRa (Avalanche Routing Algorithm) to build multicast trees within the Fat-Tree topology. Close to the Avalanche proposal, the authors of [13] propose a refinement of the AvRa algorithm, named MCDC (Multicast routing for Data Centers). It makes use of the link utilization and load collected by the SDN controller to select the core switch, instead of the random selection performed by AvRa. The authors of [13] aim to reduce the impact of multicast issues by minimizing latency and leveraging congestion-control algorithms for intra-datacenter traffic.
For server-only topologies (e.g., CamCube), to the best of our knowledge, this is the first study that addresses the multicast routing problem in an SDN-based CamCube DC topology. In fact, we noticed in [14] that CamCube outperforms other topologies in terms of suitability; this performance is achieved under various workloads by running Hadoop MapReduce applications using multicast flows between servers. In this paper, we tackle the multicast routing problem within an SDN-based CamCube server-only DCN. Our objective is to maximize the residual bandwidth of links while relying on real-time monitoring of the network infrastructure in the SDN controller.
Fig. 1. CamCube Topology - Dimension 3×3×3 (servers indexed by (X, Y, Z) coordinates, e.g., (0,2,0) and (2,2,2))
III. CAMCUBE DATA-CENTER ARCHITECTURE
The CamCube datacenter topology is based on x³ servers directly connected to each other, and belongs to the family of 3D-torus architectures (i.e., k-ary 3-cubes). As illustrated in Fig. 1, the CamCube DCN dimension is 3×3×3 (i.e., 27 servers), where 3 ≤ x ≤ 6.
Each server in this architecture features high-performance multi-core processors and multiple network ports [1]. The main question that arises is whether CamCube servers can play a double role, performing routing and computing simultaneously. In this context, the authors of [15] describe CamCubeOS, an operating system designed for these servers, and show through an experimental study that high performance is achieved. Furthermore, it is worth noting that such a topology is interesting to study since the forwarding processes have a negligible impact on the computing task, and the deployment cost is considerably reduced (no dedicated network equipment). Originally, the CamCube DCN topology makes use of a link-state routing protocol (e.g., OSPF) exploiting the power of multi-path routing [2]; this routing function relies on a distributed control plane deployed within the DCN.
In contrast to the latter design, we propose a centralized control plane deployed and executed within the Software-Defined Network (SDN) controller. Specifically, all forwarding elements, namely the Open vSwitch (OVS) instances implemented in the CamCube servers, are connected over a secure socket to our Open Network Operating System (ONOS) SDN controller. The controller can discover, build and retrieve real-time information about the DCN state, such as residual bandwidth, topology, switches, links and hosts.
ONOS uses the Southbound Interface (SBI) to retrieve the collected network-state statistics from the data plane. Then, the SDN application (i.e., protocol) exploits this real-time monitoring of the data plane to optimize the control plane. Note that the SDN application communicates with the ONOS controller via RESTful requests or gRPC. In our case, ONOS computes the best path between any source(s) and destination(s) while considering the residual bandwidth and the number of hops. Next, the ONOS controller installs the flow rules in the OVS instances according to the SDN routing application; to do so, ONOS makes use of OpenFlow [16]. As shown in Fig. 2, depending on the current network state, our ONOS application exploits strong optimization tools to customize the path computation.
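To illustrate this interaction, the sketch below builds a flow-rule body in the JSON shape accepted by the ONOS REST API (flow rules are POSTed to `/onos/v1/flows`). The endpoint URL, priority value and field names follow the public ONOS REST documentation but should be read as assumptions here, since the paper does not list the exact rule format it pushes:

```python
import json

# Default ONOS REST endpoint for flow rules (assumption; adjust host/port).
ONOS_FLOWS_URL = "http://127.0.0.1:8181/onos/v1/flows"

def build_flow_rule(device_id, in_port, out_port, priority=40000):
    """Build one flow-rule body matching on the ingress port and
    forwarding to the chosen egress port, in the shape of the
    ONOS REST API flow objects (field names are assumptions)."""
    return {
        "priority": priority,
        "timeout": 0,
        "isPermanent": True,
        "deviceId": device_id,
        "treatment": {"instructions": [{"type": "OUTPUT", "port": str(out_port)}]},
        "selector": {"criteria": [{"type": "IN_PORT", "port": in_port}]},
    }

# One rule per selected tree link; the batch would be POSTed to ONOS_FLOWS_URL.
rule = build_flow_rule("of:0000000000000001", 1, 2)
payload = json.dumps({"flows": [rule]})
```

Installing a multicast tree then amounts to emitting one such rule per selected link, on the OVS instance of the corresponding server.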
ONOS4 is an open-source project of the Open Networking Foundation (ONF). It is widely deployed in industry and achieves good QoS performance, which motivated us to select it for our proposed architecture; it is notably used by telecommunication and cloud operators such as Google, Sky Telecom and AT&T.
In order to facilitate the deployment and configuration
process, ONOS offers two ways for setting up its features
4https://wiki.onosproject.org/
Fig. 2. SDN-based CamCube architecture: the applications layer communicates over RESTful northbound APIs with the ONOS distributed core layer (topology management with a global network view and network graph, intent framework, flow-table management, configuration abstractions), which drives the CamCube topology through the southbound provider APIs (OpenFlow provider and driver, plus custom protocols and providers) and the OpenFlow southbound protocol.
and applications to users: a Command Line Interface (CLI) and a Graphical User Interface (GUI). Both make it easy for end users and developers to retrieve information about network devices, set configurations, and activate or deactivate applications. In addition, ONOS is very well documented and has become the center of interest of a large and active online community. Moreover, as detailed in [17], ONOS outperforms its main competitors, notably OpenDaylight and Ryu, in terms of bandwidth usage.
IV. PROBLEM FORMULATION
In this section, we first define our CamCube-based network model. Next, we detail the multicast path-computation problem for the transmission of intra-CamCube DCN traffic flows.
A. CamCube Network Model
We denote Sithe set of all connected servers. The latters con-
stitute a directed weighted graph presenting the link capacities
in it. We denote this graph as G=(N(G),L(G)) where, N(G)
is the set of nodes in the graph (i.e., CamCube servers) and
L(G)the set of links between two directly linked neighbors.
We denote the link from ito jas ij 2L(G). The link ij has
an initial bandwidth capacity ˆ
Cij , and we denote its residual
bandwidth as Ck
ij at the time instant Tk.
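For concreteness, the graph G = (N(G), L(G)) of a 3D torus can be sketched as a dictionary of directed link capacities. This is an illustrative model only, not the authors' implementation; node naming and the uniform initial capacity are assumptions:

```python
import itertools

def camcube_graph(kx, ky, kz, capacity=100.0):
    """Build the directed weighted graph G = (N(G), L(G)) of a
    kx x ky x kz 3D torus: each server is linked to its two
    neighbours in every dimension, with wrap-around."""
    nodes = list(itertools.product(range(kx), range(ky), range(kz)))
    cap = {}                     # (i, j) -> residual bandwidth C^k_ij
    dims = (kx, ky, kz)
    for n in nodes:
        for d in range(3):       # dimensions X, Y, Z
            for step in (-1, 1):
                m = list(n)
                m[d] = (m[d] + step) % dims[d]   # torus wrap-around
                cap[(n, tuple(m))] = capacity
    return nodes, cap

# 3x3x3 instance as in Fig. 1: 27 servers, 6 outgoing links per server.
nodes, cap = camcube_graph(3, 3, 3)
```

Each server has degree 6 (two neighbors per dimension), which matches the k-ary 3-cube structure described above.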
B. Centralized multicast path computation within CamCube
DCN
In this proposal, we consider Constant Bit Rate (CBR) flows. Each flow F_i requests a fixed bandwidth B_i = V_i / T_i, where V_i, the volume of transferred bits, follows a uniform random distribution, and T_i, the flow duration, follows an exponential distribution. We model the arrivals of flows F_i as a Poisson process with rate λ. In order to maximize QoS satisfaction in the network, our objective is to compute the optimal multicast tree for each F_i by allocating the requested bandwidth. The SDN controller then exploits i) our optimization algorithm (the proposed SDN application) and ii) OpenFlow rules to respectively compute and install the multicast routing tree within the CamCube DCN.
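The flow model above can be sketched as follows. The 20-30 Mbps rate range and the 180 s mean duration are taken from the evaluation section; the seed and field names are illustrative choices:

```python
import random

def generate_flows(n, lam=64.0, mean_duration=180.0, seed=7):
    """Sample n CBR flow requests following the stated model:
    Poisson arrivals (rate lam), exponentially distributed durations
    T_i, and a volume V_i chosen so that B_i = V_i / T_i lies
    uniformly in [20, 30] Mbps."""
    rng = random.Random(seed)
    t, flows = 0.0, []
    for i in range(n):
        t += rng.expovariate(lam)                 # exponential inter-arrivals
        T = rng.expovariate(1.0 / mean_duration)  # duration T_i
        B = rng.uniform(20.0, 30.0)               # requested rate B_i
        flows.append({"id": i, "arrival": t, "T": T, "B": B, "V": B * T})
    return flows

flows = generate_flows(500)  # 500 flows, as in the evaluation
```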
Since the aim of this work is to maximize the QoS of flows in the CamCube DCN, we propound a multicast routing tree that minimizes the number of links while maximizing the minimal residual bandwidth in the DCN. Our proposal thus maximizes satisfaction in terms of requested bandwidth and enhances the provider's revenue while respecting the Service Level Agreement.
To be more specific, for a flow F_i from source s to the set of destinations D, we search for a multicast tree that maximizes the residual bandwidth with the fewest possible links. We denote by x_ij the binary variable equal to 1 when link ij is selected for the tree-path allocation and 0 otherwise. This can be stated formally as the following objective functions:

    maximize  min { y_ij | ij ∈ L(G) }
    then
    minimize  Σ_{ij} x_ij                                (1)

We denote by y_ij an auxiliary variable quantifying the residual capacity of the link, expressed as:

    y_ij = C^k_ij − B_k · x_ij                           (2)

where C^k_ij, the capacity of link ij, should be greater than the requested bandwidth B_k. Note that only links ij with sufficient capacity in terms of bandwidth can be considered to build the multicast routing tree in our system. Formally,

    ∀ ij ∈ L(G):  B_k · x_ij ≤ C^k_ij                    (3)
The link selection is subject to the following constraints, which enforce the multicast tree structure. First, at most one link may enter a node n, and no link may enter the source s:

    ∀ n ∈ N(G):  Σ_{ij | j = n} x_ij ≤ 1 − 1{n = s}      (4)

where 1{n = s} is the indicator function of the source. To construct the routing tree, at least one link must leave the source node s. Formally,

    Σ_{ij | i = s} x_ij ≥ 1                              (5)

We require the routing tree to pass through all destinations. Formally, this is expressed as:

    ∀ n ∈ D:  Σ_{ij | i = n} x_ij = 1                    (6)

In line with the tree structure, we require every selected link to have a parent link (at most one, by (4)), except for links that start at the source node s:

    ∀ mn ∈ L(G), m ≠ s:  Σ_{ij | j = m} x_ij ≥ x_mn      (7)

Similarly, we require every selected link to have a child, except for links terminating at a destination:

    ∀ mn ∈ L(G), n ∉ D:  Σ_{ij | i = n} x_ij ≥ x_mn      (8)
Our multicast tree problem in the CamCube datacenter network is summarized in Problem 1. It is a lexicographic multi-objective optimization problem:

    maximize  f_1(x) = min { y_ij | ij ∈ L(G) }          (9)
    then
    maximize  f_2(x) = − Σ_{ij} x_ij                     (10)

where f_1(x) is optimized before f_2(x). Thus, the multicast tree computation is expressed as a lexicographic optimization problem.

In the next section, we express our problem as a single-objective Mixed Integer Linear Program (MILP). Afterwards, we solve it with a novel Branch-and-Cut-based algorithm, named M-CRP.
Problem 1 Multicast Tree Routing problem in SDN CamCube DCN

    maximize  min { y_ij | ij ∈ L(G) }
    then
    minimize  Σ_{ij} x_ij

    subject to:
      y_ij = C^k_ij − B_k · x_ij                ∀ ij ∈ L(G)
      B_k · x_ij ≤ C^k_ij                       ∀ ij ∈ L(G)
      Σ_{ij | j = n} x_ij ≤ 1 − 1{n = s}        ∀ n ∈ N(G)
      Σ_{ij | i = n} x_ij = 1                   ∀ n ∈ D
      Σ_{ij | i = s} x_ij ≥ 1
      Σ_{ij | j = m} x_ij ≥ x_mn                ∀ mn ∈ L(G), m ≠ s
      Σ_{ij | i = n} x_ij ≥ x_mn                ∀ mn ∈ L(G), n ∉ D
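A feasibility check for a candidate link selection x against the constraints of Problem 1 can be sketched as below. `is_feasible_tree` is a hypothetical helper; it verifies the connectivity constraints (7)-(8) indirectly, by walking back from each destination to the source, rather than encoding them literally:

```python
def is_feasible_tree(x, links, nodes, s, D, B, C):
    """Check a candidate selection x (dict link -> 0/1) against the
    capacity (3), in-degree (4) and destination-coverage constraints,
    with connectivity checked by backtracking from each destination."""
    # (3) capacity: a selected link must carry the requested bandwidth B
    if any(x[l] and B > C[l] for l in links):
        return False
    # (4) at most one selected incoming link per node; none into s
    for n in nodes:
        indeg = sum(x[(i, j)] for (i, j) in links if j == n)
        if indeg > 1 or (n == s and indeg > 0):
            return False
    # every destination must be reachable from s over selected links
    for d in D:
        node, seen = d, set()
        while node != s:
            preds = [(i, j) for (i, j) in links if j == node and x[(i, j)]]
            if not preds or node in seen:   # no parent, or a cycle
                return False
            seen.add(node)
            node = preds[0][0]              # unique parent by (4)
    return True
```

On a toy 3-node graph, selecting only the direct link from the source to the destination passes the check, while selecting no link fails it.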
V. PROPOSAL
In this section, we detail our multicast tree computation algorithm, named Multicast CamCube Routing Protocol (M-CRP), which aims not only to generate the best multicast tree with respect to the bandwidth requirement of flows, but also to maximize load balancing in the CamCube datacenter network infrastructure.
To obtain an ILP formulation, we propose to linearize the first objective function, which is related to maximizing the minimum residual link capacity. To this end, we introduce a continuous variable z with the following constraint:

    z ≤ y_ij    ∀ ij ∈ L(G)

and we redefine the objective function f_1 as follows:

    maximize  f_1 = z

In the same vein, we propose to reformulate the multi-objective optimization problem defined in Problem 1 as a single-objective Mixed Integer Linear Program (MILP), as follows:

    maximize  f = M·f_1 + f_2 = M·z − Σ_{ij} x_ij

However, to ensure that the new formulation has the same optimal solution(s) as the original lexicographic instance, we exploit the structure of the modified problem to set the value of the constant M so as to guarantee this equivalence. We first note that, for any feasible solution (i.e., any allocated route for the flow), the values of the minimum residual capacity belong to the finite set Z̃ defined as:

    Z̃ = { C^k_ij − B_k | ij ∈ L(G) }
Thus, as shown in Fig. 3, M can be selected such that the (negative) slope of the line representing the total objective function is at least equal to the slope of the line connecting the points (z_optimal, −|N|) and (z_optimal − Δz, 0), where Δz denotes the difference between the optimal capacity value and the next-to-optimal one.

Then, following the same line of reasoning, we note that Δz ≥ Δz_f, where Δz_f is expressed as:

    Δz_f = min { z_2 − z_1 | z_2 > z_1, z_1, z_2 ∈ Z̃ }

Stated differently, we must choose M such that it satisfies the following relation:

    M ≥ |N| / Δz_f ≥ |N| / Δz
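The choice of M can be sketched numerically. `weight_M` is a hypothetical helper operating on the values of Z̃ = {C^k_ij − B_k}; the input values below are a toy instance, not from the paper:

```python
def weight_M(residual_gaps, n_nodes):
    """Compute M >= |N| / Dz_f, where Dz_f is the smallest positive
    gap between two distinct values of Z~ (the candidate minimum
    residual capacities)."""
    zs = sorted(set(residual_gaps))
    if len(zs) < 2:
        return float(n_nodes)   # a single level: any positive M works
    dz_f = min(b - a for a, b in zip(zs, zs[1:]))
    return n_nodes / dz_f

# Toy Z~ = {70, 75, 80} for |N| = 27 servers: smallest gap Dz_f = 5.
M = weight_M([70.0, 75.0, 80.0, 80.0], 27)   # 27 / 5 = 5.4
```

With this M, improving the minimum residual capacity by even one quantization step always outweighs saving all |N| links in the secondary objective.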
Hereafter, we summarize the reformulation of our problem.
Fig. 3. Calculation of the factor M: the aggregated objective f = M·f_1 + f_2 (with f_1 = z and f_2 = −Σ_{ij} x_ij) plotted against z, with the feasible suboptimal region between z_optimal − Δz and z_optimal and the f_2 axis bounded below by −|N|.
Problem 2 MILP Reformulation of the CamCube Multicast Routing problem

    maximize  M·z − Σ_{ij} x_ij

    subject to:
      z ≤ y_ij                                  ∀ ij ∈ L(G)
      B_k · x_ij ≤ C^k_ij                       ∀ ij ∈ L(G)
      Σ_{ij | j = n} x_ij ≤ 1 − 1{n = s}        ∀ n ∈ N(G)
      Σ_{ij | i = n} x_ij = 1                   ∀ n ∈ D
      Σ_{ij | i = s} x_ij ≥ 1
      Σ_{ij | j = m} x_ij ≥ x_mn                ∀ mn ∈ L(G), m ≠ s
      Σ_{ij | i = n} x_ij ≥ x_mn                ∀ mn ∈ L(G), n ∉ D
To solve the above MILP problem, we propose the M-CRP scheme described in Algorithm 1. Note that Algorithm 2 is based on Branch-and-Cut to solve the MILP problem. M-CRP is an online approach and is hence executed at every arrival of a traffic-flow request. Moreover, whenever Algorithm 2 cannot provide a solution for an arriving flow request, the latter is added to a waiting queue and scheduled for a new resolution attempt every Δt.
VI. PERFORMANCE EVALUATION
In this section, we assess the performance of the M-CRP proposal within an SDN-based CamCube DCN by conducting extensive experiments on a real-world emulated testbed. We first detail the experimental platform, based on the ONOS SDN controller and the Mininet emulator. Afterwards, we enumerate the performance metrics. Finally, we evaluate the proposed M-CRP by analyzing the obtained experimental results in comparison with the traditional multicast protocol based on the shortest path in terms of hops.
A. Emulated Environment and Scenarios
We emulate the CamCube DCN infrastructure within Mininet, an open-source emulator that supports research, development and learning by providing a set of APIs helping users to automate node creation and communication. Thanks to Mininet, we create a large-scale virtualized CamCube DCN infrastructure in order to analyze the network and virtual functions. In our scenario, we test scalability by varying the DCN topology size from 10×10×5 (i.e., 500 servers) to 10×10×12 (i.e., 1200 servers), and we fix the maximum capacity at 100 Mbps for the links
Algorithm 1 Pseudo-algorithm of the Multicast CamCube Routing Protocol (M-CRP)
1: Inputs: N(G), L(G), B_k, C^k_ij
2: Output: tree T for flow F_k
3: L_k ← { ij | C^k_ij ≥ B_k }
4: Z̃ ← { C^k_ij − B_k | ij ∈ L(G) }
5: if |Z̃| ≤ 1 then
6:   T ← calculated by single-source to multi-destination routing
7: else
8:   Δz_f ← min { z_2 − z_1 | z_2 > z_1, z_1, z_2 ∈ Z̃ }
9:   M ← |N| / Δz_f
10:  Construct the MILP as detailed in Problem 2.
11:  Solve Problem 2 with Algorithm 2.
12:  if not solved then
13:    Go back to step 11 every Δt
14:  end if
15:  T ← { ij | x_ij = 1 }, where ij ∈ L(G)
16:  Install the multicast tree T via the SDN controller (ONOS)
17: end if
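The control flow of Algorithm 1 can be sketched as follows, with `solve_milp` and `install` standing in for the Branch-and-Cut solver (Algorithm 2) and the ONOS flow-rule installation. Both callables are placeholders, not the authors' code, and the degenerate |Z̃| ≤ 1 branch is simplified:

```python
def m_crp(flow, cap, n_nodes, solve_milp, install, queue):
    """One M-CRP round for a flow request: prune links (step 3),
    derive M from Z~ (steps 4, 8-9), solve the MILP (steps 10-11),
    then install the tree (step 16) or queue the flow for retry."""
    B = flow["B"]
    usable = {l: c for l, c in cap.items() if c >= B}   # step 3: L_k
    gaps = {c - B for c in usable.values()}             # step 4: Z~
    if len(gaps) <= 1:
        M = float(n_nodes)    # |Z~| <= 1: Algorithm 1 would fall back
    else:                     # to plain routing; simplified here
        zs = sorted(gaps)
        M = n_nodes / min(b - a for a, b in zip(zs, zs[1:]))  # steps 8-9
    tree = solve_milp(usable, B, M)                     # steps 10-11
    if tree is None:
        queue.append(flow)    # steps 12-13: retry every Dt
        return None
    install(tree)             # step 16: push rules via ONOS
    return tree
```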
Algorithm 2 Pseudo-algorithm to solve the MILP model
1: Let t_0 be the initial problem and L = {t_0} the set of active problem nodes.
2: Let x*_ij = 0, ∀ ij ∈ L(G); y* = −∞
3: repeat
4:   Select and delete a problem t_k from L
5:   Solve t̂_k, the LP relaxation of t_k, in which the x̂_ij, ∀ ij ∈ L(G), take continuous values between 0 and 1.
6:   if t̂_k is infeasible then
7:     Go back to step 3
8:   else
9:     Let X̂ be the optimal solution, with objective value ŷ
10:    if ŷ ≤ y* then
11:      Go back to step 3
12:    end if
13:    if the x̂_ij, ∀ ij ∈ L(G), are all integer (x̂_ij ∈ {0, 1}) then
14:      y* ← ŷ
15:      X* ← X̂
16:      Go back to step 3
17:    end if
18:  end if
19:  Search for cutting planes C violated by x̂
20:  if C ≠ ∅ then
21:    for c ∈ C do
22:      t_k ← t_k ∪ {c}
23:    end for
24:    Go back to step 5
25:  else
26:    Branch to partition the problem into new subproblems with restricted feasible regions.
27:    Add these subproblems to L
28:    Go back to step 3
29:  end if
30: until L = ∅
31: return X*
connecting neighboring servers.
For the experimental setup, we set the total number of flows to 500. The arrival rate follows a Poisson process with rate λ_f = 64 flows per second. We set Δt = 1 s as the period of resolution attempts. Based on a real Cisco router setup5, we set the IP queue size in the CamCube servers to 20,000 packets.
5https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/qos conmgt
/configuration/xe-3s/qos-conmgt-xe-3s-book/qos-conmgt-qdepth.html
To generate the Constant Bit Rate (CBR) multicast traffic, the multicast group size follows a uniform random distribution between 2 and 20, and the group members are selected uniformly at random among all CamCube servers. The duration of flows follows an exponential distribution with an average of d = 180 s. Moreover, we set the packet size to 1500 bytes and generate UDP flows. The throughput of the CBR traffic follows a uniform random distribution between 20 and 30 Mbps. We use the iPerf tool to generate the CBR/UDP traffic flows. The performance results are reported with 95% confidence intervals.
B. Performance Metrics
• Packet Loss Rate: We define the Packet Loss Rate (P) as the percentage of IP packets lost over the total number of flows.
• Latency: We consider all flows whose transmission finished in the network and calculate the latency (L) as their average packet delay. For a single flow F_i, we define l_i as the average delay of its transmitted packets, and N as the number of finished flows. Then, L = (1/N) Σ_{i=1}^{N} l_i.
• Jitter: We define the jitter (J) as the variation of the latency over all transmitted flows within the CamCube DCN.
• Number of Resolution Attempts: If an arriving flow cannot be admitted by our proposal M-CRP due to network congestion (i.e., insufficient resources), the multicast flow waits in the queue until M-CRP succeeds in deploying it in the network. We define A as the average number of attempts needed for an arriving flow to be admitted and resolved by the M-CRP SDN application.
• Full Tree per Multicast Group: This metric quantifies the average ratio between the sizes of i) the full multicast tree (all tree nodes) and ii) the multicast group. Note that a large value means that many relay nodes were used to build the multicast tree. T_i denotes this ratio for flow F_i; thus, T = (1/N) Σ T_i.
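The scalar metrics P, L, A and T reduce to simple averages over per-flow records. A minimal sketch with hypothetical field names (jitter is omitted, as it needs per-packet delay samples rather than per-flow averages):

```python
def qos_metrics(flow_stats):
    """Aggregate P, L, A and T from per-flow records carrying
    sent/lost packet counts, the mean packet delay l_i, the number
    of resolution attempts, and the tree and group sizes."""
    N = len(flow_stats)
    sent = sum(f["sent"] for f in flow_stats)
    lost = sum(f["lost"] for f in flow_stats)
    P = 100.0 * lost / sent                        # packet loss rate (%)
    L = sum(f["delay"] for f in flow_stats) / N    # latency: mean of l_i
    A = sum(f["attempts"] for f in flow_stats) / N # mean resolution attempts
    T = sum(f["tree"] / f["group"] for f in flow_stats) / N  # tree/group ratio
    return P, L, A, T
```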
C. Experimental Results
In this section, we start by analyzing the figures related to the QoS of our proposal in terms of packet loss, latency and jitter. Then, we discuss the cost and the quality of the multicast trees generated by M-CRP in comparison with multicast trees based on shortest-path routing.
In Fig. 4(a), we illustrate the IP packet loss rate (P). It is noticeable that the packet loss rate of our proposal M-CRP remains constant and equal to zero for all topology sizes (i.e., 500 to 1200 servers). This is explained by the fact that M-CRP deploys the least congested links to transmit packets. For the shortest path, however, P decreases only slowly even as the DCN size increases (i.e., with more resources). For example, when the size grows from 600 to 900 servers (+50%), the packet loss equals 2.5% ± 0.002 and 1.3% ± 0.003, respectively.
In Fig. 4(b), we show the latency (L) of the flows for both compared protocols. As we can see, despite the increasing number of servers, our proposal outperforms the shortest-path approach. For instance, when the CamCube DCN holds 900 servers, the average flow latency equals 0.03 ± 0.0004 ms for M-CRP and 326.7 ± 71.05 ms for the shortest-path tree. We conclude that our proposal better exploits the less congested links, and hence the latency is deeply reduced.
In Fig. 4(c), we illustrate the jitter (J) for M-CRP and the multicast shortest path. The results show that the shortest-path jitter is almost 100 times higher than that of M-CRP. For instance, for 600 and 900 servers respectively, J reaches 0.003 ± 0.0003 ms and 0.004 ± 0.0002 ms for M-CRP, whereas it equals 31.63 ± 2.68 ms and 19.74 ± 2.13 ms for the shortest path. Our proposal M-CRP thus deeply reduces the jitter in comparison with the shortest-path approach.
Fig. 4. QoS performance versus topology size (500 to 1200 servers) for M-CRP and Shortest Path: (a) Packet Loss Rate P, (b) Latency L in ms (log scale), (c) Jitter J in ms (log scale)
Fig. 5. M-CRP – Number of Resolution Attempts A versus topology size (500 to 1200 servers)
Fig. 5 illustrates the average number of resolution attempts (A) over all arriving flows for our proposal M-CRP. It is straightforward to see that the shortest path immediately admits all arriving flows, even when the network is congested and no resources are available. M-CRP, in contrast, performs admission control and considers the amount of residual resources in its decision process. For example, for M-CRP, A equals 30.5 ± 4.33 and 8.56 ± 1 for DCN sizes of 600 and 900 servers, respectively. We notice that the average number of resolution attempts decreases as the DCN size grows. This is explained by the fact that the network is less congested when more resources (i.e., links) are available, and hence M-CRP more easily finds available paths.
Finally, in Fig. 6, we illustrate the average ratio (T) between the sizes of the multicast tree and the multicast group. The figure shows that with M-CRP the ratio is stable: at 600 and 900 servers respectively, T equals 4.24 ± 0.07 and 4.48 ± 0.15. With the shortest-path algorithm, on the other hand, the ratio grows linearly: from 600 to 900 servers, T goes from 4.67 ± 0.02 to 5.25 ± 0.03. These results confirm that M-CRP uses fewer intermediate nodes to build the tree than multicast shortest-path routing; in other words, the multicast trees generated by M-CRP are denser (in depth and width).
VII. CONCLUSION
In this paper, we addressed the multicast routing problem within SDN-based server-only CamCube data center networks. We deployed an experimental platform with the ONOS SDN controller and the Mininet emulator, and we proposed a novel multicast SDN application named M-CRP. The experimental results show that M-CRP outperforms the traditional multicast shortest path in terms of latency, jitter, packet loss and the quality of the generated multicast trees.
REFERENCES
[1] F. Yao, J. Wu, G. Venkataramani, and S. Subramaniam, “A comparative analysis of data center network architectures,” in Proc. IEEE ICC, 2014, pp. 3106–3111.
[2] H. Abu-Libdeh, P. Costa, A. Rowstron, G. O’Shea, and A. Donnelly,
“Symbiotic Routing in Future Data Centers,” in Proceedings of the ACM
SIGCOMM 2010 Conference, ser. SIGCOMM ’10, 2010, pp. 51–62.
Fig. 6. Size ratio – Full Tree per Multicast Group T versus topology size (500 to 1200 servers)
[3] K. Chen, C. Hu, X. Zhang, K. Zheng, Y. Chen, and A. V. Vasilakos, “Survey on routing in data centers: insights and future directions,” IEEE Network, vol. 25, no. 4, 2011.
[4] D. Li, J. Yu, J. Yu, and J. Wu, “Exploring efficient and scalable multicast
routing in future data center networks,” in 2011 Proceedings IEEE
INFOCOM. IEEE, 2011, pp. 1368–1376.
[5] D. Li, M. Xu, Y. Liu, X. Xie, Y. Cui, J. Wang, and G. Chen, “Reliable
multicast in data center networks,” IEEE Transactions on Computers,
vol. 63, no. 8, pp. 2011–2024, 2014.
[6] S. Ratnasamy, A. Ermolinskiy, and S. Shenker, “Revisiting ip multicast,”
ACM SIGCOMM Computer Communication Review, vol. 36, no. 4, pp.
15–26, 2006.
[7] D. Li, H. Cui, Y. Hu, Y. Xia, and X. Wang, “Scalable data center mul-
ticast using multi-class bloom filter,” in 2011 19th IEEE International
Conference on Network Protocols. IEEE, 2011, pp. 266–275.
[8] M. Särelä, C. E. Rothenberg, T. Aura, A. Zahemszky, P. Nikander, and J. Ott, “Forwarding anomalies in Bloom filter-based multicast,” in 2011 Proceedings IEEE INFOCOM. IEEE, 2011, pp. 2399–2407.
[9] P. Jokela, A. Zahemszky, C. Esteve Rothenberg, S. Arianfar, and
P. Nikander, “Lipsin: line speed publish/subscribe inter-networking,”
ACM SIGCOMM Computer Communication Review, vol. 39, no. 4, pp.
195–206, 2009.
[10] W.-K. Jia, “A scalable multicast source routing architecture for data
center networks,” IEEE Journal on Selected Areas in Communications,
vol. 32, no. 1, pp. 116–123, 2014.
[11] J. Cao, C. Guo, G. Lu, Y. Xiong, Y. Zheng, Y. Zhang, Y. Zhu, C. Chen,
and Y. Tian, “Datacast: A scalable and efficient reliable group data
delivery service for data centers,” IEEE Journal on Selected Areas in
Communications, vol. 31, no. 12, pp. 2632–2645, 2013.
[12] A. Iyer, P. Kumar, and V. Mann, “Avalanche: Data center multicast using
software defined networking,” in 2014 sixth international conference on
communication systems and networks (COMSNETS). IEEE, 2014, pp.
1–8.
[13] S. Shukla, P. Ranjan, and K. Singh, “Mcdc: Multicast routing leveraging
sdn for data center networks,” in 2016 6th International Conference-
Cloud System and Big Data Engineering (Confluence). IEEE, 2016,
pp. 585–590.
[14] Z. Kouba, O. Tomanek, and L. Kencl, “Evaluation of datacenter network
topology influence on hadoop mapreduce performance,” in 2016 5th
IEEE International Conference on Cloud Networking (Cloudnet), 2016,
pp. 95–100.
[15] P. Costa, A. Donnelly, G. O’Shea, and A. Rowstron, “CamCubeOS:
A Key-based Network Stack for 3d Torus Cluster Topologies,” in
Proceedings of the 22Nd International Symposium on High-performance
Parallel and Distributed Computing. ACM, 2013, pp. 73–84.
[16] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson,
J. Rexford, S. Shenker, and J. Turner, “Openflow: Enabling innovation
in campus networks,” SIGCOMM Comput. Commun. Rev., pp. 69–74,
2008.
[17] A. L. Stancu, S. Halunga, A. Vulpe, G. Suciu, O. Fratu, and E. C.
Popovici, “A comparison between several software defined networking
controllers,” in 2015 12th International Conference on Telecommunica-
tion in Modern Satellite, Cable and Broadcasting Services (TELSIKS),
2015, pp. 223–226.