Delay Analysis of Fronthaul Traffic in 5G Transport
Networks
Gabriel Otero Pérez, José Alberto Hernández, and David Larrabeiti López
Department of Telematics Engineering, Universidad Carlos III de Madrid, Spain
Email: {gaoterop, jahgutie, dlarra}@it.uc3m.es
Abstract—Cloud Radio Access Network (C-RAN) architecture
claims to reduce capital costs and facilitate the implementation
of multi-site coordination mechanisms. This paper studies the
delay constraints imposed by the Common Public Radio Interface
(CPRI) protocol in ring-star topologies used by mobile operators.
Simulations demonstrate that centralised implementations are
feasible via functional split in the baseband processing chain.
We derive theoretical expressions for propagation and queueing
delay, assuming a G/G/1 queueing model. Then, we examine
the properties of the fronthaul traffic flows and their behaviour
when they are mixed. We show that the theoretical queueing
delay estimations are an upper bound on the simulation output
and accurate under certain conditions. Based on our results, we
further propose a packetisation strategy of the fronthaul traffic
which helps reduce the worst case aggregated queueing delay
by 30%. Also, the benefits of a bidirectional ring topology are
shown, achieving a worst average queueing delay 10 times lower
than that of unidirectional topologies.
Index Terms—5G, CPRI, C-RAN, Fronthaul, Delay Analysis
I. INTRODUCTION
A. Motivation
ACCORDING to the Visual Networking Index (VNI)
Global Mobile Data Forecast [1] released by Cisco
in February 2016, there will be 5.5 billion global mobile
users by 2020. Nielsen's law [2] states that the required
network bandwidth increases steadily by approximately
50% every year. In the near future, a high-bandwidth, low-
latency interconnection network will be mandatory. In order
to cope with the ever increasing traffic load that the networks
will need to support, a new approach for planning cellular
deployments should be followed.
The Cloud Radio Access Network (C-RAN) architecture,
presented by China Mobile [3], introduces the idea of a cloud
computing-based processing of baseband signals on cellular
networks. Experiments testing the C-RAN framework reveal
that significant savings in both operational expenditure (OPEX)
and capital expenditure (CAPEX) can be achieved. This concept
represents large-scale centralised base station deployments,
achieving significant cost reductions by separating the radio
equipment of each base station from the elements that process
the signals, which now are centralised and possibly virtualised.
In this scenario, the Common Public Radio Interface (CPRI) [4]
protocol provides an interface between the radio transceivers,
Remote Radio Heads (RRHs), and the processing units, i.e.,
Baseband Units (BBUs), to transport the so-called fronthaul
traffic generated at the RRH through the backhaul network.
CPRI is an industry standard that can be used to implement
the Digitized radio-over-fiber (DROF) concept proposed in,
e.g., [5], [6]. Additionally, there exist several packetisation
projects which aim at developing protocols for the transport
of radio samples over packet-switched networks. It is worth
highlighting the work being carried out on the Time-Sensitive
Networking for Fronthaul IEEE 802.1CM standard [7], which
aims to enable the transport of time-sensitive fronthaul
streams over Ethernet bridged networks. However, to date,
no characterisation of fronthaul traffic and of the aggregation
of fronthaul flows has been performed, which is a preliminary
step towards fulfilling the stringent delay and jitter requirements
of this type of traffic.
We address these questions as follows: Section II presents
the problem scenario, assumptions and analyses the propa-
gation delay. In Section III, we analyse the behaviour of the
fronthaul traffic in a simulated ring-star topology and compute
the end-to-end queueing delay. We conclude our paper in
Section IV.
II. PROBLEM STATEMENT AND ASSUMPTIONS
A. Reference scenario
Figure 1 illustrates the topology used to measure and evalu-
ate the performance of the different approaches. Since network
coverage and spatial reuse are fundamental issues of wireless
networks, operators traditionally followed a hexagonal grid
deployment architecture, in which a target field is partitioned
into hexagonal grids. Assuming a hexagonal cell coverage
area, we have groups of 7 cells that, from now on, we will
refer to as hives. Every cell comprises three sectors, each
served by a 120° sector antenna. Regarding the uplink (the
one that imposes the most stringent delay requirements [12]),
the traffic originating from the three sectors is aggregated at
the centre of the cell in a tree topology, using direct optical
fibre links. Later, the traffic of each cell in a hive is mixed
together at its central point (see numbered circles). The green
star at the centre of Fig. 1 represents the point where the
flows coming from all the topology's sectors meet. It is also part
of a higher-level (ring) network connecting other groups of
hives, or clusters. In addition, it is a natural candidate for hosting
the BBU processing unit for the entire cluster. We number the
hives in descending clockwise order, starting from the yellow
south-westernmost one, which we will refer to as hive #7.
The last hive in the ring is then hive #1.
Fig. 1. Regular hexagonal lattice deployment.

TABLE I
PROPAGATION DELAY - WORST CASE - RING-STAR TOPOLOGY
Scenario                     Unidirectional    Bidirectional
Dense Urban (R = 300 m)      43.84 μs          22.22 μs
Urban (R = 500 m)            73.069 μs         38.70 μs
Rural (R = 1500 m)           219.21 μs         116.10 μs

One of the end-to-end delay components is the propagation
delay of the packets. Assuming the physical properties of the
optical links to be fixed, the propagation delay depends only
on the total physical distance between the radio equipment
and the processing units. Regarding the cell coverage radius,
common values used in the related literature [8] are 300 m, 500
m and 1500 m for Dense-urban, Urban and Rural scenarios,
respectively. On the other hand, we must take into account
the queueing delay, which appears whenever we packetise the
user's uplink radio signals. Since the communication takes
place over a ring, we should bear in mind that it may be
unidirectional or bidirectional. In the first case, the worst-case
scenario occurs for the packets originating in hive
#7, which suffer the highest propagation and queueing delays
since they have to travel the longest distance to reach the
final aggregation point in hive #1. In the bidirectional ring
case, the propagation delay is expected to be half of that
in the unidirectional case, and the queueing delay would be
affected by half the number of aggregation points. Trivial
geometry calculations [9] lead to a closed form expression
for the propagation delay, which is defined as follows:

    d_prop = [R · cos(30°) · (2·√7·N + 2)] / v_prop                  (1)

where R denotes the radius of a hexagonal cell, N represents
the number of hives a given packet needs to cross until it
reaches its final destination, and v_prop is the speed of light in
optical fiber. Since the refractive index of glass is approximately
1.5, we assume v_prop ≈ 2·10⁸ m/s, which corresponds to a 5 μs/km
delay. Table I shows the results of applying the derived analytic
expression (see eq. (1)) for the propagation delay in the
proposed topology. Worst-case scenarios have been taken into
account to compute the final delay values for each case, i.e.,
the propagation delay values in the table are those experienced
by the packets originating in the furthest cells from the central
BBU in the ring topology. The target maximum delay budget
determined by the IEEE 802.1CM working group is 100 μs. Half
of it can be spent on propagation, which allows a target
RRH-BBU distance of up to 50 μs / (5 μs/km) = 10 km. As for the
other half, it should be enough to allow a reasonable number of
hops, each of them adding both processing and queueing
delays.

Fig. 2. Uplink LTE processing chain.
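For reference, the values in Table I and the 10 km distance budget can be reproduced with a few lines of code. The following Python sketch is illustrative and not part of the original study; it assumes the reconstructed form of eq. (1), i.e., an inter-hive hop length of 2·√7·R·cos(30°) plus a 2·R·cos(30°) cell-to-hive-centre segment, and the 5 μs/km fibre delay stated above.

import math

V_PROP = 2e8  # propagation speed in fibre [m/s], i.e. 5 us/km
APOTHEM = math.cos(math.radians(30))  # hexagon apothem factor R*cos(30 deg)

def prop_delay_us(radius_m, hops):
    """Worst-case propagation delay of eq. (1): 'hops' inter-hive hops of
    2*sqrt(7)*R*cos(30 deg) each, plus 2*R*cos(30 deg) from cell to hive centre."""
    distance_m = radius_m * APOTHEM * (2 * math.sqrt(7) * hops + 2)
    return distance_m / V_PROP * 1e6

# Unidirectional worst case: 6 inter-hive hops from hive #7 to hive #1
for name, r in [("Dense Urban", 300), ("Urban", 500), ("Rural", 1500)]:
    print(f"{name:11s} (R = {r:4d} m): {prop_delay_us(r, 6):7.2f} us")

# IEEE 802.1CM-style budget: 50 us of propagation at 5 us/km -> 10 km reach
print("Maximum RRH-BBU distance:", 50 / 5, "km")

Under these assumptions the script returns 43.84 μs, 73.07 μs and 219.21 μs, in line with the unidirectional column of Table I.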
B. Functional split B
The first option to connect an RRH and the BBU consists
of transmitting the pure sampled radio signal [10]. Since no
further processing is done at the remote radio equipment,
overhead information, such as the cyclic prefix (CP), is also
sent over the link to the BBU. Complex processing devices
are no longer needed at the RRH because all the functions
required to decode the signal are located at the BBU. The data
rate, assuming a single-carrier implementation, is

    R_SplitA = 2 · f_s · N_ov · N_bits · N_ant                       (2)

where f_s is the sampling frequency and N_ov is the oversampling
factor. N_bits and N_ant represent the bit resolution used to
quantise the signal samples and the number of receiving
antennas, respectively. The factor of 2 accounts for the complex
nature of each I/Q sample. For instance, a 2-branch antenna
system with 10 bits per sample, assuming an oversampling
factor of 2 and a sampling frequency of 30.72 MHz, requires
2.46 Gbit/s per sector. In
order to relax the bandwidth burden, we may remove the cyclic
prefix from the quantised signals and perform a Fast Fourier
Transform (FFT) to decode the OFDM subcarriers. Subcarriers
used as a guard band, typically around 40% of them, are no longer
necessary. Assuming a resolution of 10 bits per sample, a
subcarrier spacing of 15 kHz and a symbol duration of 66.7 μs
to maintain orthogonality, the new rate is

    R_SplitB = 2 · f_s · N_Sub · N_ov · N_bits · N_ant ≈ 720 Mb/s    (3)

where 1/f_s = 66.7 μs and N_Sub = 1200 active subcarriers are con-
sidered. In Split B, the bandwidth requirement is clearly
reduced. Further processing of the signals at the base station
may leverage the fact that only part of the resource blocks
of the base station are being assigned to users. This is the
case of functional splits C, D and E. Bandwidth requirements
obviously decrease as we leave more processing equipment at
the RRHs but, on the other hand, deployment and maintenance
costs increase and we progressively lose the benefits of cen-
tralisation [11]. Detailed investigations of the above-mentioned
splits have been conducted in [12].
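As a quick sanity check on eqs. (2) and (3), the per-sector figures quoted above follow directly from the stated parameters. The snippet below is an illustrative sketch rather than material from the paper; for Split B the oversampling factor is taken as 1 after the FFT, an assumption consistent with the quoted 720 Mb/s.

# Split A: raw oversampled I/Q samples, eq. (2)
fs = 30.72e6     # sampling frequency [Hz]
n_ov = 2         # oversampling factor
n_bits = 10      # bits per I or Q component
n_ant = 2        # receiving antennas
r_split_a = 2 * fs * n_ov * n_bits * n_ant
print(f"Split A: {r_split_a / 1e9:.2f} Gbit/s per sector")   # ~2.46 Gbit/s

# Split B: CP removed, FFT done, only the 1200 active subcarriers kept, eq. (3)
symbol_rate = 15e3   # one OFDM symbol every 66.7 us, i.e. 15 kHz
n_sub = 1200         # active subcarriers
r_split_b = 2 * symbol_rate * n_sub * n_bits * n_ant   # N_ov taken as 1 here
print(f"Split B: {r_split_b / 1e6:.0f} Mbit/s per sector")   # 720 Mbit/s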
C. Queueing theory review
M/M/1 and M/G/1 models are attractive because closed-form
expressions can be obtained for the main metrics of interest,
such as the average waiting time in queue, the mean number of
users in the queue, the server load, etc. However, these models
assume exponentially distributed interarrival times. In our case,
the time between arrivals is not exponentially distributed (until
a large enough number of constant-bit-rate flows is merged, see
Section III-A) and the service time is constant rather than
exponentially distributed. This is the reason why the G/G/1
model, which generalises both interarrival and service times, is
preferred to obtain the estimations of the mean waiting time
in queue in our multihop topology. Unfortunately, no closed-form
expressions exist for the mean waiting time in queue under
these assumptions. Let T be the random variable modelling
the interarrival times of packets at the queue and S the
service time random variable. Also, we write ρ for the system
utilisation. Defining the squared coefficient of variation of a
random variable X as C²[X] = Var[X] / E[X]², an upper bound on
the mean waiting time in queue is [13]

    W_q ≤ E[S] · [ρ / (1 − ρ)] · [(C²[T] + C²[S]) / 2]               (4)

which is also a good approximation to the mean queue
waiting time when ρ → 1. There also exists a lower bound,
which is not very useful since it often gives negative, and hence
trivial, results.
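Eq. (4) translates directly into code. The sketch below is illustrative; the example parameters (a 1500-byte payload plus 46-byte header served deterministically on a 100 Gb/s link, and C²[T] = 2, roughly the value reported later for 150 merged flows with 3 packets per burst) are assumptions chosen to resemble the scenarios of Section III rather than values taken from the paper.

def kingman_wq_upper(e_s, rho, c2_t, c2_s):
    """Upper bound of eq. (4) on the mean G/G/1 waiting time in queue:
    Wq <= E[S] * rho/(1 - rho) * (C2[T] + C2[S]) / 2."""
    if not 0 <= rho < 1:
        raise ValueError("utilisation must satisfy 0 <= rho < 1")
    return e_s * rho / (1 - rho) * (c2_t + c2_s) / 2

# Deterministic service of a 1500 B payload + 46 B header packet at 100 Gb/s
service_s = (1500 + 46) * 8 / 100e9
for rho in (0.4, 0.8, 0.95):
    wq = kingman_wq_upper(service_s, rho, c2_t=2.0, c2_s=0.0)
    print(f"rho = {rho:.2f}: Wq <= {wq * 1e6:.3f} us")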
III. EXPERIMENTS
After analysing the bandwidth requirements in Section
II-B, we focus our study on Split B. Recall that, in Split
B, traffic follows a constant bit rate (CBR) pattern with a
rate of 720 Mb/s (90 MB/s) and an OFDM symbol is sent
every 66.7 μs. Accordingly, we can compute the burst size
as 90 MB/s · (15 kHz)⁻¹ = 6000 bytes. Figure 3 shows
the initial packetisation scheme chosen to conduct our study,
which consists of four back-to-back packets. The efficiency
of each packet is given by

    η = Packet Payload / (Packet Header + Packet Payload)            (5)

Consequently, packets with a 46-byte header (RoE over MAC-in-
MAC) and 1500 bytes of payload lead to an efficiency
η ≈ 0.97. Table II shows the efficiency values for different
payload sizes. Additionally, we implemented a custom discrete-
event simulator so as to assess the validity of the theoretical
approximations as well as to unveil the behaviour and properties
of the traffic under different conditions. We make use of this
simulator in the following experiments, considering 100 Gb/s
links for the ring-star topology explained in Section II-A.

Fig. 3. Split B burst.

TABLE II
Payload Size     Efficiency (η)
500 bytes        0.916
1000 bytes       0.956
1500 bytes       0.970
2000 bytes       0.978

Fig. 4. Arrivals squared coefficient of variation vs. number of
merged flows, for 12, 6, 4 and 3 packets per burst (Poisson
arrival process shown as reference).
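The burst size and the values of Table II follow from the 720 Mb/s Split B rate and eq. (5). The sketch below reproduces them and also shows how many packets of each payload size are needed per burst, which appears to be where the 12, 6, 4 and 3 packets-per-burst options studied later come from; the 46-byte RoE-over-MAC-in-MAC header is the one assumed in the text.

import math

RATE_BPS = 720e6          # Split B rate per sector
SYMBOL_RATE_HZ = 15e3     # one OFDM symbol every 66.7 us
HEADER_BYTES = 46         # RoE over MAC-in-MAC header

burst_bytes = RATE_BPS / 8 / SYMBOL_RATE_HZ   # 6000 bytes per burst
print(f"Burst size: {burst_bytes:.0f} bytes")

for payload in (500, 1000, 1500, 2000):
    eta = payload / (HEADER_BYTES + payload)          # eq. (5)
    pkts = math.ceil(burst_bytes / payload)           # packets per burst
    print(f"{payload:5d} B payload: eta = {eta:.3f}, {pkts:2d} packets per burst")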
A. Aggregation of multiple fronthaul flows
The aim of this experiment is to check the steady state
convergence of the arrivals squared coefficient of variation, if
any, when we aggregate more and more flows in a given hive’s
hub. Flows are merged by applying an offset to each deterministic
burst flow, uniformly distributed between 0 and the burst
period, i.e., U(0, T), where T = 66.7 μs. In the worst case, two
flows are completely aligned, that is, their bursts arrive at the
aggregation point at the same time.
As shown in Figure 4, the squared coefficient of variation
of the packet arrivals converges to unity as we increase
the number of mixed flows. This behaviour is explained by
the Palm-Khintchine theorem [14], which states that if we
combine a large enough number of independent and not
necessarily Poissonian renewal processes, each with small
intensity, the aggregate exhibits Poissonian properties¹.
¹Squared coefficient of variation: C² ≈ 1.
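The trend of Fig. 4 can be approximated with a simple numerical experiment: superpose K periodic burst flows, each shifted by an independent U(0, T) offset, and measure the squared coefficient of variation of the merged interarrival times. The sketch below is an independent illustration, not the paper's simulator; in particular, it assumes that the packets of a burst arrive back to back, spaced by their transmission time on the 100 Gb/s aggregation link.

import numpy as np

T = 66.7e-6                           # burst period [s]
PKT_TIME = (1500 + 46) * 8 / 100e9    # packet transmission time at 100 Gb/s

def merged_c2(num_flows, pkts_per_burst, periods=200, seed=1):
    """C2 of the interarrival times when num_flows periodic burst flows,
    each with a random U(0, T) offset, are merged at one aggregation point."""
    rng = np.random.default_rng(seed)
    offsets = rng.uniform(0.0, T, size=num_flows)
    bursts = offsets[:, None] + T * np.arange(periods)            # burst starts
    pkts = bursts[..., None] + PKT_TIME * np.arange(pkts_per_burst)
    inter = np.diff(np.sort(pkts.ravel()))
    return inter.var() / inter.mean() ** 2

for k in (10, 150, 600):
    print(f"{k:4d} flows: C2[T] ~ {merged_c2(k, pkts_per_burst=4):.2f}")

This lets one observe the convergence discussed above as the number of merged flows grows.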
Fig. 5. Ratio between the theoretical G/G/1 estimation and the
simulated queueing delay vs. system load, for 3, 10, 50, 100
and 150 aggregated flows; 4 packets per burst.
It is rather important to stress that the rate of convergence to the
steady state differs depending on the payload size of the packets
we use to transport the burst. Note that the arrival process
is not Poissonian (i.e., the squared coefficient of variation does
not converge to unity) until we merge, approximately, more than
600 flows with the aforementioned bursty structure. Hence, we
cannot assume the M/M/1 nor the M/G/1 model as good
approximations, since only 3 sectors · 7 cells · 7 hives = 147
fronthaul flows are merged at the last hop of the topology, in the
worst case. In addition, for 150 flows, the squared coefficient of
variation of the arrivals reduces from about 8 when using 12
packets per burst to roughly 2 when we use 3. This factor strongly
affects the queueing delay in view of equation (4). Also, Figure 3
shows that the service time of the packets is not exponentially
distributed: since they are all of the same size, it
follows a deterministic distribution. In order to take all these
aspects into account, we model both distributions as general
distributions characterised by the appropriate coefficients of
variation. This supports the decision of using the G/G/1
queueing model to estimate the theoretical queueing
delay.
B. Theoretical estimations vs Simulation
In this section, we assess the validity of the analytic estima-
tions and how close they are to the simulation outputs. Figure
5 shows the evolution of the ratio between the theoretical
G/G/1 estimation and the simulation results for the queue
waiting time as we increase the load of a given aggregation
point, for different numbers of aggregated flows. It is worth
noting that the more flows we merge, the more similar the
analytic estimations and the simulation outputs become as
we approach heavy-load states. The ratio approaches unity
for system loads ρ ≥ 0.4 when merging 50 or more flows.
Additionally, we observe that the theoretical G/G/1 queueing
model is, indeed, an upper bound on the simulation outputs.
Since theoretical values are only approximations and consid-
ering the gap between the estimated and the simulated values
for some load conditions, we present only the simulation
outputs from now on.
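The comparison of Fig. 5 can also be approximated outside the authors' simulator by feeding a merged arrival trace through the Lindley recursion of a single FIFO queue and dividing the bound of eq. (4) by the simulated mean waiting time. The sketch below is an independent, simplified illustration whose numbers are not expected to match Fig. 5: the in-burst packet spacing is abstracted away, the service time is deterministic, and the link rate is scaled to hit the target utilisation ρ.

import numpy as np

T = 66.7e-6                   # burst period [s]
PKT_BITS = (1500 + 46) * 8    # packet size (1500 B payload + 46 B header)
PPB = 4                       # packets per burst

def estimation_ratio(num_flows=50, rho=0.6, periods=400, seed=1):
    rng = np.random.default_rng(seed)
    offsets = rng.uniform(0.0, T, size=num_flows)
    bursts = offsets[:, None] + T * np.arange(periods)
    pkts = bursts[..., None] + 1e-9 * np.arange(PPB)   # near back-to-back packets
    arrivals = np.sort(pkts.ravel())
    inter = np.diff(arrivals)

    # scale the link so the queue runs at the target utilisation rho
    offered_bps = num_flows * PPB * PKT_BITS / T
    service = PKT_BITS / (offered_bps / rho)           # deterministic service time

    # Lindley recursion: waiting time of each successive packet in a FIFO queue
    w, total = 0.0, 0.0
    for a in inter:
        w = max(0.0, w + service - a)
        total += w
    wq_sim = total / len(inter)

    c2_t = inter.var() / inter.mean() ** 2
    wq_bound = service * rho / (1 - rho) * (c2_t + 0.0) / 2   # eq. (4), C2[S] = 0
    return wq_bound / wq_sim

for rho in (0.2, 0.5, 0.8):
    print(f"rho = {rho:.1f}: bound / simulation ~ {estimation_ratio(rho=rho):.2f}")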
Fig. 6. Simulated mean aggregated end-to-end queueing delay at
each hop, for 12, 6, 4 and 3 packets per burst: (a) unidirectional
ring; (b) bidirectional ring.
C. End-to-end delay: deciding on the packet size
We now focus on measuring the average and worst case end-
to-end delay of a packet, that is, in the unidirectional case, we
measure the queueing delay a given packet experiences on its
path originating at an RRH of hive #7 all the way through to
the BBU facilities, located at hive #1. Note that the fiber link
between hive #1 and the BBU is the bottleneck here, since
it has to deal with the largest amount of traffic of all.
In the unidirectional case it serves the traffic resulting from
the aggregation of every single hive's flows. In the bidirectional
case, we may consider each half of the ring separately. One
side merges the traffic flows coming from hives #5, #6 and
#7. The other one has to aggregate the traffic from hives #4,
#3, #2 and #1, which is the worst-case scenario.
and 6(b) show, respectively, the mean aggregated queueing
delay of a packet for the unidirectional and bidirectional worst
cases, as it traverses the ring topology. Note that the aggregated
queueing delay does not grow linearly as we approach the final
destination, the BBU located at the center of hive #1. Also, the
Fig. 7. Aggregated queueing delay percentiles at each hop, from the
cell up to hive #1. Unidirectional ring, 12 packets per burst.
aggregated queueing delay is below 2 μs until the 5th hop in
both cases. With the aim of assessing the effects of different
packetisation policies on the queueing delay, we illustrate the
behaviour of the system for different packet payload sizes.
Closer inspection of the plots reveals that the more packets we
use to transport a given burst, the higher the average queueing
delay. In addition, we find that at the first hops of the topology,
the packetisation strategy does not really affect the aggregated
queueing delay. Conversely, when many flows are mixed in the
last hops at the very end of the ring, choosing the right number
of packets per burst can make the difference. In this scenario,
choosing 12 packets to encapsulate the OFDM symbol leads to
an average queue waiting time of 10.48 μs (see Figure 6a). On
the other hand, employing 3 packets to encapsulate the burst
yields, on average, 7.22 μs, which represents an approximate
31% saving in terms of waiting time at the concentrator's
packet queue. It is important to bear in mind that, in the
unidirectional ring case, the system load at the last hop (hive
#1) is close to unity. Additionally, note that, by considering
a bidirectional ring, the average worst case queueing delay
reaches approximately 1 μs at the last hop, which is 10 times
lower than that of the unidirectional case. Figure 7 shows
the aggregated queueing delay statistics obtained from the
simulator in the unidirectional worst case for the 12 packets
per burst case. Notice that the interquartile range increases as we
traverse the ring towards the BBU. Furthermore, regarding the
last hop (hive #1), 5% of the packets are likely to suffer an
aggregated queueing delay of more than 23 μs. As a rule
of thumb, the experiments show that we can obtain a better
performance by using packets with a greater payload and, thus,
decreasing the overhead and increasing efficiency.
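The headline figures of this subsection reduce to simple arithmetic on the reported means; a quick check using the numbers quoted in the text (10.48 μs and 7.22 μs at the last hop for 12 and 3 packets per burst, and roughly 1 μs for the bidirectional worst case) is shown below.

# Worst-case mean aggregated queueing delay at the last hop (values from the text)
wq_12_pkts = 10.48   # us, 12 packets per burst (unidirectional, Fig. 6a)
wq_3_pkts = 7.22     # us, 3 packets per burst
saving = (wq_12_pkts - wq_3_pkts) / wq_12_pkts
print(f"Packetisation saving: {saving:.0%}")                 # ~31%

wq_bidir = 1.0       # us, approximate bidirectional worst case (Fig. 6b)
print(f"Bidirectional vs unidirectional: ~{wq_12_pkts / wq_bidir:.0f}x lower")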
IV. SUMMARY AND CONCLUSION
The benefits of a centralised processing architecture are
clear from the mobile operator perspective. Users may also
benefit from it by exploiting the cloud's powerful and specialised
hardware. Enhancements may include robust and complex
forward error correction algorithms, parallel computing, more
sophisticated multi-point coordination algorithms, etc. We
conclude that a real-world deployment is achievable using Split B.
We found that the rate of convergence of the arrivals squared
coefficient of variation is different depending on the packet
payload size. Furthermore, when aggregating 150 flows, it can
be reduced by a factor of 4 by using 3-packet bursts instead
of 12-packet bursts. Regarding the tightness of the theoretical estimations,
we show that they are close to the simulation outputs for
system loads ρ ≥ 0.4 when combining more than 50 flows.
Aggregated queueing delay is, on average, far from exceeding
the 50 μs budget envisioned in Section II-A. Also, the worst
average queuing delay is 10 times smaller in the bidirectional
case, compared to the unidirectional ring topology. Neverthe-
less, two aspects must be analysed carefully. First, regarding the
statistical properties of the queueing delays, some packets may
suffer from a queueing delay which is much higher than the
average. Secondly, more sources of delay should be added to
the propagation and queueing delays, such as switching delay,
packet processing delays, etc. Dynamic optimal allocation of
flows into different dedicated optical circuits and studies about
the optimal location of the BBU are natural extensions to this
work.
ACKNOWLEDGEMENTS
The authors would like to acknowledge the support of the
Spanish project TEXEO (grant no. TEC2016-80339-R) and the
EU-funded 5G-Crosshaul project (grant no. H2020-671598).
REFERENCES
[1] Cisco, "Visual Networking Index," White paper at cisco.com, February 2016.
[2] J. Nielsen, "Nielsen's Law of Internet Bandwidth," April 5, 1998. https://www.nngroup.com/articles/law-of-bandwidth
[3] China Mobile, "The Road Towards Green RAN," White Paper. http://labs.chinamobile.com/cran/wp-content/uploads/2014/06/20140613-C-RAN-WP-3.0.pdf
[4] Common Public Radio Interface (CPRI), "Interface Specification v5.0." http://www.cpri.info/spec.html
[5] B. R. Ballal and D. Nema, "Performance Comparison of Analog and Digital Radio Over Fiber Link," International Journal of Computer Science & Engineering Technology (IJCSET), 2012.
[6] S. Kuwano and Y. Suzuki, "Digitized Radio-over-Fiber (DROF) System for Wide-Area Ubiquitous Wireless Network," IEEE Xplore, April 2007.
[7] IEEE Time-Sensitive Networking for Fronthaul, 802.1CM. http://www.ieee802.org/1/pages/802.1cm.html
[8] S. Purge and D. Breuer, "Convergence of fixed and mobile broadband access/aggregation networks," Technical Report, October 2016.
[9] P. Suksompong, "The Cellular Concept," Technical Report, November 2009. http://www2.siit.tu.ac.th/prapun/tcs455/Cellular455.pdf
[10] A. De la Oliva, J. A. Hernández, D. Larrabeiti, et al., "An overview of the CPRI specification and its application to C-RAN-based LTE scenarios," IEEE Communications Magazine, 2016.
[11] D. Wubben et al., "Benefits and impact of cloud computing on 5G signal processing: Flexible centralization through Cloud-RAN," IEEE Signal Processing Magazine, 2014.
[12] U. Dötsch, M. Doll, H. P. Mayer, F. Schaich, J. Segel, and P. Sehier, "Quantitative analysis of split base station processing and determination of advantageous architectures for LTE," Bell Labs Tech. J., May 2013.
[13] J. Kingman, "The single server queue in heavy traffic," Mathematical Proceedings of the Cambridge Philosophical Society, 1961.
[14] J. A. Hernández and P. Serrano, Probabilistic Models for Computer Networks: Tools and Solved Problems, 2013. ISBN: 978-1291546873.
[15] F. Cavaliere, P. Iovanna, et al., "Towards a Unified Fronthaul-Backhaul Data Plane for 5G: The 5G-Crosshaul Project Approach," Comput. Stand. Interfaces, Elsevier Science Publishers B.V., March 2017.
[16] C. Divya and K. Koteswararao Kondepu, "5G Fronthaul Latency and Jitter Studies of CPRI Over Ethernet," Opt. Commun. Netw., February 2017.