Resource Reservation within Sliced 5G Networks:
A Cost-Reduction Strategy for Service Providers
Jean-Baptiste Monteil∗, Jernej Hribar∗, Pieter Barnard∗, Yong Li†, and Luiz A. DaSilva∗
∗CONNECT Research Centre, Trinity College Dublin, Ireland
†Department of Electronic Engineering, Tsinghua University, China
E-mail: {jmonteil, jhribar, barnardp, dasilval}@tcd.ie, liyong07@tsinghua.edu.cn
Abstract—In future cellular networks, Mobile Network Operators (MNOs) will be able to dynamically allocate dedicated resources to third-party service providers (SPs) through network slicing. In this paper, we investigate the problem of resource reservation, whereby the SP must reserve future network resources from the MNO in order to guarantee a minimum quality of service (QoS) for its future traffic demand. However, the reservation of resources in a slice incurs an additional cost to the SP. Therefore, the SP must develop a strategy to determine the optimal amount of resources to reserve ahead of the time when they are needed. We employ a data-driven approach using two state-of-the-art Machine Learning (ML) techniques, namely deep neural networks (DNNs) and long short-term memory (LSTM) recurrent networks, to minimise the amount of over- and under-reservation by the SP. Our solutions demonstrate high performance on a real-world data set containing cellular traffic for the city of Shanghai. In addition, we demonstrate the robustness of our designs by comparing their performance against a baseline solution based on the widely adopted ARIMA forecasting model.
Index Terms—Network slicing, DNN, LSTM, machine learning, resource management.
I. INTRODUCTION
5G and beyond cellular networks are envisioned to support a wide range of services and use cases that will require a significant improvement over today's networks. Indeed, the proliferation of connected devices and envisioned use cases, such as virtual reality, autonomous vehicles, and smart factories, will require improved reliability, lower latency, and higher capacity. This has led to the concept of network slicing: tailoring virtual networks to the diverse needs of these different services. Under this framework, Mobile Network Operators (MNOs), who deploy their own physical network infrastructure and hold spectrum licenses, will be able to lease portions of their network resources (referred to as slices) to third-party tenants, such as vertical industries or over-the-top (OTT) service providers (SPs), who in turn may use these resources to deliver network services to their customers in a flexible and dynamic manner. The 5G network slicing framework is expected to comprise both a forward market, where the SP may lease network resources from the MNO well in advance of the time at which these resources will be used, and a real-time market, where the SP can request additional resources from the MNO at short notice.
In this paper, we explore the resource reservation problem from an SP's viewpoint. The SP has access to two types of resources, representing portions of the overall MNO capacity: guaranteed resources, and unreserved resources available on a best-effort basis. Guaranteed resources are those that the SP booked in advance, while best-effort resources are portions of the spare capacity (after we account for the pre-reserved resources) shared between multiple SPs. The guaranteed resources come at a higher cost, and therefore, to minimise its costs, the SP is inclined to reserve resources only when it expects that best-effort resources will not suffice. We adopt state-of-the-art machine learning techniques to aid the SP's decision making, which must determine the quantity of resources to reserve to guarantee reliable quality of service (QoS) to its own customers.
To date, this aspect of resource reservation for a network slice remains largely unexplored, with the majority of related work focusing on resource allocation from the perspective of the MNO or other central entities, such as capacity brokers [1]–[3]. In other studies, the main focus has been on the business and economic aspects of network slicing, for example, in the form of revenue or cost optimisation for the MNO [4], profit maximisation for the tenants [5], or through optimal design of the auctioning process [6], [7]. Other works address only the case of short-term resource reservation from the SP's perspective [8]. The scope of [8] is to determine the optimal reservation policy that maximises the surplus of resources, under different traffic levels and on-demand price variation parameters. In contrast, our work focuses on minimising both the surplus and the under-reservation, to provide the SP with the means to reserve only the resources it requires.
A key element of our problem is that the SP lacks knowledge of the MNO's real-time resource availability and overall traffic demand. We have designed a framework where two supervised learning solutions, namely a deep neural network (DNN) and a long short-term memory (LSTM) solution, are trained and then loaded into an online environment with the conditions an SP has to face in a real-world scenario, to carry out near-optimal reservations. We compare our proposed solutions with a baseline approach using the classical ARIMA model. We show that our solutions outperform the baseline in terms of both over-reservation and under-reservation.
II. SYSTEM MODEL
We consider a single MNO with its physical network simultaneously serving the traffic demands of multiple SPs. In this paper, we outline the model considering a single base station; the solution presented here can be replicated at multiple base stations. The base station has limited capacity, which we denote as c. We represent the total traffic demand faced by the base station as a time-series signal consisting of two components. The first component is the traffic demand related to the SP of interest; we denote this as s_k, with k ∈ N representing the k-th time-step. The second component is the aggregate traffic demand of all other SPs that are tenants of the network, denoted as l_k. Throughout this paper, we will refer to s_k simply as the SP traffic and to l_k as the MNO traffic. Note that we measure both traffic and c in bytes/time-step. Furthermore, we denote the past sequences of traffic for l_k and s_k, respectively, as the vectors l_K = [l_1, ..., l_K] and s_K = [s_1, ..., s_K], with K representing the current time-step.
The SP can reserve resources in advance and the MNO will ensure that those resources are available when needed. The SP can also rely on resources that will be available in the network on a best-effort basis. Note that when the SP relies on the best-effort approach, there might not be enough resources available to serve all of its traffic demand, especially during periods of peak traffic. At time-step k, the SP leases for the next h time-steps a given amount of guaranteed resources, which we model as a portion of the overall capacity: r_k = [r_{k+1}, ..., r_{k+h}]. Therefore, the next reservation decision will occur at time-step k + h. A time-step can range from a few minutes to a few hours in a dynamic reservation system.
We assume both c and l_k are unknown to the SP, as the MNO may be reluctant to share this information in a real-world context. Instead, the SP must rely on other signals, including its delivered traffic, d_k, its undelivered traffic, u_k, and the portion of its traffic that was served on a best-effort basis, b_k, as summarised in Table I. The delivered traffic includes the SP's reservation and its traffic served as best-effort. If the aggregated traffic from all SPs is less than the capacity of the network, then all of the SP traffic is delivered. If the aggregated traffic is greater than the capacity, the delivered traffic corresponds to the reservation r_k plus the resources that are available on a best-effort basis, equal to c − l_k.
TABLE I
SYSTEM MODEL EQUATIONS

          if s_k + l_k ≤ c           if s_k + l_k > c
d_k       d_k = s_k                  d_k = min(max(c − l_k + r_k, r_k), s_k)
u_k       u_k = 0                    u_k = s_k − d_k
b_k       b_k = max(d_k − r_k, 0)    b_k = max(d_k − r_k, 0)
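The Table I equations can be sketched directly in Python (a minimal per-time-step transcription; function and variable names are ours):

```python
def served_traffic(s_k, l_k, r_k, c):
    """Apply the Table I equations for one time-step.

    Returns (d_k, u_k, b_k): the SP's delivered, undelivered, and
    best-effort-served traffic."""
    if s_k + l_k <= c:
        d_k = s_k                  # enough capacity: everything is delivered
        u_k = 0.0
    else:
        # Congested: the SP gets its reservation r_k plus whatever spare
        # capacity (c - l_k) remains, capped by its own demand s_k.
        d_k = min(max(c - l_k + r_k, r_k), s_k)
        u_k = s_k - d_k
    b_k = max(d_k - r_k, 0.0)      # portion served beyond the reservation
    return d_k, u_k, b_k
```

For example, with demand s_k = 5, MNO traffic l_k = 8, reservation r_k = 2 and capacity c = 10, the SP is short one unit of traffic: d_k = 4, u_k = 1, b_k = 2.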
Additionally, in Table II we list the complete information that the SP possesses, including its own past traffic demands, s_K, its past reservations, r_K, and how much of its traffic has been served in the past.

TABLE II
SUMMARY OF SP KNOWLEDGE

Known to SP:    s_K, r_K, d_K, u_K, b_K
Unknown to SP:  c, l_K

Guaranteed resources, i.e., those reserved in advance by the SP, come at a higher cost than resources that are made available to the SP on a best-effort basis. Thus, the SP is motivated to minimise its reservation of resources if it believes that there would be sufficient capacity in the network to serve its traffic as best-effort. We also assume that the farther in advance the SP makes a reservation, the cheaper that reservation will be, and therefore, if the SP is to make a reservation, it will try to do so as far in advance as possible.
In Fig. 1, we depict some possible scenarios of traffic load experienced by the MNO and reservations made by the SP. For each case, we indicate a hypothetical reservation r_k and the resulting portion of the SP traffic that is treated as best-effort.

Fig. 1. Example reservation policies for various edge cases of our model.
We summarise the key assumptions as follows:

Assumption 1. When the SP reserves guaranteed resources, these will be allocated regardless of the network load.¹

Assumption 2. "Multi-steps-ahead reservation": at each time-step, if needed, the SP can reserve resources for the next h time-steps.

Assumption 3. At each time-step k, the spare network capacity allocated on a best-effort basis to the SP of interest is what is left after all other network traffic is served, i.e., max(min(s_k − r_k, c − l_k), 0).
III. PROBLEM FORMULATION
The objective of the SP is to avoid over- or under-reservation of the MNO's resources while getting its traffic served. When the reservation surpasses the optimal value, resulting in over-reservation, the SP incurs a higher cost than necessary, as a portion or all of its traffic could have been served with best-effort resources. In the case of under-reservation, the SP does not reserve enough resources and therefore a portion or all of its traffic is not delivered. The SP aims to make reservations that avoid both unnecessary guaranteed resources and undelivered traffic. We can formulate the optimal reservation as follows:

¹The MNO can exercise admission control, and only accept reservations that can be met with its available capacity.
Lemma 1. ∀k ∈ N, let c − l_k be the MNO's capacity availability; then, the optimal reservation policy for the SP is r*_k = 0 if s_k + l_k ≤ c, and r*_k = min(min(s_k − (c − l_k), s_k), c) otherwise.
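Lemma 1 admits an equally direct transcription (names are ours): when spare capacity covers the SP demand, reserve nothing; otherwise reserve the shortfall, capped by the demand itself and by the capacity.

```python
def optimal_reservation(s_k, l_k, c):
    """Lemma 1: the reservation that exactly covers the portion of the SP
    demand that best-effort capacity cannot serve."""
    if s_k + l_k <= c:
        return 0.0                 # spare capacity suffices: reserve nothing
    return min(min(s_k - (c - l_k), s_k), c)
```

For instance, with s_k = 5, l_k = 8 and c = 10, the spare capacity c − l_k = 2 leaves a shortfall of 3, which is exactly the optimal reservation.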
Fig. 2. Graphical example of the SP's reservation problem we address in this paper.
We illustrate the decision making problem faced by the SP in Fig. 2. The SP must reserve for future time-steps without knowledge of the MNO load and the MNO capacity. We can now formulate the problem as follows:
Problem 1. At time-step k, if needed, make a reservation for the next h time-steps so as to minimise the expected Mean Square Error (MSE): Σ_{j=1}^{h} |r_{k+j} − r*_{k+j}|².
As indicated previously, we assume the SP does not have access to the MNO traffic load l_k or the MNO capacity c, and therefore the SP must base its decision on its past history of reservations and on how much of its traffic has been served, i.e., on b_K, d_K and u_K. We note that the biggest difficulty resides in the prediction of the future availability of excess capacity, as both c and l_k are unknown, whereas the prediction of future SP demand can rely directly on s_K. As d_k + u_k = s_k, we include only the signals s_k, r_k, u_k and b_k in our final solution, described in the sequel.
We adopt supervised machine learning in our solution, for the following reasons:
• The SP faces a prediction problem with exogenous data from s_k, b_k and u_k. In comparison to unsupervised learning, supervised methods are generally more robust and successful at extracting patterns from exogenous data.
• We have an exact solution for the optimal reservation policy, r*_k, for all k. By having access to a data set consisting of multiple weeks of recorded data, we can construct the necessary labels r*_k = [r*_{k+1}, ..., r*_{k+h}] and signal traces that are required to train a supervised Machine Learning (ML) model based on our previously defined loss function in Problem 1.
IV. PROPOSED SOLUTIONS
In this section, we describe the two proposed solutions the SP could adopt to address the resource reservation problem we defined in Section II. In the first subsection, we specify in detail how the SP can employ the DNN-based solution. In the second subsection, we explain how the SP can make use of an LSTM model. Finally, in the third subsection, we describe the framework we developed that would enable the SP to use our solutions in real network deployments.
A. DNN model
Our first solution adopts a DNN architecture. The previous work of [9] demonstrated how a sliding window technique can be used on time-series data to capture temporal correlations with DNN models: the moving window generates sequences that are fed as inputs to predict the next time-steps' reservation values.

In our model, we use q to denote the length of the past sequences, i.e., the length of s_{k,q} = [s_{k−q+1}, ..., s_k], for all k ≤ K, and we denote the k-th sequence sample as x_{k,q} = [s_{k,q}, r_{k,q}, u_{k,q}, b_{k,q}]. Using these samples, we construct a two-dimensional matrix with dimensions (⟨T_x⟩, 4q), where ⟨T_x⟩ represents the number of used samples, and employ the matrix as an input to the DNN.
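The sliding-window sample construction above can be sketched with numpy (a minimal version under our naming; each sample concatenates the q-long windows of the four series):

```python
import numpy as np

def build_samples(s, r, u, b, q):
    """Slide a window of length q over the four input series and flatten
    each set of windows into one 4q-dimensional sample x_{k,q}.
    Returns a matrix of shape (T_x, 4q)."""
    series = np.stack([s, r, u, b])                   # shape (4, K)
    K = series.shape[1]
    samples = [series[:, k - q + 1 : k + 1].ravel()   # [s_{k,q}, r_{k,q}, u_{k,q}, b_{k,q}]
               for k in range(q - 1, K)]
    return np.asarray(samples)
```

With K = 10 points per series and q = 3, this yields T_x = 8 samples of dimension 12.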
We visualize the structure of our neural network with two hidden layers in Fig. 3. We can write its output as:

[r_{k+1}, ..., r_{k+h}] = f^[3]_{W,b} ∘ f^[2]_{W,b} ∘ f^[1]_{W,b}(x_{k,q}),   (1)

where W is the matrix of weights, b the vector of biases, and f the activation function at each layer.
We use the gradient descent method to update the weights and biases of the architecture. In our DNN architecture, there are 4q input neurons, 50 neurons in the first hidden layer, and 10 neurons in the second hidden layer. The output layer has h neurons, corresponding to the predictions for the next h time-steps. We use tanh activation functions for all layers, except for the last layer, which uses a sigmoid function.
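The forward pass of Eq. (1) for this 4q → 50 → 10 → h architecture can be sketched in plain numpy (the layer sizes and activations follow the text; the weight initialisation scale is our assumption, as the paper does not report it):

```python
import numpy as np

rng = np.random.default_rng(0)

def init_dnn(q, h, hidden=(50, 10)):
    """Randomly initialise weights and biases for 4q -> 50 -> 10 -> h."""
    sizes = [4 * q, *hidden, h]
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def dnn_forward(params, x):
    """Eq. (1): tanh on the hidden layers, sigmoid on the output layer."""
    a = x
    for W, bias in params[:-1]:
        a = np.tanh(a @ W + bias)
    W, bias = params[-1]
    z = a @ W + bias
    return 1.0 / (1.0 + np.exp(-z))    # reservations normalised to (0, 1)
```

The sigmoid output keeps all h predicted reservations in (0, 1), matching normalised traffic volumes.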
Fig. 3. DNN structure.

Fig. 4. LSTM input tensor.
B. LSTM model
We designed the second solution using an LSTM model [10], which is a type of Recurrent Neural Network (RNN) that has proven highly suitable for time series forecasting [11]. We use the same sequences that we generated previously for the DNN solution. For the LSTM model, we convert the x_{k,q} super vector into a 3D tensor, represented in Fig. 4, with shape (⟨T_x⟩, 4, q), corresponding to the number of samples, the number of input time-series and the sequence length, respectively. The first hidden layer, of type LSTM, outputs to a fully connected layer with ten neurons. The output layer has h neurons, corresponding to the predictions for the next h time-steps. The activation functions used are tanh, followed by sigmoid for the dense layers. Both our neural network models are trained using the Adam optimiser [12] and the loss function defined in Problem 1. The learning rates we use are 0.01 for the DNN and 0.001 for the LSTM.
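The conversion from the flat (⟨T_x⟩, 4q) sample matrix to the (⟨T_x⟩, 4, q) tensor of Fig. 4 is a plain reshape, assuming each flat sample is laid out as the concatenation [s-window, r-window, u-window, b-window] (the LSTM network itself would typically be built in a deep learning framework, which we omit here):

```python
import numpy as np

def to_lstm_tensor(X, q):
    """Reshape flat (T_x, 4q) samples into the (T_x, 4, q) tensor of
    Fig. 4: one row per input time-series, one column per lag."""
    return X.reshape(X.shape[0], 4, q)
```

For example, a single flat sample of dimension 24 with q = 6 becomes a 4 × 6 slice whose second row starts at element 6 (the beginning of the r-window).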
C. Development of the framework
In Fig. 5, we provide a high-level overview of the process the SP has to follow during the training phase and how it can then transition to the online phase. In the training phase, we use the signals s_k, l_k and c, in accordance with Lemma 1, to calculate the optimal reservation labels at each time-step k of our data set. During the training phase, we use only the signals s_k, u_k, b_k and r_k as inputs to the model, recalling that these are assumed to be the only signals available to the SP in a real-world context. We construct r_k as a random trace within the range [0, max(s_k)] at each time-step k of our data set. Furthermore, we extract u_k and b_k using the equations we listed in Table I. During the online phase, the SP obtains the real-time values of the signals u_k and b_k at each time-step k, which the SP then combines with r_k and s_k in order to predict the future reservations. The predicted outputs r_k are fed back as inputs into the model to perform the next predictions. We initialise the first sequence for r_k at max(s_k) during this phase.
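The training-phase input generation described above (a random reservation trace, then u_k and b_k from the Table I equations) can be sketched as a vectorised helper (names and the choice of uniform sampling granularity are ours):

```python
import numpy as np

def make_training_traces(s, l, c, seed=1):
    """Build the model inputs for training: a uniformly random reservation
    trace in [0, max(s)], then u_k and b_k via the Table I equations,
    vectorised over all time-steps."""
    rng = np.random.default_rng(seed)
    r = rng.uniform(0.0, s.max(), size=len(s))
    d = np.where(s + l <= c, s, np.minimum(np.maximum(c - l + r, r), s))
    u = s - d                                  # zero whenever s + l <= c
    b = np.maximum(d - r, 0.0)
    return r, u, b
```

Exposing the model to random reservations during training lets it observe how u_k and b_k respond to both over- and under-reserving.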
V. VALIDATION
In this section, we validate our solution using real-world data collected in the city of Shanghai. To evaluate the effectiveness of our solution, we designed a baseline approach using an ARIMA model. We then compare our two proposed solutions with the baseline according to three metrics, namely MSE, over-reservation and under-reservation, over three distinct use cases: low traffic, medium traffic and high traffic conditions.

Fig. 5. System model for reservation decision making.
A. The data set
The data set we use contains the aggregated traffic volumes seen across 15 base stations owned by a major MNO within the city of Shanghai. For each base station, traffic volumes were recorded over a one-month period, spanning from Friday 1 August 2014 00:00 to Sunday 31 August 2014 23:50, with each recording averaged over a period of 10 minutes. Hence, there are 6 measurements per hour and a total of 4464 measurements for each base station over this period. We use 14 base stations to train and test the models, and 1 base station to evaluate the model. We split the data into training and test sets: the training set consists of 90% of the data points, while the remaining 10% is used for testing.
In order to train and evaluate our proposed solutions, we need to extract signals from our data set corresponding to the MNO traffic l_k, the capacity of the base station c, and the SP traffic s_k. We use the aggregated traffic volumes of the MNO of Shanghai as the MNO traffic l_k. We model the SP signal as a percentage of the MNO traffic plus a noise term. In this case, we have chosen a percentage of 15%, as we expect the MNO to host between 5 and 10 other SPs on its network. We add Ornstein-Uhlenbeck (OU) noise instead of white noise, as can be seen in Fig. 6, since it is more realistic to have correlated noise, as opposed to independent spurious noise.
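An OU noise trace can be generated with a simple Euler-Maruyama recursion; the parameter values below are illustrative, as the paper does not report them:

```python
import numpy as np

def ou_noise(n, theta=0.15, sigma=0.1, dt=1.0, seed=0):
    """Simulate a zero-mean Ornstein-Uhlenbeck process
    dX = -theta * X dt + sigma dW via Euler-Maruyama.
    Successive samples are correlated, unlike white noise."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for k in range(1, n):
        x[k] = (x[k - 1] - theta * x[k - 1] * dt
                + sigma * np.sqrt(dt) * rng.standard_normal())
    return x
```

The mean-reversion rate theta controls how quickly the noise decorrelates; smaller values produce longer, smoother excursions.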
By varying the base station capacity c, we are also able to simulate three use cases: low traffic, medium traffic, and high traffic. For example, by selecting c as the 70th percentile of the MNO traffic, we evaluate the case in which 70% of the MNO traffic data points are less than or equal to c. By selecting c as the 90th, 80th and 70th percentiles, we are able to simulate low, medium, and high traffic, respectively.
Fig. 6. Example traffic traces and capacity model. The y-axis units are normalized values of the traffic volumes, i.e., the number of bytes per time period in the original dataset after normalization.
B. The baseline: ARIMA
To evaluate the performance of the two proposed solutions, we constructed a third model based on the classical ARIMA predictor, which we use as a baseline. We chose this particular model as it has previously been shown to yield strong predictive ability for single-step-ahead predictions [13]. ARIMA does not require training. Instead, we employ two instantiations of the ARIMA model to predict estimates of the MNO traffic and the SP traffic, based on their past traces. The ARIMA model has access to the MNO traffic, meaning that the baseline possesses more information than our two solutions. To this extent, the baseline is idealized.
We denote the k-th sequence of SP traffic as s_{k,q} = [s_{k−q+1}, ..., s_k] and the k-th sequence of MNO traffic as l_{k,q} = [l_{k−q+1}, ..., l_k]. For both cases, we predict [s_{k+1}, ..., s_{k+h}] and [l_{k+1}, ..., l_{k+h}], and from Lemma 1 obtain [r_{k+1}, ..., r_{k+h}]. On a separate development set, we tested different combinations for the choice of our ARIMA coefficients, comparing each based on its MSE. From this, we found the lowest MSE was given by the (3,0,0)-ARIMA model.
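Since an ARIMA(3,0,0) model has no differencing or moving-average terms, it reduces to an AR(3) model that can be fit by least squares; the paper does not name the ARIMA implementation it used, so the following self-contained sketch (names ours) shows the equivalent fit and the iterated multi-step forecast:

```python
import numpy as np

def fit_ar3(x):
    """Least-squares fit of x_k = c0 + a1*x_{k-1} + a2*x_{k-2} + a3*x_{k-3},
    i.e. an ARIMA(3,0,0) model."""
    X = np.column_stack([np.ones(len(x) - 3), x[2:-1], x[1:-2], x[:-3]])
    coef, *_ = np.linalg.lstsq(X, x[3:], rcond=None)
    return coef                                   # [c0, a1, a2, a3]

def forecast_ar3(coef, history, h):
    """Iterated h-step forecast: each prediction is fed back as input."""
    hist = list(history[-3:])
    out = []
    for _ in range(h):
        pred = (coef[0] + coef[1] * hist[-1]
                + coef[2] * hist[-2] + coef[3] * hist[-3])
        out.append(pred)
        hist.append(pred)
    return np.array(out)
```

Feeding predictions back as inputs is what lets the linear model produce the 9-step-ahead forecasts used for comparison, and also why its errors compound at longer horizons.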
C. Simulation results
In this subsection, we compare the two proposed solutions against the baseline approach. For that, we use three metrics: the MSE, and the average values of under-reservation and over-reservation. We define them for all future steps j = 1, ..., h, as we are interested in multi-steps-ahead reservation:

MSE_j = ( Σ_{k=1}^{⟨T_x⟩−h} |r_{k+j} − r*_{k+j}|² ) / (⟨T_x⟩ − h)

Over_j = ( Σ_{k=1}^{⟨T_x⟩−h} (r_{k+j} − r*_{k+j}) 1{r_{k+j} > r*_{k+j}} ) / ( Σ_{k=1}^{⟨T_x⟩−h} 1{r_{k+j} > r*_{k+j}} )

Under_j = ( Σ_{k=1}^{⟨T_x⟩−h} (r_{k+j} − r*_{k+j}) 1{r_{k+j} < r*_{k+j}} ) / ( Σ_{k=1}^{⟨T_x⟩−h} 1{r_{k+j} < r*_{k+j}} )   (2)
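For a fixed horizon offset j, the three metrics of Eq. (2) can be computed in a few lines of numpy (a sketch under our naming; the zero fallback for empty indicator sets is our convention):

```python
import numpy as np

def reservation_metrics(r, r_star):
    """Eq. (2) for one horizon offset j: r and r_star hold r_{k+j} and
    r*_{k+j} over all evaluated time-steps k."""
    err = r - r_star
    over = err > 0                      # indicator 1{r > r*}
    under = err < 0                     # indicator 1{r < r*}
    mse = np.mean(err ** 2)
    over_j = err[over].mean() if over.any() else 0.0
    under_j = err[under].mean() if under.any() else 0.0
    return mse, over_j, under_j
```

Note that Over_j and Under_j average only over the steps where the corresponding event occurs, so Over_j is positive and Under_j negative by construction.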
In Figs. 7, 8, and 9, we compare the DNN and LSTM performance against ARIMA's. After testing different sequence lengths, we selected the sequence length that yielded the best results for each use case. We recall that our model simulates three types of traffic conditions: low, regular and high. We conduct a 9-step-ahead prediction in each of our experiments; this corresponds to one and a half hours of advance reservation.

In Fig. 7, we examine the quality of the reservation in terms of MSE. Under high traffic conditions, Fig. 7(a) shows that the three methods perform similarly for the case of single-step-ahead reservation. However, as the number of decision steps increases, the learning-based solutions soon begin to outperform the baseline, with the DNN performing slightly better than the LSTM. For the regular traffic case, shown in Fig. 7(b), the LSTM initially under-performs the other two models but gradually outperforms the baseline as the number of steps increases. Overall, the DNN appears to be the best method when considering the MSE metric.
In Fig. 8, we consider the over-reservation metric and see that both supervised solutions outperform the baseline by a wide margin. This demonstrates a major gain in the use of the DNN and LSTM models, indicating that these learning-based solutions were able to successfully extract the information related to the MNO's availability c − l_k from the traffic traces r_k, b_k and u_k.
In Fig. 9, we focus on the under-reservation case. Generally, a method that demonstrates good performance on the over-reservation case tends to under-perform in the under-reservation case. As the solutions we propose bring a major improvement in reducing over-reservation, it is essential that they do not under-perform with respect to under-reservation, as this could lead to a traffic outage and therefore greatly impact the QoS observed by tenants and end users. We can see that the baseline initially outperforms the supervised solutions for single-step-ahead reservations. However, for multi-steps-ahead reservations, the DNN and LSTM methods demonstrate superior performance. For high traffic, as shown in Fig. 9(a), the DNN surpasses the baseline at the 4th time-step and the LSTM at the 5th time-step.

We observe that the learning-based solutions consistently outperform the baseline for multi-steps-ahead reservations. The traffic traces can cycle rapidly from normal demand to extreme peak values. For a linear model such as ARIMA, which only considers the last few hours in its prediction, a sudden change of this nature is extremely challenging to predict. In contrast, the DNN and LSTM are able to learn that these fast fluctuations are not noise but are in fact part of the daily/seasonal trends of the data.
VI. CONCLUSION
In this paper, we have proposed a DNN and an LSTM solution to address the resource reservation problem from an SP's perspective. Using data obtained from a real MNO, we have demonstrated that the SP can, by employing either the DNN or the LSTM approach, make a more accurate reservation in comparison to an idealised baseline approach. Our initial work, presented in this paper, shows promise, and in our next step, we will extend the reservation decision to broader time scales. In particular, we will focus on how an SP can make reservations days or weeks in advance.
Fig. 7. MSE comparison under different traffic conditions: (a) high traffic, (b) regular traffic, (c) low traffic.

Fig. 8. Over-reservation comparison under different traffic conditions: (a) high traffic, (b) regular traffic, (c) low traffic.

Fig. 9. Under-reservation comparison under different traffic conditions: (a) high traffic, (b) regular traffic, (c) low traffic.
ACKNOWLEDGEMENTS
The authors are grateful to Dr Carlo Galiotto and Dr Andrei Marinescu, who contributed greatly to the system model, the problem formulation and the proposed solutions. This work was supported by a research grant from Science Foundation Ireland (SFI) and the National Natural Science Foundation of China (NSFC) under SFI Grant Number 17/NSFC/5224.
REFERENCES
[1] Y. L. Lee et al., "Dynamic network slicing for multitenant heterogeneous cloud radio access networks," IEEE Transactions on Wireless Communications, vol. 17, no. 4, pp. 2146–2161, 2018.
[2] M. R. Raza et al., "Dynamic slicing approach for multi-tenant 5G transport networks," Journal of Optical Communications and Networking, vol. 10, no. 1, pp. A77–A90, 2018.
[3] G. Tseliou et al., "A capacity broker architecture and framework for multi-tenant support in LTE-A networks," in 2016 IEEE International Conference on Communications (ICC). IEEE, 2016, pp. 1–6.
[4] A. Baumgartner et al., "Network slice embedding under traffic uncertainties: a light robust approach," in 13th International Conference on Network and Service Management (CNSM). IEEE, 2017, pp. 1–5.
[5] G. Wang et al., "Resource allocation for network slices in 5G with network resource pricing," in GLOBECOM IEEE Global Communications Conference. IEEE, Dec 2017, pp. 1–6.
[6] M. Jiang et al., "Network slicing in 5G: An auction-based model," in IEEE International Conference on Communications (ICC). IEEE, 2017, pp. 1–6.
[7] K. Zhu and E. Hossain, "Virtualization of 5G cellular networks as a hierarchical combinatorial auction," IEEE Transactions on Mobile Computing, vol. 15, no. 10, pp. 2640–2654, 2015.
[8] Y. Zhang et al., "Joint spectrum reservation and on-demand request for mobile virtual network operators," IEEE Transactions on Communications, vol. 66, no. 7, pp. 2966–2977, 2018.
[9] G. Dorffner, "Neural networks for time series processing," Neural Network World, vol. 6, pp. 447–468, 1996.
[10] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, 1997.
[11] L. Yunpeng et al., "Multi-step ahead time series forecasting for different data patterns based on LSTM recurrent neural network," in 14th Web Information Systems and Applications Conference (WISA). IEEE, 2017, pp. 305–310.
[12] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[13] A. Sang and S. Q. Li, "A predictability analysis of network traffic," Computer Networks, July 2002.