Quality-Aware Video Offloading in Mobile Edge Computing: A Data-driven Two-stage Stochastic Optimization

Weibin Ma, Lena Mashayekhy
Department of Computer and Information Sciences, University of Delaware, Newark, Delaware 19716, USA
{weibinma, mlena}@udel.edu
Abstract—Most camera-based mobile devices require ultra-low-latency video analytics such as object detection and action recognition. These devices face severe resource constraints, and thus video offloading to Mobile Edge Computing (MEC) seems a reasonable solution. However, MEC faces several key challenges, especially due to uncertainties caused by dynamic device mobility, in providing efficient video offloading solutions that enable both maximum performance for video analytics and minimum latency. In this paper, we study the Video Offloading Problem (VOP) in MEC in detail to address these challenges. We formulate VOP as a Two-stage Stochastic Program, called TSP-VOP, to model the uncertainties in the environment. We propose a novel clustering-based Sample Average Approximation to effectively solve TSP-VOP in uncertain dynamic environments while satisfying the required latency. We perform extensive experiments to validate the effectiveness of our proposed algorithm.
Index Terms—Mobile Edge Computing, Video Offloading, Mobility, Video Quality, Two-Stage Stochastic Program, Clustering.
I. INTRODUCTION
It is predicted that videos will account for 79% of the world's mobile data traffic by 2022 [1]. Many camera-based mobile devices (e.g., surveillance drones and vehicle dash cameras) require real-time analytics of high-resolution video streams, such as object detection and action recognition. Mobile Edge Computing (MEC) has recently been introduced as an emerging solution that enables mobile devices to offload their delay-sensitive tasks, such as video analytics, to physically proximal mini-datacenters, called cloudlets, in order to improve the quality of service (QoS) [2].

Differing from conventional computation offloading [3], video offloading in MEC brings its own unique challenges. The performance of video analytics is greatly affected by the quality of videos [4], defined in terms of the number of frames captured per second and the size of frames. To maximize the performance of video analytics (e.g., increase object detection accuracy, detect more objects), a higher video quality is beneficial [5]. However, to reduce latency and satisfy QoS while offloading, a lower video quality should be selected. Therefore, there is a trade-off between video quality and latency that needs to be considered in the design of video offloading mechanisms. A few studies investigated the analytics performance degradation caused by video quality in offloading [4]–[6]. However, they do not consider device mobility and the possibility of service migration in video offloading, even though the bandwidth each device receives greatly depends on its physical location, which can change over time.
Mobility of devices makes video offloading a more challenging problem, especially when the movements of the devices are unknown to the MEC system. When a device moves, keeping its connected cloudlet unchanged may greatly increase the communication delay due to the expanded network distance. On the other hand, excessively changing the connected cloudlet could lead to significant migration overhead and massive data movement over MEC. To balance service performance and migration cost, Sun et al. [7] developed a user-centric energy-aware mobility management scheme to optimize the sum of computation and communication delays under the long-term energy budget of the device. Ouyang et al. [8] investigated service migration in MEC and proposed a mobility-aware dynamic service placement technique based on Markov approximation. Gao et al. [9] proposed an iterative algorithm for offloading with minimum latency, considering both cloudlet selection and access point selection. However, these approaches are not suitable for video offloading, since they do not consider video quality to improve the performance of video analytics. Moreover, these approaches are designed for long-term cost minimization, whereas the video quality performance depends only on the occurrence of service migration between two consecutive time slots.
In this paper, we propose and formulate the Video Offloading Problem (VOP) as a Two-stage Stochastic Program with recourse, called TSP-VOP, due to the uncertainties caused by the dynamic movements of mobile devices. Stochastic programming is a promising approach for modeling optimization problems that involve uncertainty [10]. The goal of TSP-VOP is to find optimal offloading decisions for all mobile devices to offload their videos with maximum quality and minimum migration cost, while other desirable constraints (e.g., latency and energy requirements) are satisfied. However, the formulated TSP-VOP model requires a large number of realizations, called scenarios, to obtain a good representation of the uncertainties for a more accurate estimation. To resolve this issue, we propose a novel Clustering-based Sample Average Approximation Video Offloading Algorithm, called CSAA-VOA, to approximate the expected cost of TSP-VOP with far fewer scenarios. Our proposed CSAA-VOA guarantees computational tractability by reducing the sample size, while not negatively impacting the quality of the obtained solutions. To the best of our knowledge, this is the first work that addresses the VOP in MEC by considering uncertain device mobility via stochastic programming.
II. SYSTEM MODEL AND PROBLEM FORMULATION
A. System Model
We denote a set of cloudlets by $\mathcal{M} = \{1, 2, \ldots, M\}$ and a set of mobile devices capturing videos by $\mathcal{N} = \{1, 2, \ldots, N\}$. To characterize the movements of devices, we consider a time-slotted format, where the time horizon is discretized into multiple time slots $\mathcal{T} = \{1, 2, \ldots, T\}$.

The quality of videos is defined as a function of a video coding ratio, defined below. The captured videos are divided into multiple video chunks for offloading [11]. Each chunk is compressed by a specific video coding ratio, defined as the ratio of the size of a compressed video chunk to the size of the original (uncompressed) video chunk. We denote by $\mathcal{A} = \{a_1, a_2, \ldots, a_K\}$ the set of available video coding ratios to be selected for offloading.
At any time slot $t \in \mathcal{T}$, device $n \in \mathcal{N}$ has a video chunk with a data size $\tau_n$ and deadline $d_n$ to offload. The location of device $n$ at time slot $t$ is specified by its two-dimensional coordinate $(l^x_n(t), l^y_n(t))$. The locations of devices at the current time slot are known to the MEC system, while the locations of devices at future time slots are unknown. We use $r_{nik}(t)$ to express the selected coding ratio of device $n$ when offloading its video chunk to cloudlet $i$ with coding ratio $a_k \in \mathcal{A}$ at time slot $t$.

Each cloudlet $i \in \mathcal{M}$ has computing capability $C_i$, bandwidth $B_i$, and energy consumption capacity $E_i$. The location of each cloudlet $i$ is fixed and specified by a two-dimensional coordinate $(l^x_i, l^y_i)$. We capture the distance between a device and a cloudlet by the Euclidean distance. For cloudlet $i$ and device $n$ at time slot $t$, the distance between them is calculated by $f_{ni}(t) = \sqrt{(l^x_n(t) - l^x_i)^2 + (l^y_n(t) - l^y_i)^2}$.
Video offloading to a cloudlet causes some data transmission delay, called offloading delay. The offloading delay for offloading the video chunk of device $n$ to cloudlet $i$ with a specific coding ratio $a_k$ at time slot $t$ is calculated by:
$$\Lambda^r_{nik}(t) = \frac{\tau_n r_{nik}(t)}{R_{ni}(t)}, \qquad (1)$$
where $R_{ni}(t) \triangleq B_i \log\big(1 + \frac{p_0 g_{ni}(t)}{N_0}\big)$ is the transmission rate from device $n$ to cloudlet $i$ at time slot $t$. Moreover, $p_0$ is the transmission power, $N_0$ is the noise power, and $g_{ni}(t)$ is the channel gain at time slot $t$, which is a function of $f_{ni}(t)$.
We consider energy consumption as a resource limitation for cloudlets. The energy required for executing a video chunk of device $n$ on cloudlet $i$ at time slot $t$ can be expressed as:
$$E_{nik}(t) = \kappa\, \tau_n r_{nik}(t) \cdot \epsilon\, C_i^2, \qquad (2)$$
where $\kappa$ is the number of CPU cycles required for processing one bit of the task and $\epsilon$ is the effective switched capacitance of the cloudlet processor.
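As a concrete illustration of Eqs. (1)-(2), the following minimal Python sketch evaluates the offloading delay and execution energy for one device-cloudlet pair. All numeric values (chunk size, coding ratio, bandwidth, channel gain, transmit and noise power) are illustrative assumptions rather than values prescribed by the model, and a base-2 logarithm is assumed for the rate.

```python
import math

def transmission_rate(B_i, p0, N0, g_ni):
    """Shannon-style rate R_ni(t) = B_i * log2(1 + p0 * g_ni / N0); base 2 is an assumption."""
    return B_i * math.log2(1.0 + p0 * g_ni / N0)

def offloading_delay(tau_n, r_nik, R_ni):
    """Eq. (1): delay of sending a chunk of tau_n bits compressed by ratio r_nik."""
    return tau_n * r_nik / R_ni

def execution_energy(kappa, epsilon, tau_n, r_nik, C_i):
    """Eq. (2): energy to process the compressed chunk on a cloudlet with frequency C_i."""
    return kappa * tau_n * r_nik * epsilon * C_i ** 2

# Illustrative (assumed) numbers: 5 MB chunk, coding ratio 0.5, 20 MHz bandwidth.
tau_n = 5 * 8e6              # chunk size in bits
r_nik = 0.5                  # selected coding ratio a_k
g_ni = 1e-6                  # assumed channel gain derived from the distance f_ni(t)
R_ni = transmission_rate(B_i=20e6, p0=0.5, N0=1e-13, g_ni=g_ni)
print(offloading_delay(tau_n, r_nik, R_ni))                    # seconds
print(execution_energy(1000, 1.2e-28, tau_n, r_nik, 10e9))     # joules
```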
To avoid excessive service migration, similar to [8], [9], we introduce a migration cost into our model. We denote by $m_{ij}$ the migration cost from cloudlet $i$ to cloudlet $j$ when a service is migrated accordingly.
B. Two-Stage Stochastic Video Offloading Formulation
We formulate VOP as a Two-stage Stochastic Program with recourse, called TSP-VOP, since VOP is not deterministic. TSP-VOP finds the offloading schedules and qualities of the video chunks by maximizing video coding ratios and minimizing their expected migration cost, while explicitly considering future device mobility.

We define the decision variables $X(t)$, $Y(t)$, and $Z(t)$ as the cloudlet allocation decision vector, the migration decision vector, and the video coding ratio decision vector for all devices at time slot $t$, respectively. More specifically, $x_{ni}(t)$ is 1 if cloudlet $i$ is allocated to device $n$ for offloading at time slot $t$, and 0 otherwise; $y_{nij}(t)$ is 1 if the service of device $n$ is migrated from cloudlet $i$ to cloudlet $j$ at time slot $t$, and 0 otherwise; $z_{nik}(t)$ is 1 if device $n$ offloads its video chunk to cloudlet $i$ with coding ratio $a_k$ at time slot $t$, and 0 otherwise.
In our proposed TSP-VOP model, the offloading decisions (the assigned cloudlet decision variables $X(t)$ and the video coding ratio decision variables $Z(t)$) at the current time slot $t$ are defined as the first-stage variables, which have to be decided prior to the realization of a scenario $\omega$ (i.e., the new locations of devices at time slot $t+1$); the scenario is instead known when the recourse decisions $X(t+1)$, $Y(t+1)$, and $Z(t+1)$ at the second stage are made. A scenario is defined as a possible realization of the (uncertain and unknown) future mobility of the devices. We assume each scenario $\omega$ occurs with probability $p_\omega$.

The objective of the formulated TSP-VOP is to determine the first-stage variables that maximize the video coding ratios and minimize the migration cost. In other words, the objective is to minimize the sum of the negative video coding ratios at the current time slot and the mathematical expectation of the recourse cost, which will be defined in the second stage. This objective function is defined as:

TSP-VOP-Stage 1 Objective:
$$\Gamma = -\sum_{n \in \mathcal{N}} \sum_{i \in \mathcal{M}} \sum_{k \in \mathcal{A}} a_k z_{nik}(t) + \mathbb{E}_p\big[W_{t+1}(X(t), Z(t), \xi(\omega))\big], \qquad (3)$$

where $\mathbb{E}_p[\cdot]$ is the expected recourse cost under all scenarios generated according to a probability distribution $p$. In addition, $W_{t+1}(X(t), Z(t), \xi(\omega))$ represents the optimal value of the second-stage problem knowing $X(t)$ and $Z(t)$, where $\xi(\omega) = \langle L^x(\omega), L^y(\omega) \rangle$ such that $\langle L^x(\omega), L^y(\omega) \rangle$ denotes the vector of new locations of devices in a realized scenario $\omega$.
TSP-VOP-Stage 2 Objective: Given the first-stage variables $X(t)$ and $Z(t)$ and a realized scenario $\omega$ for the next time slot $t+1$, we define the second-stage objective as follows:
$$W_{t+1}(X(t), Z(t), \xi(\omega)) = \min \sum_{n \in \mathcal{N}} \sum_{i \in \mathcal{M}} \Big( -\sum_{k \in \mathcal{A}} a_k z_{nik}(t+1) + \beta \sum_{j \in \mathcal{M}} y_{nij}(t+1)\, m_{ij} \Big) \qquad (4)$$

This objective function, or the recourse cost, minimizes the sum of the negated video coding ratios and the migration cost at time slot $t+1$. In addition, to solve our multi-objective optimization problem and find solutions with the best trade-off between the conflicting objectives, we introduce a constant coefficient, or weight, $\beta$ on the migration cost. This weight is chosen in proportion to the relative importance of the migration cost, and it can be flexibly adjusted based on the preferences of the MEC service provider.
TSP-VOP: Now, we present TSP-VOP:

Minimize $\Gamma$  (5)

Subject to:
$$\sum_{i \in \mathcal{M}} x_{ni}(t') = 1, \quad \forall n, t' \qquad (6)$$
$$\sum_{i \in \mathcal{M}} \sum_{k \in \mathcal{A}} z_{nik}(t') = 1, \quad \forall n, t' \qquad (7)$$
$$\sum_{k \in \mathcal{A}} z_{nik}(t') \le x_{ni}(t'), \quad \forall n, i, t' \qquad (8)$$
$$\sum_{i \in \mathcal{M}} \sum_{k \in \mathcal{A}} z_{nik}(t') \Lambda^r_{nik}(t') \le d_n, \quad \forall n, t' \qquad (9)$$
$$\sum_{n \in \mathcal{N}} \sum_{k \in \mathcal{A}} z_{nik}(t') E_{nik}(t') \le E_i, \quad \forall i, t' \qquad (10)$$
$$\sum_{i \in \mathcal{M}} \sum_{j \in \mathcal{M}} y_{nij}(t+1) \le 1, \quad \forall n \qquad (11)$$
$$x_{ni}(t) + x_{nj}(t+1) - 1 \le y_{nij}(t+1), \quad \forall n, i, j,\ i \ne j \qquad (12)$$
$$x_{ni}(t'),\ y_{nij}(t'),\ z_{nik}(t') \in \{0, 1\}, \quad \forall n, i, j, k, t' \qquad (13)$$

where $t' \in \{t, t+1\}$. The objective function in Eq. (5) represents an optimal cost that minimizes the sum of the negative video coding ratios at the current time slot and the expected cost at the next time slot. Constraint (6) ensures each device is served by only one cloudlet. Constraint (7) ensures each device selects only one coding ratio for its video chunk each time. Constraint (8) ensures a coding ratio is selected only if a device is assigned to a cloudlet. Constraint (9) guarantees that the offloading delay does not exceed the deadline. Constraint (10) ensures the total energy consumption of executing received video chunks at each cloudlet does not exceed its energy capacity. Constraint (11) guarantees that a service migration happens for each device at most once between two consecutive time slots. Constraint (12) sets the migration decision variables according to whether a service migration happens or not. Constraint (13) guarantees that the decision variables are binary.
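For a single, known realization of device locations, TSP-VOP collapses to a deterministic integer program over the two time slots. The sketch below encodes the objective and constraints (6)-(13) with the PuLP modeling library as a structural illustration only; the instance data (delays, energies, deadlines, migration costs) are random placeholders, and this is not the authors' implementation.

```python
import random
import pulp

N, M, K = 4, 2, 3                     # devices, cloudlets, coding ratios
A = [0.3, 0.6, 1.0]                   # available coding ratios a_k
beta = 5.0
T = [0, 1]                            # current slot t and the realized slot t+1

# Placeholder instance data (assumed, not from the paper).
delay = {(n, i, k, t): random.uniform(0.01, 0.2) for n in range(N)
         for i in range(M) for k in range(K) for t in T}          # Lambda^r_{nik}(t)
energy = {(n, i, k, t): random.uniform(10, 100) for n in range(N)
          for i in range(M) for k in range(K) for t in T}         # E_{nik}(t)
d = {n: 0.25 for n in range(N)}                                   # deadlines d_n
E_cap = {i: 5000 for i in range(M)}                               # energy budgets E_i
mig = {(i, j): 0.0 if i == j else 1.0 for i in range(M) for j in range(M)}

prob = pulp.LpProblem("TSP_VOP_single_scenario", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (range(N), range(M), T), cat="Binary")
z = pulp.LpVariable.dicts("z", (range(N), range(M), range(K), T), cat="Binary")
y = pulp.LpVariable.dicts("y", (range(N), range(M), range(M)), cat="Binary")

# Objective: negative coding ratios at t and t+1 plus the weighted migration cost.
prob += (pulp.lpSum(-A[k] * z[n][i][k][t] for n in range(N) for i in range(M)
                    for k in range(K) for t in T)
         + beta * pulp.lpSum(mig[i, j] * y[n][i][j]
                             for n in range(N) for i in range(M) for j in range(M)))

for t in T:
    for n in range(N):
        prob += pulp.lpSum(x[n][i][t] for i in range(M)) == 1                        # (6)
        prob += pulp.lpSum(z[n][i][k][t] for i in range(M) for k in range(K)) == 1   # (7)
        prob += pulp.lpSum(z[n][i][k][t] * delay[n, i, k, t]
                           for i in range(M) for k in range(K)) <= d[n]              # (9)
        for i in range(M):
            prob += pulp.lpSum(z[n][i][k][t] for k in range(K)) <= x[n][i][t]        # (8)
    for i in range(M):
        prob += pulp.lpSum(z[n][i][k][t] * energy[n, i, k, t]
                           for n in range(N) for k in range(K)) <= E_cap[i]          # (10)

for n in range(N):
    prob += pulp.lpSum(y[n][i][j] for i in range(M) for j in range(M)) <= 1          # (11)
    for i in range(M):
        for j in range(M):
            if i != j:
                prob += x[n][i][0] + x[n][j][1] - 1 <= y[n][i][j]                    # (12)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("objective:", pulp.value(prob.objective))
```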
Let $\Omega$ be the set of all possible scenarios $\{\omega_1, \ldots, \omega_{|\Omega|}\}$, each of which has an associated probability $p_s$, where $s \in \{1, \ldots, |\Omega|\}$ is the index of a scenario. The mathematical expectation $\mathbb{E}[W_{t+1}(X(t), Z(t), \xi(\omega))]$ can then be evaluated by:
$$\mathbb{E}[W_{t+1}(X(t), Z(t), \xi(\omega))] = \sum_{s=1}^{|\Omega|} p_s W_{t+1}(X(t), Z(t), \xi(\omega_s)).$$
As the number of possible scenarios grows exponentially with the size of the problem ($|\Omega| = \Psi^N$, where $N$ is the number of mobile devices and $\Psi$ is the number of possible location changes for each device), solving the above equation is computationally intractable. To address this challenge, we propose an approximate data-driven solution, presented next.
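To give a sense of the scale, a back-of-the-envelope check under assumed values (each device either stays or moves to one of four neighboring cells, so Ψ = 5, with N = 80 devices as in the later evaluation) shows how large the scenario set already is:

```python
# Assumed values: Psi = 5 possible moves per device, N = 80 devices.
Psi, N = 5, 80
num_scenarios = Psi ** N                         # |Omega| = Psi^N
print(f"|Omega| = {float(num_scenarios):.2e}")   # about 8.3e+55 scenarios
```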
III. DATA-DRIVEN VIDEO OFFLOADING ALGORITHM
The basic idea of the Sample Average Approximation (SAA) method is simple: a sample $h$ with a specific size $S$ (i.e., the number of scenarios), denoted by $\{\omega_1, \ldots, \omega_S\}$, is generated from the scenario set $\Omega$ according to the probability distribution $p$. The mathematical expectation $\mathbb{E}[W_{t+1}(X(t), Z(t), \xi(\omega))]$ is then approximated by the corresponding sample average function, meaning that:
$$\mathbb{E}[W_{t+1}(X(t), Z(t), \xi(\omega))] \approx \frac{1}{S} \sum_{s=1}^{S} W_{t+1}(X(t), Z(t), \xi(\omega_s)). \qquad (14)$$
Therefore, we introduce SAA-VOP by formulating our TSP-VOP using the SAA method. The objective function of SAA-VOP that corresponds to sample $h$ is defined as follows:
$$\Gamma^{hS} = \min\ -\sum_{n \in \mathcal{N}} \sum_{i \in \mathcal{M}} \sum_{k \in \mathcal{A}} a_k z_{nik}(t) + \frac{1}{S} \sum_{s=1}^{S} W_{t+1}(X(t), Z(t), \xi(\omega_s)), \qquad (15)$$
where $\Gamma^{hS}$ estimates the optimal cost for TSP-VOP (i.e., $\Gamma$). In addition, $X^{hS}(t)$ and $Z^{hS}(t)$ are the obtained offloading solutions (cloudlet and video quality decisions) for the decision variables of SAA-VOP that correspond to the solutions of our original TSP-VOP. The obtained SAA-VOP can be solved deterministically.
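A minimal sketch of the plain SAA estimate in Eq. (14) follows: draw S scenarios from the mobility distribution and average the second-stage values for a fixed first-stage decision. The function `second_stage_cost` is a hypothetical placeholder standing in for solving Eq. (4), and the scenario generator is only an assumed stand-in for the distribution p.

```python
import random

def saa_recourse_estimate(first_stage, sample_scenarios, second_stage_cost):
    """Approximate E[W_{t+1}(X(t), Z(t), xi(omega))] by a sample average, Eq. (14)."""
    S = len(sample_scenarios)
    return sum(second_stage_cost(first_stage, omega) for omega in sample_scenarios) / S

def draw_scenario(num_devices):
    # Each entry is an assumed (x, y) grid cell of a device at slot t+1.
    return [(random.randint(0, 9), random.randint(0, 9)) for _ in range(num_devices)]

scenarios = [draw_scenario(num_devices=80) for _ in range(100)]   # S = 100
# estimate = saa_recourse_estimate(first_stage, scenarios, second_stage_cost)
```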
A. K-means Clustering for Scenario Reduction
In the traditional SAA method, a large number of scenarios per sample are generated to estimate the expected function. To reduce the computational complexity of SAA and achieve approximate solutions, we propose a novel Clustering-based Sample Average Approximation Video Offloading Algorithm, called CSAA-VOA. Our approach utilizes K-means clustering to efficiently select fewer scenarios for each sample. Specifically, the $S$ generated scenarios in each sample can be deemed the observations in K-means clustering, each of which is an $N$-dimensional vector, where $N$ is the number of devices. The vector corresponding to a scenario $\omega_s \in \{\omega_1, \ldots, \omega_S\}$ is denoted by $\vec{v}_s = \big((l^x_{1,s}(t'), l^y_{1,s}(t')), \ldots, (l^x_{N,s}(t'), l^y_{N,s}(t'))\big)$, where $(l^x_{n,s}(t'), l^y_{n,s}(t'))$ represents the two-dimensional coordinate of device $n \in \{1, \ldots, N\}$ at the next time slot $t' = t+1$ under scenario $\omega_s$.
A distance metric is required to calculate and compare the (dis)similarity between each pair of scenarios. Since each scenario is composed of two-dimensional coordinates of the devices, CSAA-VOA uses the Euclidean distance for any two vectors $\vec{v}_a$ and $\vec{v}_b$:
$$f(\vec{v}_a, \vec{v}_b) = \sqrt{\sum_{n=1}^{N} \Big( (l^x_{n,a}(t') - l^x_{n,b}(t'))^2 + (l^y_{n,a}(t') - l^y_{n,b}(t'))^2 \Big)}.$$
CSAA-VOA utilizes K-means clustering to group all $S$ scenarios into $C$ clusters based on this distance metric. The scenario with the minimum distance to the centroid of its cluster is selected as the representative of that cluster.
A key property of CSAA-VOA is that it also takes into account the density of the clusters when calculating the sample average function. Since only one scenario represents each cluster, computing the sample average function (i.e., Eq. (14), which is the second term of the objective of SAA-VOP in Eq. (15)) based on these centroid scenarios alone would deviate from the original expectation over all $S$ scenarios and hence become ineffective. To tackle this challenge, CSAA-VOA introduces a density weight for each cluster. Considering $\pi_c$ as the number of scenarios in cluster $c \in \{1, \ldots, C\}$, we assign a scaled weight $\Pi_c = \pi_c / S$.
The total number of scenarios $S$ for solving SAA-VOP now decreases to $C$, where $C \ll S$. The new set of scenarios is defined as $\{\bar{\omega}_1, \ldots, \bar{\omega}_C\} \subseteq \{\omega_1, \ldots, \omega_S\}$, and the associated weight set is denoted by $\{\Pi_1, \ldots, \Pi_C\}$. After clustering, we now reformulate the objective function of SAA-VOP, defined in Eq. (15), based on the new set of scenarios as follows:
$$\hat{\Gamma}^{hC} = \min\ -\sum_{n \in \mathcal{N}} \sum_{i \in \mathcal{M}} \sum_{k \in \mathcal{A}} a_k z_{nik}(t) + \sum_{c=1}^{C} \Pi_c W_{t+1}(X(t), Z(t), \xi(\bar{\omega}_c)), \qquad (16)$$
where the sample size for solving SAA-VOP is reduced by utilizing K-means clustering. We refer to this as CSAA-VOP, the Clustering-based SAA Video Offloading Problem.
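The scenario-reduction step of CSAA-VOP can be sketched with scikit-learn's KMeans: flatten each scenario into a 2N-dimensional coordinate vector, cluster into C groups, take the member closest to each centroid as the representative scenario, and attach the density weight Π_c = π_c / S. The function and array names below are illustrative assumptions, not the authors' code.

```python
import numpy as np
from sklearn.cluster import KMeans

def reduce_scenarios(scenarios, C, seed=0):
    """scenarios: array of shape (S, 2N) with the (x, y) coordinates of all devices per scenario.
    Returns C representative scenarios and their density weights Pi_c = pi_c / S."""
    S = len(scenarios)
    km = KMeans(n_clusters=C, n_init=10, random_state=seed).fit(scenarios)
    representatives, weights = [], []
    for c in range(C):
        members = np.where(km.labels_ == c)[0]
        # Representative: the member scenario closest to the cluster centroid.
        dists = np.linalg.norm(scenarios[members] - km.cluster_centers_[c], axis=1)
        representatives.append(scenarios[members[np.argmin(dists)]])
        weights.append(len(members) / S)
    return np.array(representatives), np.array(weights)

# The reduced recourse term of Eq. (16) then weights each representative's cost:
# sum(Pi_c * W_next(first_stage, rep_c) for rep_c, Pi_c in zip(representatives, weights))
```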
B. Design of CSAA-VOA
We now present the details of our proposed Clustering-based SAA Video Offloading Algorithm, called CSAA-VOA, to solve TSP-VOP using the SAA method. The implementation of CSAA-VOA is summarized in Algorithm 1. CSAA-VOA is executed at the beginning of each time slot.

At each current time slot $t$, CSAA-VOA generates a set of independent samples $\{1, \ldots, H\}$, each containing a large set of scenarios $\{\omega_1, \ldots, \omega_S\}$ for the expected new locations of the devices at the next time slot $t+1$, according to the probability distribution (line 1). These samples are used to estimate the recourse cost function. An extra sample $h'$ containing $S'$ scenarios, with $S' > C$, is generated in order to evaluate the obtained candidate solutions (line 2).
For each sample $h$, CSAA-VOA first utilizes K-means clustering to obtain a much smaller number of $C$ scenarios, where $C \ll S$ (line 4). Then, the algorithm uses these filtered scenarios $\{\bar{\omega}_1, \ldots, \bar{\omega}_C\}$ to solve CSAA-VOP (Eq. (16)) for that sample. CSAA-VOA computes the optimal value of the second stage for each device $n$, denoted by $W^{t+1}_s(x_{ni}(t), z_{nik}(t), \xi(\omega_s))$, using Eq. (4) (line 6). For each cloudlet $i$ that is temporarily assigned to device $n$ at the current time slot $t$, CSAA-VOA finds $\hat{\Gamma}^{hC}_{ni}(t)$ based on the obtained $W^{t+1}_s$ in all $C$ scenarios of sample $h$ using Eq. (16) (line 7).
Algorithm 1 CSAA-VOA
1: Generate $H$ samples, each with $S$ scenarios, where $S \gg C$
2: Generate an extra sample $h'$ with $S'$ scenarios, where $S' > C$
3: for each sample $h \in \{1, \ldots, H\}$ do
4:   $\{\bar{\omega}_1, \ldots, \bar{\omega}_C\} \leftarrow$ K-means$(C, \{\omega_1, \ldots, \omega_S\})$
5:   for each device $n \in \mathcal{N}$ do
6:     Calculate $W^{t+1}_s(x_{ni}(t), z_{nik}(t), \xi(\omega_s))$
7:     Calculate $\hat{\Gamma}^{hC}_{ni}(t)$
8:   $\Gamma^{hC}, X^{hC}(t), Z^{hC}(t) =$ Assign$(\mathcal{N}, \mathcal{M}, \langle \hat{\Gamma}^{hC}_{ni}(t), \forall n, i \rangle)$
9: for each sample $h \in \{1, \ldots, H\}$ do
10:   Assume device $n$ is offloading its video to cloudlet $i$
11:     with coding ratio $a_k$ at $t$ based on the obtained candidate
12:     solution $(X^{hC}(t), Z^{hC}(t))$
13:   for each device $n \in \mathcal{N}$ do
14:     Calculate $W^{t+1}_{s'}(x_{ni}(t), z_{nik}(t), \xi(\omega_{s'}))$
15:     Calculate $\hat{\Gamma}^{hS'}_{ni}(t)$
16:   $\Gamma^{hS'}, X^{hS'}(t), Z^{hS'}(t) =$ Assign$(\mathcal{N}, \mathcal{M}, \langle \hat{\Gamma}^{hS'}_{ni}(t), \forall n, i \rangle)$
17: $\Gamma^{HS'} \leftarrow \min_{h \in H} \Gamma^{hS'}$
18: return $\Gamma^{HS'}, X^{hS'}(t), Z^{hS'}(t)$
The above steps are repeated for each device in each sample $h$. Having all $\hat{\Gamma}^{hC}_{ni}(t)$ for all devices and cloudlets, CSAA-VOA needs to find the best allocation. This step can be modeled as a Generalized Assignment Problem (GAP) and solved using existing approximation algorithms or heuristics (this is out of the scope of this study). To solve this assignment problem for each sample $h$, CSAA-VOA calls the Assign() function to compute the best candidate solution. This function returns the best offloading decision $X^{hC}(t)$ and $Z^{hC}(t)$ along with its cost $\Gamma^{hC}$ (line 8). Therefore, for each of the $H$ samples, CSAA-VOA finds a candidate offloading solution.
Next, these candidate solutions are evaluated using the extra sample $h'$ with $S'$ scenarios (lines 9-16). Considering a candidate solution with $X^{hC}(t)$ and $Z^{hC}(t)$, the value of $W^{t+1}_{s'}$ for each scenario $\omega_{s'}$ is simply determined (line 14). Then, the best cost, $\hat{\Gamma}^{hS'}(t)$, is calculated by calling the Assign() function using all $S'$ scenarios (line 16). The minimum value (denoted by $\Gamma^{HS'}$) among all $H$ values of $\hat{\Gamma}^{hS'}(t)$ is calculated (line 17). Finally, CSAA-VOA returns this minimum value as the best objective value and its associated solutions as the offloading decisions for the devices at the current time slot $t$ (line 18).
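Putting the pieces together, a compact sketch of the overall CSAA-VOA loop in Algorithm 1 follows. The helpers `generate_sample`, `reduce_scenarios`, `solve_csaa_vop` (the Eq. (16) model plus the Assign() step), and `evaluate_candidate` are hypothetical placeholders for the components described above, so this is a structural outline rather than the authors' implementation.

```python
def csaa_voa(generate_sample, reduce_scenarios, solve_csaa_vop, evaluate_candidate,
             H=5, S=100, C=10, S_prime=200):
    """Clustering-based SAA: returns the best first-stage offloading decision."""
    validation = generate_sample(S_prime)               # extra sample h' (line 2)
    best_cost, best_solution = float("inf"), None
    for _ in range(H):                                   # lines 3-8
        sample = generate_sample(S)
        reps, weights = reduce_scenarios(sample, C)      # K-means step (line 4)
        candidate = solve_csaa_vop(reps, weights)        # solves Eq. (16) via Assign()
        cost = evaluate_candidate(candidate, validation)  # lines 9-16
        if cost < best_cost:                             # line 17
            best_cost, best_solution = cost, candidate
    return best_cost, best_solution                      # line 18
```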
IV. PERFORMANCE EVALUATION
A. Experimental Setup
Parameter Settings. We consider $M = 10$ cloudlets and $N = 80$ devices. The locations of the cloudlets and devices are randomly chosen on a 2D grid. The length of a time slot is set to 300 milliseconds. The original size of each video chunk is uniformly selected from [1, 10] MB, and the deadline of each video chunk is set to less than the length of a time slot. To specify the required CPU cycles of a video chunk, we set $\kappa = 1000$ [12]. We consider 10 video coding ratios for each chunk as $\{0.1, 0.2, \ldots, 1\}$, where a coding ratio of 1 means no compression. For each cloudlet $i$, its computing capability $C_i$ is arbitrarily chosen from [5, 25] GHz, its bandwidth $B_i$ is randomly selected from [10, 30] MHz, and the value of the effective switched capacitance $\epsilon$ is set to $1.2 \times 10^{-28}$ [13]. The energy consumption capacity of a cloudlet at each time slot is arbitrarily selected from [8000, 15000] Joules [14]. As for the wireless communication, we set $p_0 = 0.5$ Watts and $N_0 = 1.0 \times 10^{-13}$ Watts. The migration cost coefficient is set to $\beta = 5$ to strongly discourage migrations.
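For reference, the parameter settings above can be collected in a single simulator configuration. The negative exponents for the switched capacitance and the noise power are reconstructions of values whose signs were lost in the extracted text, and the lambdas for ranged parameters simply mirror the stated uniform draws.

```python
import random

CONFIG = {
    "num_cloudlets": 10,                       # M
    "num_devices": 80,                         # N
    "slot_length_ms": 300,
    "chunk_size_MB": lambda: random.uniform(1, 10),
    "coding_ratios": [round(0.1 * k, 1) for k in range(1, 11)],   # 0.1, ..., 1.0
    "kappa_cycles_per_bit": 1000,
    "switched_capacitance": 1.2e-28,           # exponent sign reconstructed
    "cloudlet_GHz": lambda: random.uniform(5, 25),
    "bandwidth_MHz": lambda: random.uniform(10, 30),
    "energy_budget_J": lambda: random.uniform(8000, 15000),
    "p0_W": 0.5,
    "N0_W": 1.0e-13,                           # exponent sign reconstructed
    "beta": 5,
}
```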
Scenario Generation. We use real-world taxicab mobility traces in the central area of Rome [15] to capture the mobility of devices. The selected 4×4 km² area is equally divided into a two-dimensional grid of more than 10×10 cells. We use this data to generate the scenarios for the devices based on the probability distribution of taxi movements. A device at the current time slot either moves to one of its neighboring cells at the next time slot or stays in its current cell.
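A minimal sketch of the scenario generator described above: at each slot a device either stays in its current grid cell or moves to one of the neighboring cells. The probabilities in the paper are derived from the taxi traces; uniform placeholder probabilities are assumed here.

```python
import random

GRID = 10   # the 4x4 km area is divided into roughly a 10x10 grid of cells

def next_cell(cell, p_stay=0.5):
    """One realized move of a device: stay, or step to a 4-neighbor cell (clipped to the grid)."""
    x, y = cell
    if random.random() < p_stay:
        return (x, y)
    dx, dy = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
    return (min(max(x + dx, 0), GRID - 1), min(max(y + dy, 0), GRID - 1))

def generate_scenario(current_cells, p_stay=0.5):
    """A scenario omega_s: the realized next-slot cell of every device."""
    return [next_cell(c, p_stay) for c in current_cells]

# Example: 80 devices starting at random cells, one sampled scenario.
devices = [(random.randrange(GRID), random.randrange(GRID)) for _ in range(80)]
scenario = generate_scenario(devices)
```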
B. Effectiveness of CSAA-VOA
To evaluate the performance of our CSAA-VOA under device mobility, we simulate a real-time movement trace and compare it with the following benchmarks:

• Video quality maximization (VQ-max): At each time slot, each device is assigned to a cloudlet that maximizes the video coding ratio in order to offload the highest-quality video chunk.
• Service migration minimization (SM-min): At each time slot, each device is assigned to a cloudlet that minimizes the migration cost in order to avoid a service migration.
• Random (RD): At each time slot, each device is randomly assigned to a cloudlet to offload its video chunk.

To simulate uncertain device mobility, we generate a real-time movement trace of 180 consecutive time slots using the scenario generator described in the setup. We measure the service performance in terms of video quality, service migration, and the objective value (cost) for all 80 devices. Moreover, we further evaluate the sensitivity of the cost. We set $H = 5$, $S = 100$, and $C = 10$ for CSAA-VOA.
Fig. 1: Service performance under real-time movement trace: (a) video quality, (b) service migration, (c) average cost.

Video Quality. Fig. 1a shows the average obtained video coding ratio per device. As expected, VQ-max obtains the highest video coding ratios (>0.98) for the devices, since it directly maximizes the video coding ratio. Both SM-min and RD result in values below the mean value of the video coding ratios. Our CSAA-VOA achieves a significantly higher average video coding ratio (>0.90) than RD and SM-min (<0.50).

Service Migration. Fig. 1b shows the number of service migrations required for all devices. As expected, SM-min outperforms the other approaches overall in terms of migration cost. RD performs the worst due to its random policy, while VQ-max also achieves poor performance. Our CSAA-VOA leads to far fewer service migrations than RD and VQ-max.
Objective Value (Cost). Fig. 1c shows the trade-off between video quality and service migration. This figure shows the average cost $\bar{\Gamma}$ (Eq. (15)) per device. Therefore, $-2$ represents the best (optimal) solution, in which all devices offload their video chunks with the highest quality (without compression, i.e., a video coding ratio of 1) while no migration cost is incurred. The results show that the proposed CSAA-VOA achieves near-optimal solutions. Specifically, CSAA-VOA outperforms VQ-max, SM-min, and RD by 23.16% to 53.97%, 60.25% to 73.45%, and 138.53% to 175.20%, respectively. This is because CSAA-VOA effectively finds a trade-off between the video coding ratio and the migration cost over time to compute the best offloading solution for the devices, leading to minimum migration cost and maximum video quality. The results also show that RD performs the worst due to its random policy.
Sensitivity Analysis of the Cost. We further investigate the impact of some important parameters on the obtained cost. For a fair analysis, the average costs obtained by these approaches are evaluated using the same $S' = 200$ new scenarios.

The impact of the number of devices is analyzed in Fig. 2a. Clearly, the performance of RD and SM-min is not greatly affected, due to their random policy in selecting cloudlets for offloading. The average cost obtained by VQ-max increases as the number of devices increases, because the deadline restriction (Constraint (9)) and the energy restriction (Constraint (10)) become tighter as the number of devices increases. Additionally, the average cost of the proposed CSAA-VOA remains the lowest.
Fig. 2: Sensitivity analysis of average cost: (a) under different $N$, (b) under different deadline $d_n$, (c) under different $\beta$.

The impact of the deadline for offloading a video chunk is shown in Fig. 2b. As the deadline increases, the video quality of offloading can improve (by allowing a higher video coding ratio). Therefore, both RD and SM-min obtain better results (lower costs). In contrast, the average cost obtained by VQ-max first decreases and then increases as the deadline increases. This is because the video coding ratio rapidly increases to reach a high value when the deadline is small, while the migration cost continues to increase. Differently, the average cost of CSAA-VOA monotonically decreases toward the near-optimal cost as the deadline increases. This supports the fact that our CSAA-VOA adaptively computes the best offloading solution for the devices by minimizing the migration cost and maximizing the video coding ratio over time.
Fig. 2c shows the impact of the migration cost coefficient. When $\beta = 0$ (the migration cost is not considered), SM-min and RD obtain similar values, while CSAA-VOA and VQ-max obtain similar values. This is because, in this case, both SM-min and RD compute the offloading solution using their random policy, while CSAA-VOA and VQ-max only optimize the video coding ratio. Further, when the migration cost is considered (i.e., $\beta > 0$), the average cost of RD and VQ-max increases greatly as $\beta$ increases, because neither approach optimizes the migration cost. However, the average cost obtained by SM-min is not impacted by $\beta$, as it always minimizes the migration cost. Note that VQ-max performs worse than SM-min when $\beta$ is set to a large value. On the other hand, our proposed CSAA-VOA outperforms these benchmarks with a stably lower average cost. This again supports the fact that CSAA-VOA effectively finds a trade-off between the video coding ratio and the migration cost over time.
V. CONCLUSION
In this paper, we studied the video offloading problem in MEC to minimize migration cost and maximize video quality without a priori knowledge of device mobility. We formulated the problem as a two-stage stochastic program. Since our stochastic optimization problem is computationally intractable, we designed a clustering-based sample average approximation (SAA) method, called CSAA-VOA, to achieve efficient scenario reduction without negatively impacting the quality of the results. Extensive experiments demonstrate the effectiveness of our proposed algorithm for video offloading.
Acknowledgment. This research was supported in part by Cisco
grant CG#1935382.
REFERENCES
[1] G. Forecast, "Cisco visual networking index: Global mobile data traffic forecast update, 2017–2022," Update, vol. 2017, p. 2022, 2019.
[2] M. Satyanarayanan, "The emergence of edge computing," Computer, vol. 50, no. 1, pp. 30–39, 2017.
[3] P. Mach and Z. Becvar, "Mobile edge computing: A survey on architecture and computation offloading," IEEE Communications Surveys & Tutorials, vol. 19, no. 3, pp. 1628–1656, 2017.
[4] X. Chen, J.-N. Hwang, D. Meng, K.-H. Lee, R. L. de Queiroz, and F.-M. Yeh, "A quality-of-content-based joint source and channel coding for human detections in a mobile surveillance cloud," IEEE Transactions on Circuits and Systems for Video Technology, vol. 27, no. 1, pp. 19–31, 2016.
[5] C. Long, Y. Cao, T. Jiang, and Q. Zhang, "Edge computing framework for cooperative video processing in multimedia IoT systems," IEEE Transactions on Multimedia, vol. 20, no. 5, pp. 1126–1139, 2017.
[6] L. Kong and R. Dai, "Efficient video encoding for automatic video analysis in distributed wireless surveillance systems," ACM Transactions on Multimedia Computing, Communications, and Applications, vol. 14, no. 3, pp. 1–24, 2018.
[7] Y. Sun, S. Zhou, and J. Xu, "EMM: Energy-aware mobility management for mobile edge computing in ultra dense networks," IEEE Journal on Selected Areas in Communications, vol. 35, no. 11, pp. 2637–2646, 2017.
[8] T. Ouyang, Z. Zhou, and X. Chen, "Follow me at the edge: Mobility-aware dynamic service placement for mobile edge computing," IEEE Journal on Selected Areas in Communications, vol. 36, no. 10, pp. 2333–2345, 2018.
[9] B. Gao, Z. Zhou, F. Liu, and F. Xu, "Winning at the starting line: Joint network selection and service placement for mobile edge computing," in Proc. of the IEEE Conference on Computer Communications, 2019, pp. 1459–1467.
[10] A. Shapiro, D. Dentcheva, and A. Ruszczyński, Lectures on Stochastic Programming: Modeling and Theory. SIAM, 2014.
[11] E. Eriksson, G. Dán, and V. Fodor, "Predictive distributed visual analysis for video in wireless sensor networks," IEEE Transactions on Mobile Computing, vol. 15, no. 7, pp. 1743–1756, 2015.
[12] W. Ma and L. Mashayekhy, "Truthful computation offloading mechanisms for edge computing," in Proc. of the IEEE International Conference on Edge Computing and Scalable Cloud, 2020, pp. 199–206.
[13] W. Ma, X. Liu, and L. Mashayekhy, "A strategic game for task offloading among capacitated UAV-mounted cloudlets," in Proc. of the IEEE International Congress on Internet of Things, 2019, pp. 61–68.
[14] H. Badri, T. Bahreini, D. Grosu, and K. Yang, "Energy-aware application placement in mobile edge computing: A stochastic optimization approach," IEEE Transactions on Parallel and Distributed Systems, vol. 31, no. 4, pp. 909–922, 2019.
[15] L. Bracciale, M. Bonola, P. Loreti, G. Bianchi, R. Amici, and A. Rabuffi, "CRAWDAD dataset roma/taxi (v. 2014-07-17)," downloaded from https://crawdad.org/roma/taxi/20140717/taxicabs, Jul. 2014, traceset: taxicabs.