2168-7161 (c) 2013 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See
http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI
10.1109/TCC.2014.2310452, IEEE Transactions on Cloud Computing
IEEE TRANSACTIONS ON CLOUD COMPUTING VOL. X, NO. X, 2013 1
Real-Time Tasks Oriented Energy-Aware
Scheduling in Virtualized Clouds
Xiaomin Zhu, Member, IEEE, Laurence T. Yang*, Senior Member, IEEE,
Huangke Chen, Ji Wang, Shu Yin, Member, IEEE, and Xiaocheng Liu
Abstract—Energy conservation is a major concern in cloud computing systems because it brings several important benefits, such as reducing operating costs, increasing system reliability, and promoting environmental protection; energy-aware scheduling is a promising way to realize these benefits. At the same time, many real-time applications, e.g., signal processing and scientific computing, have been deployed in clouds. Unfortunately, existing energy-aware scheduling algorithms developed for clouds are not real-time task oriented and thus cannot guarantee system schedulability. To address this issue, we first propose a novel rolling-horizon scheduling architecture for real-time task scheduling in virtualized clouds. A task-oriented energy consumption model is then given and analyzed. Based on our scheduling architecture, we develop a novel energy-aware scheduling algorithm, EARH, for real-time, aperiodic, independent tasks. EARH employs a rolling-horizon optimization policy and can also be extended to integrate other energy-aware scheduling algorithms. Furthermore, we propose two strategies, resource scaling up and scaling down, to make a good trade-off between tasks' schedulability and energy conservation. Extensive simulation experiments, injecting both random synthetic tasks and tasks following the latest version of the Google cloud tracelogs, are conducted to validate the superiority of EARH by comparing it with several baselines. The experimental results show that EARH significantly outperforms the baselines in scheduling quality and is well suited for real-time task scheduling in virtualized clouds.
Index Terms—virtualized cloud, real-time, energy-aware, scheduling, rolling-horizon, elasticity.
1 INTRODUCTION
THE cloud, consisting of a collection of interconnected and virtualized computers dynamically provisioned as one or more unified computing resources, has become a revolutionary paradigm by enabling on-demand provisioning of applications, platforms, or computing resources for customers based on a "pay-as-you-go" model [1]. Nowadays, an increasing number of enterprises and governments have deployed their applications, including commercial business and scientific research, in clouds, motivated by the reasonable prices offered at economies of scale and by the shifting of responsibility for maintenance, backups, and license management to cloud providers [2]. Hence, many IT companies benefit significantly from cloud providers, which relieve them of the need to set up basic hardware and software infrastructures and allow them to devote more attention to innovation and development in their main pursuits [3].
It is worthwhile to note that to provide cloud services, more and more large-scale data centers containing thousands of computing nodes are being built, which results
• Xiaomin Zhu, Huangke Chen, Ji Wang, and Xiaocheng Liu are with the Science and Technology on Information Systems Engineering Laboratory, National University of Defense Technology, Changsha, Hunan, P. R. China, 410073. E-mail: {xmzhu, hkchen, wangji}@nudt.edu.cn, nudt200203012007xcl@gmail.com
• Laurence T. Yang is with the Department of Computer Science, St. Francis Xavier University, Antigonish, NS, B2G 2W5, Canada. E-mail: ltyang@stfx.ca
• Shu Yin is with the School of Information Science and Engineering, Hunan University, Changsha 410012, China. E-mail: shuyin@hnu.edu.cn
in the consumption of a tremendous amount of energy at huge cost [5]. Moreover, high energy consumption lowers system reliability, since the Arrhenius life-stress model shows that the failure rate of electronic devices doubles for every 10°C rise in temperature [6]. Furthermore, high energy consumption has negative impacts on the environment. It is estimated that computer usage accounts for 2% of anthropogenic CO2 emissions, and data center activities are estimated to release 62 million metric tons of CO2 into the atmosphere [7]. Consequently, it is indispensable to employ measures that reduce the energy consumption of cloud data centers and make them energy efficient.
One of the important reasons for the extremely high energy consumption in cloud data centers is the low utilization of computing resources, which incurs a higher volume of energy consumption than efficient utilization would: resources with low utilization still consume an unacceptable amount of energy. According to recent studies, the average resource utilization in most data centers is lower than 30% [8], and the energy consumption of idle resources is more than 70% of peak energy [9]. In response to this poor resource utilization, virtualization is an efficient approach to increase resource utilization and in turn reduce energy consumption. Virtualization enables multiple virtual machines (VMs) to be placed on the same physical host and supports the live migration of VMs between physical hosts based on performance requirements. When VMs do not use all the provided resources, they can
be logically resized and consolidated onto the minimum number of physical hosts, while idle nodes can be switched to sleep or hibernate mode to eliminate idle energy consumption and thus reduce the total energy consumption in cloud data centers [3].
Noticeably, many applications deployed in clouds, e.g., scientific computing and signal processing, have a real-time nature, in which correctness depends not only on the computation results but also on the time instants at which these results are produced. For some applications, it is even mandatory to provide real-time guarantees. For example, if a photograph processing application is deployed in a cloud, then when an earthquake occurs there is an urgent need for the cloud data center to process photographs of affected areas, obtained by satellites, in a real-time manner. Generally, the processed photographs are expected within a few hours or even within dozens of minutes so that damage assessments can be conducted and rescue plans made in time. Missing the deadlines for these photographs may greatly affect damage assessment and rescue, resulting in catastrophic consequences. Another example, taken from [4], is a real-time signal processing application deployed in a cloud. In this application, the signal processing task must finish within its timing constraint; otherwise, the signal quality is greatly degraded. Obviously, guaranteeing real-time response for these kinds of applications is more crucial than energy conservation; hence, more physical hosts in a cloud should be active to finish such real-time tasks before their deadlines. In contrast, if some real-time tasks are not urgent, the cloud data center may use fewer physical hosts to reduce energy consumption while still satisfying the timing requirements of users.
Motivation. It is well known that scheduling is an efficient software technique to achieve high performance for applications running in clouds. Unfortunately, to the best of our knowledge, most existing energy-aware scheduling algorithms do not sufficiently consider real-time task allocation and energy saving simultaneously in clouds. Traditional energy-saving scheduling commonly assumes a known workload and separates task allocation from resource provisioning, which poses a big challenge in devising novel energy-aware scheduling strategies for real-time tasks in clouds so as to bridge the gap. To address this issue, we attempt in this paper to incorporate cloud elasticity and tasks' real-time guarantees into energy-aware scheduling strategies. Specifically, our approach first gives high priority to schedulability, i.e., if some real-time tasks cannot be finished before their deadlines using the currently active hosts, it adds new VMs or hosts to finish as many tasks as possible, even though much additional energy may be consumed. When some hosts are sitting idle, our approach strives to reduce energy consumption by consolidating VMs while maintaining high schedulability.
Contribution. The major contributions of this paper are as follows:
• We propose an energy-aware scheduling scheme based on rolling-horizon optimization, together with an enabling scheduling architecture for rolling-horizon optimization.
• We develop policies for VM creation, migration, and cancellation that dynamically adjust the scale of the cloud, meeting real-time requirements while striving to save energy.
• We put forward an Energy-Aware Rolling-Horizon scheduling algorithm, EARH, for real-time independent tasks in a cloud.
The rest of this paper is organized as follows: In Section 2, we summarize the related work in the literature. Section 3 formulates the problem. In Section 4, we describe the EARH algorithm and discuss the main principles behind it. Section 5 presents the simulation experiments and performance analysis. Section 6 concludes this paper with a summary and future work.
2 RELATED WORK
Green computing and energy conservation in the modern distributed computing context are receiving a great deal of attention in the research community, and efficient scheduling methods for this issue have been extensively investigated [19], [23], [24]. In a broad sense, scheduling algorithms can be classified into two categories: static scheduling and dynamic scheduling [10]. Static scheduling algorithms make scheduling decisions before tasks are submitted and are often applied to schedule periodic tasks [11]. However, aperiodic tasks whose arrival times are not known a priori must be handled by dynamic scheduling algorithms (see, for example, [12], [13]). In this study, we focus on scheduling aperiodic and independent real-time tasks.
Chase et al. considered the energy-efficient manage-
ment issue of homogeneous resources in Internet hosting
centers. The proposed approach reduces energy con-
sumption by switching idle servers to power saving
modes and is suitable for power-efficient resource allo-
cation at the data center level [14]. Zikos and Karatza
proposed a performance and energy-aware scheduling
algorithm in cluster environment for compute-intensive
jobs with unknown service time [13]. Ge et al. stud-
ied distributed performance-directed DVFS scheduling
strategies that can make significant energy savings with-
out increasing execution time by varying scheduling
granularity [15]. Kim et al. proposed two power-aware
scheduling algorithms (space-shared and time-shared)
for bag-of-tasks real-time applications on DVS-enabled
clusters to minimize energy dissipation while meeting
applications’ deadlines [16]. Nélis et al. investigated
the energy saving problem for sporadic constrained-
deadline real-time tasks on a fixed number of processors.
The proposed scheduling algorithm is preemptive; each
process can start to execute on any processor and may
migrate at run-time if it gets preempted by earlier-
deadline processes [17]. It should be noted that these
scheduling schemes do not consider resource virtualization, the most important feature of clouds; thus they cannot efficiently improve resource utilization in clouds.
Nowadays, virtualization technology has become an
essential tool to provide resource flexibly for each user
and to isolate security and stability issues from other
users [18]. Therefore, an increasing number of data cen-
ters employ the virtualization technology when man-
aging resources. Correspondingly, many energy-efficient
scheduling algorithms for virtualized clouds were de-
signed. For example, Liu et al. aimed to reduce en-
ergy consumption in virtualized data centers by sup-
porting virtual machine migration and VM placement
optimization while reducing the human intervention
[19]. Petrucci et al. presented the use of virtualization
for consolidation and proposed a dynamic configuration
method that takes into account the cost of turning on
or off servers to optimize energy management in vir-
tualized server clusters [20]. Bi et al. suggested a dy-
namic resource provisioning technique for cluster-based
virtualized multi-tier applications. In their approach, a
hybrid queuing model was employed to determine the
number of VMs at each tier [21]. Verma et al. formulated
the power-aware dynamic placement of applications in
virtualized heterogeneous systems as continuous opti-
mization, i.e., at each time frame, the VMs placement
is optimized to minimize energy consumption and to
maximize performance [22]. Beloglazov et al. proposed
some heuristics for dynamic adaption of VM allocation
at run-time based on the current utilization of resources
by applying live migration, switching idle nodes to sleep
mode [3]. Goiri et al. presented an energy-efficient and
multifaceted scheduling policy, modeling and managing
a virtualized data center, in which the allocation of VMs
is based on multiple facets to optimize the provider’s
profit [23]. Wang et al. investigated adaptive model-free
approaches for resource allocation and energy manage-
ment under time-varying workloads and heterogeneous
multi-tier applications, and multiple metrics including
throughput, rejection amount, queuing state were con-
sidered to design resource adjustment schemes [24].
Graubner et al. proposed an energy-efficient scheduling
algorithm that was based on performing live migrations
of virtual machines to save energy, and the energy costs
of live migrations including pre- and post-processing
phases were considered [25]. Unfortunately, to the best of our knowledge, little work considers dynamic energy-efficient scheduling for real-time tasks in virtualized clouds. In this study, we focus on energy-efficient scheduling by rolling-horizon optimization, which efficiently guarantees the schedulability of real-time tasks while striving to save energy by dynamic VM consolidation.
3 PROBLEM FORMULATION
In this section, we introduce the models, notation, and terminology used in this paper.
3.1 Scheduling Model
We target a virtualized cloud characterized by an infinite set H = {h_1, h_2, ...} of physical computing hosts providing the hardware infrastructure for creating virtualized resources to satisfy users' requirements. The active host set is modeled by H_a with n elements, H_a ⊆ H. A given host h_k is characterized by its CPU performance, defined in Million Instructions Per Second (MIPS) [28], [30], its amount of RAM, and its network bandwidth, i.e., h_k = {c_k, r_k, n_k}, where c_k, r_k, and n_k represent the CPU capability, RAM, and network bandwidth of the kth host, respectively. Each host h_k contains a set V_k = {v_{1k}, v_{2k}, ..., v_{|V_k|k}} of virtual machines (VMs). For a given VM v_{jk}, we use c(v_{jk}), r(v_{jk}), and n(v_{jk}) to denote the fractions of CPU performance, amount of RAM, and network bandwidth allocated to v_{jk}. Multiple VMs can be dynamically started and stopped on a single host based on the system workload. At the same time, some VMs are able to migrate across hosts in order to consolidate resources and further reduce energy consumption. Fig. 1 illustrates the scheduling architecture used for rolling-horizon optimization.
[Figure 1: users submit new tasks to a scheduler composed of a rolling horizon, a real-time controller, and a VM controller; using status information from the hosts and their VMs, the scheduler dispatches accepted tasks to VMs on hosts 1..m, rejects infeasible tasks, and issues VM adjustment information.]
Fig. 1. Scheduling architecture.
The scheduler consists of a rolling-horizon, a real-time
controller, and a VM controller. The scheduler takes tasks
from users and allocates them to different VMs. The
rolling-horizon holds both new tasks and waiting tasks
to be executed. A scheduling process is triggered by new
tasks, and all the tasks in the rolling-horizon will be
rescheduled.
When a new task arrives, the scheduling process follows the five steps below:
Step 1. The scheduler checks the system status information, such as running tasks' remaining execution times, active hosts, and VM deployments, as well as the information of tasks in the waiting pool, including their deadlines, currently allocated VMs, start times, etc.
Step 2. The tasks in the rolling-horizon are sorted by their deadlines to facilitate the scheduling operation.
Step 3. The real-time controller determines whether
a task in the rolling-horizon can be finished before its
deadline. If not, the real-time controller informs the VM
controller, and then the VM controller adds VMs to
finish the task within its timing constraint. If no schedule satisfying the task's timing requirement can be found even after enough VMs have been added, the task will be rejected. Otherwise, the task is retained in the rolling-horizon.
Step 4. The scheduling decisions for the tasks in the rolling-horizon are updated, e.g., their execution orders, start times, allocated VMs, and newly activated hosts.
Step 5. When a task in the rolling-horizon is ready to execute, it is dispatched to its assigned VM.
Additionally, when the system workload is light, e.g., tasks arrive slowly, have loose deadlines, or are few in number, the VM controller considers both the status of active hosts and the task information, and then decides whether some VMs should be stopped or migrated to consolidate resources so as to save energy. The real-time controller and the VM controller work together, first trying to meet users' timing requirements and then reducing energy consumption by dynamically allocating tasks and adjusting VMs and hosts.
3.2 Task Model
In this study, we consider a set T = {t_1, t_2, ...} of independent tasks that arrive dynamically. A task t_i submitted by a user is modeled by a collection of parameters, i.e., t_i = {a_i, l_i, d_i, f_i}, where a_i, l_i, d_i, and f_i are the arrival time, task length/size, deadline, and finish time of task t_i, respectively. We let rt_{jk} be the ready time of VM v_{jk} at host h_k. Similarly, let st_{ijk} be the start time of task t_i on VM v_{jk}. Due to the heterogeneity in the CPU processing capabilities of VMs, we let et_{ijk} be the execution time of task t_i on VM v_{jk}:

et_{ijk} = l_i / c(v_{jk}).   (1)

Let ft_{ijk} be the finish time of task t_i on VM v_{jk}; it can easily be determined as follows:

ft_{ijk} = st_{ijk} + et_{ijk}.   (2)

In addition, x_{ijk} is employed to reflect a mapping of tasks to VMs at different hosts in a virtualized cloud: x_{ijk} is "1" if task t_i is allocated to VM v_{jk} at host h_k, and "0" otherwise.
The finish time is, in turn, used to determine whether the task's timing constraint can be guaranteed, i.e.,

x_{ijk} = 0 if ft_{ijk} > d_i, and x_{ijk} ∈ {1, 0} if ft_{ijk} ≤ d_i.   (3)
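To make the timing model concrete, Eqs. (1)-(3) can be sketched in a few lines of Python. This is an illustrative sketch only; the `Task` class and the numeric values (a 1000-MI task, a 500-MIPS VM) are invented for the example and do not appear in the paper.

```python
from dataclasses import dataclass

@dataclass
class Task:
    arrival: float   # a_i
    length: float    # l_i, in million instructions (MI)
    deadline: float  # d_i (absolute time)

def execution_time(task, vm_mips):
    """Eq. (1): et_ijk = l_i / c(v_jk)."""
    return task.length / vm_mips

def finish_time(start, et):
    """Eq. (2): ft_ijk = st_ijk + et_ijk."""
    return start + et

def feasible(task, start, vm_mips):
    """Eq. (3): the mapping x_ijk may be 1 only if ft_ijk <= d_i."""
    return finish_time(start, execution_time(task, vm_mips)) <= task.deadline

# A 1000-MI task on a 500-MIPS VM starting at t=1 finishes at t=3.
t = Task(arrival=0.0, length=1000.0, deadline=4.0)
print(feasible(t, start=1.0, vm_mips=500.0))  # True: 1 + 1000/500 = 3 <= 4
```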
3.3 Energy Consumption Model
The energy consumed by hosts in a data center is mainly determined by the CPU, memory, disk storage, and network interfaces, of which the CPU consumes the largest part. We therefore consider in this paper the energy consumption of the CPU, as in [3]. Further, energy consumption can be divided into two parts: dynamic energy consumption and static (i.e., leakage) energy consumption [26]. Since dynamic energy consumption is normally dominant and static energy consumption follows a similar trend, we focus on dynamic energy consumption while building the energy consumption model.
Let ec_{ijk} be the energy consumption caused by task t_i running on VM v_{jk}. We denote the energy consumption rate of VM v_{jk} by ecr_{jk}; the energy consumption ec_{ijk} can then be calculated as follows:

ec_{ijk} = ecr_{jk} · et_{ijk}.   (4)
Hence, the total energy consumed by executing all the tasks is:

ec_{exec} = \sum_{k=1}^{|H_a|} \sum_{j=1}^{|V_k|} \sum_{i=1}^{|T|} x_{ijk} · ec_{ijk}
          = \sum_{k=1}^{|H_a|} \sum_{j=1}^{|V_k|} \sum_{i=1}^{|T|} x_{ijk} · ecr_{jk} · et_{ijk}.   (5)
We assume in Eq. (5) that no energy is consumed when VMs are sitting idle. However, this assumption is not valid in real virtualized cloud data centers. The idle energy consumption comprises two cases: all the VMs in a host are idle, or only some of the VMs in a host are idle.
When all the VMs in a host are sitting idle, the host can be set to a lower energy consumption rate by DVFS technology. In this case, we denote the energy consumption rate of VM v_{jk} by ecr'_{jk}. Let it_k be the idle time during which all the VMs in host h_k are idle; the energy consumption when a host is idle (i.e., all the VMs in this host are idle) can then be written as:

ec_{allIdle} = \sum_{k=1}^{|H_a|} \sum_{j=1}^{|V_k|} ecr'_{jk} · it_k.   (6)
If only some of the VMs in a host are idle, the energy consumption rates of the idle VMs are the same as when they are executing tasks, i.e., the energy consumption rate of VM v_{jk} is ecr_{jk}. We then obtain the analytical formula for the energy consumed in this case as:

ec_{partIdle} = \sum_{k=1}^{|H_a|} \sum_{j=1}^{|V_k|} ecr_{jk} · t_j^{partIdle}
             = \sum_{k=1}^{|H_a|} \sum_{j=1}^{|V_k|} ecr_{jk} · ( \max_{i=1}^{|T|} {f_i} − it_k − \sum_{i=1}^{|T|} x_{ijk} · et_{ijk} ),   (7)

where t_j^{partIdle} is the idle time of VM v_{jk} when only some VMs are sitting idle in host h_k.
Therefore, the energy consumption considering the execution time and idle time (ec_{ei}) is derived from Eq. (5),
Eq. (6), and Eq. (7) as:

ec_{ei} = ec_{exec} + ec_{allIdle} + ec_{partIdle}
        = \sum_{k=1}^{|H_a|} \sum_{j=1}^{|V_k|} \sum_{i=1}^{|T|} x_{ijk} · ecr_{jk} · et_{ijk}
        + \sum_{k=1}^{|H_a|} \sum_{j=1}^{|V_k|} ecr'_{jk} · it_k
        + \sum_{k=1}^{|H_a|} \sum_{j=1}^{|V_k|} ecr_{jk} · ( \max_{i=1}^{|T|} {f_i} − it_k − \sum_{i=1}^{|T|} x_{ijk} · et_{ijk} ).   (8)
Noticeably, a host may not be fully utilized: although some VMs are placed on it, part of its resource may remain unused. This unused resource also consumes energy. Suppose there are s periods, in each of which the count of VMs in host h_k differs from the adjacent periods. Using t_p to denote the duration of period p, the energy consumption of unused resource (ec_{ur}) is:

ec_{ur} = \sum_{k=1}^{|H_a|} \sum_{p=1}^{s} ( ecr(h_k) − \sum_{j=1}^{|V_k(p)|} ecr_{jk} ) · t_p,   (9)

where |V_k(p)| denotes the VM count in the pth period of host h_k.
Consequently, the total energy consumption to execute all the allocated tasks can be derived from Eq. (8) and Eq. (9) as:

ec = ec_{ei} + ec_{ur}.   (10)
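Under the stated assumptions, the energy model reduces to sums of rate-times-time products. The following sketch evaluates the execution part (Eq. (5)) and the all-idle part (Eq. (6)) for invented rates and durations; it is a numerical illustration, not the paper's implementation, and the flat list-of-pairs representation is an assumption made for brevity.

```python
def ec_exec(assignments):
    """Eq. (5): sum of ecr_jk * et_ijk over all allocated tasks (x_ijk = 1).
    `assignments` is a flat list of (rate, execution_time) pairs."""
    return sum(ecr * et for ecr, et in assignments)

def ec_all_idle(idle_vms):
    """Eq. (6): idle energy at the DVFS-lowered rate ecr'_jk, over it_k."""
    return sum(ecr_low * it for ecr_low, it in idle_vms)

# Illustrative numbers: two tasks at rate 10 running 2 s and 3 s,
# plus one VM idling for 4 s at a lowered rate of 2.
exec_part = ec_exec([(10.0, 2.0), (10.0, 3.0)])  # 50.0
idle_part = ec_all_idle([(2.0, 4.0)])            # 8.0
total = exec_part + idle_part                    # 58.0
```

The part-idle and unused-resource terms of Eqs. (7) and (9) follow the same pattern, with the VM's (or host's) idle duration derived from the makespan and the busy time.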
From the aforementioned analysis of energy consumption, we can see that the fewer the running hosts, the less the energy consumed. However, fewer hosts may greatly lower the guarantee ratio of real-time tasks. Energy conservation and the tasks' guarantee ratio are thus two conflicting objectives when scheduling in a virtualized cloud. Our EARH scheduling strategy makes a good trade-off between guarantee ratio and energy saving by dynamically starting hosts, closing hosts, and creating, canceling, and migrating VMs according to the system workload.
4 THE EARH SCHEDULING STRATEGY
In this section, we first introduce the methodology of rolling-horizon optimization and then integrate an energy-efficient scheduling algorithm into it.
4.1 Rolling-Horizon Optimization
Unlike the traditional scheduling scheme, where once a task is scheduled it is dispatched immediately to the local queue of a VM or a host, our approach puts all the waiting tasks in a Rolling Horizon (RH), and their schedules are allowed to be adjusted for system schedulability and possibly less energy consumption. The pseudocode of RH optimization is shown in Algorithm 1.
Algorithm 1 Pseudocode of RH Optimization
1: for each new task t_i do
2: Q ← NULL; R ← NULL;
3: Add the new task t_i into set Q;
4: for each waiting task t_w do
5: if st_{wjk} > a_i and x_{wjk} == 1 then
6: Add task t_w into set Q;
7: end if
8: if st_{wjk} + et_{wjk} ≥ rt_{jk} and x_{wjk} == 1 then
9: rt_{jk} ← st_{wjk} + et_{wjk};
10: Update the ready time of v_{jk} in vector R;
11: end if
12: Sort the tasks in Q by their deadlines in non-descending order;
13: for each task t_q in set Q do
14: Schedule task t_q by different energy-efficient scheduling algorithms;
15: if x_{qjk} == 0 then
16: Reject task t_q;
17: end if
18: end for
19: Update scheduling decisions;
20: end for
21: end for
In the pseudocode of RH optimization, when a new task arrives it is added into the set Q, which represents a rolling-horizon (see Line 3). Then all the waiting tasks are added into Q (see Lines 5-7), and the ready time of each VM is updated (see Lines 8-11). The tasks in Q are sorted by their deadlines (see Line 12). After that, the tasks in Q are scheduled using different algorithms, and any task that cannot be allocated is rejected (see Lines 13-18).
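The rolling-horizon loop can thus be paraphrased as: re-sort all waiting tasks by deadline and re-place each on a feasible VM, rejecting tasks that fit nowhere. Below is a minimal Python sketch of that loop; the dictionary-based VM model is invented for illustration, and earliest-finish placement stands in for the energy-aware VM selection of Algorithm 2.

```python
def rolling_horizon_schedule(horizon, vms, now):
    """Sketch of Algorithm 1: on each arrival, sort all waiting tasks by
    deadline (EDF order) and re-place them; tasks that cannot meet their
    deadline on any VM are rejected.
    horizon: list of (length, deadline) pairs;
    vms: list of dicts with 'mips' and 'ready' (the ready time rt_jk)."""
    accepted, rejected = [], []
    for length, deadline in sorted(horizon, key=lambda t: t[1]):
        best = None
        for vm in vms:
            start = max(vm['ready'], now)         # Eq. (11)
            finish = start + length / vm['mips']  # Eqs. (1)-(2)
            if finish <= deadline and (best is None or finish < best[1]):
                best = (vm, finish)
        if best is None:
            rejected.append((length, deadline))   # would trigger scale-up
        else:
            vm, finish = best
            vm['ready'] = finish                  # Eq. (12)
            accepted.append((length, deadline))
    return accepted, rejected
```

In the full EARH strategy, a rejection here would first invoke ScaleUpResource() before the task is finally dropped.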
4.2 Energy-Aware Scheduling Algorithm
In our energy-aware scheduling algorithm, we attempt to append a new task to the end of the tasks previously allocated to a VM. The start time st_{ijk} of task t_i on VM v_{jk} can thus be calculated as:

st_{ijk} = max{rt_{jk}, a_i},   (11)

where rt_{jk} is the ready time of VM v_{jk}; it is updated whenever a task is allocated to v_{jk}, e.g., when a new task t_p is allocated to v_{jk}, the new ready time rt_{jk} of v_{jk} is:

rt_{jk} = st_{pjk} + et_{pjk}.   (12)
The pseudocode of the energy-efficient scheduling algorithm is shown in Algorithm 2.
The energy-efficient scheduling algorithm is heuristic in fashion. It allocates each task to a VM in a way that aggressively meets tasks' deadlines while conserving energy. The algorithm calculates task t_i's start time and execution time on each VM (see Line 3). If t_i's deadline can be satisfied, meaning the task can be allocated, the algorithm calculates t_i's energy consumption (see Lines 4-7). If t_i cannot be successfully allocated to any current VM, it calls the function ScaleUpResource()
Algorithm 2 Pseudocode of energy-efficient scheduling
1: findTag ← FALSE; findVM ← NULL;
2: for each VM v_{jk} in the system do
3: Calculate the start time st_{ijk} by Eq. (11) and the execution time et_{ijk} by Eq. (1);
4: if st_{ijk} + et_{ijk} ≤ d_i then
5: findTag ← TRUE;
6: Calculate ec_{ijk} by Eq. (4);
7: end if
8: end for
9: if findTag == FALSE then
10: ScaleUpResource();
11: end if
12: if findTag == TRUE then
13: Select v_{sk} with minimal energy consumption to execute t_i; findVM ← v_{sk}; x_{ijk} ← 1;
14: else
15: x_{ijk} ← 0;
16: end if
17: Update the scheduling decision of t_i and remove it from Q;
striving to accommodate t_i by increasing resources (see Lines 9-11). If t_i can then be allocated, the algorithm selects the VM yielding minimal energy consumption to execute the task; otherwise, it rejects t_i (see Lines 12-16).
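The core selection rule of Algorithm 2 (among deadline-feasible VMs, pick the one with minimal energy consumption by Eq. (4)) can be sketched as follows. The dictionary fields ('mips', 'ready', 'ecr') are an assumed representation chosen for the example, not the paper's data structures.

```python
def select_vm(task, vms, now):
    """Among VMs that can finish `task` by its deadline, return the one
    whose energy cost ecr_jk * et_ijk (Eq. (4)) is smallest; return None
    if no VM is feasible (the caller would then try ScaleUpResource()).
    task: (length, deadline); vms: dicts with 'mips', 'ready', 'ecr'."""
    length, deadline = task
    best_vm, best_ec = None, float('inf')
    for vm in vms:
        et = length / vm['mips']         # Eq. (1)
        start = max(vm['ready'], now)    # Eq. (11)
        if start + et <= deadline:       # deadline check, Eq. (3)
            ec = vm['ecr'] * et          # Eq. (4)
            if ec < best_ec:
                best_vm, best_ec = vm, ec
    return best_vm
```

Note that a slower VM with a much lower rate can win: energy-minimal and finish-time-minimal choices differ, which is exactly the trade-off the deadline check keeps in bounds.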
When a task cannot be successfully allocated on any current VM, ScaleUpResource() is called to create a new VM with the goal of finishing the task within its deadline. In our study, we employ a three-step policy to create a new VM as follows:
Step 1. Create a new VM on a currently active host without any VM migration;
Step 2. If a new VM cannot be created in Step 1, migrate some VMs among the currently active hosts to free enough resource on a host and then create a VM on it;
Step 3. If a new VM cannot be created in Step 2, start a new host and then create a new VM on it.
We use st(h_k), ct(v_{jk}), and mt(v_{jk}) to denote the start-up time of host h_k, the creation time of VM v_{jk}, and the migration time of VM v_{jk}, respectively. The migration time mt(v_{jk}) can be defined as [27]:

mt(v_{jk}) = r(v_{jk}) / n(v_{jk}).   (13)
It should be noted that using different steps produces different start times for a task, i.e.,

st_{ijk} = a_i + ct(v_{jk}),                               if Step 1,
st_{ijk} = a_i + ct(v_{jk}) + \sum_{p=1}^{|P|} mt(v_{pk}), if Step 2,
st_{ijk} = a_i + st(h_k) + ct(v_{jk}),                     if Step 3,   (14)

where |P| is the number of VMs migrated in Step 2.
The pseudocode of the function ScaleUpResource() is shown in Algorithm 3.
In ScaleUpResource(), our algorithm first selects a kind of VM v_j (not yet placed on a host) that can finish the task within its deadline (see Line 1). Then it selects the host with the least remaining MIPS that can still accommodate v_j (see Lines 2-7). If no such host can be found, it migrates the VM with
Algorithm 3 Pseudocode of Function ScaleUpResource()
1: Select a kind of VM v_j with minimal MIPS on condition that t_i can be finished before its deadline;
2: Sort the hosts in H_a in decreasing order of CPU utilization;
3: for each host h_k in H_a do
4: if VM v_j can be added to host h_k then
5: Create VM v_{jk}; findTag ← TRUE; break;
6: end if
7: end for
8: if findTag == FALSE then
9: Search for the host h_s with minimal CPU utilization;
10: Find the VM v_{ps} with minimal MIPS in h_s;
11: for each host h_k except h_s in H_a do
12: if VM v_{ps} can be added to host h_k then
13: Migrate VM v_{ps} to host h_k; break;
14: end if
15: end for
16: if VM v_j can be added to host h_s then
17: Create VM v_{js};
18: if t_i can be finished on v_{js} before its deadline then
19: findTag ← TRUE;
20: end if
21: end if
22: end if
23: if findTag == FALSE then
24: Start a host h_n and put it in H_a;
25: Create VM v_{jn} on h_n;
26: if t_i can be finished on v_{jn} before its deadline then
27: findTag ← TRUE;
28: end if
29: end if
minimal MIPS off the host with minimal CPU utilization to a host with the minimal possible remaining MIPS (see Lines 9-15). After this migration, it checks whether VM v_j can be added to the host from which a VM has been migrated. If so, v_j is created on that host, and the algorithm checks whether the task can be finished on v_j before the task's deadline (see Lines 16-21). If no migration is feasible or the task cannot be finished successfully, it starts a host h_n and creates v_jn on it. Then it checks whether the task can be finished successfully on v_jn (see Lines 23-29).
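The three-step placement order of Algorithm 3 can be sketched as follows. This is a minimal illustration on a simplified host model (capacity checks reduced to MIPS only; dictionaries instead of CloudSim entities), not the authors' implementation; the dictionary fields and the new-host capacity value are assumptions.

```python
# Minimal sketch of Algorithm 3 (ScaleUpResource) on a toy host model:
# hosts = [{"capacity": MIPS, "vms": [vm_mips, ...]}]. Illustrative only.

def scale_up(hosts, vm_mips):
    """Place a VM of vm_mips: (1) on the most-utilized host that fits,
    (2) after migrating the smallest VM off the least-utilized host,
    (3) on a freshly started host. Returns (host, action)."""
    def free(h):
        return h["capacity"] - sum(h["vms"])

    # Step 1: hosts in decreasing order of utilization (fill busy hosts first).
    for h in sorted(hosts, key=lambda h: sum(h["vms"]) / h["capacity"],
                    reverse=True):
        if free(h) >= vm_mips:
            h["vms"].append(vm_mips)
            return h, "placed"

    # Step 2: migrate the smallest VM off the least-utilized host, then retry.
    if hosts:
        src = min(hosts, key=lambda h: sum(h["vms"]) / h["capacity"])
        if src["vms"]:
            small = min(src["vms"])
            for dst in hosts:
                if dst is not src and free(dst) >= small:
                    src["vms"].remove(small)
                    dst["vms"].append(small)
                    if free(src) >= vm_mips:
                        src["vms"].append(vm_mips)
                        return src, "migrated"
                    break

    # Step 3: start a new host (2000 MIPS is an arbitrary example capacity).
    new = {"capacity": 2000, "vms": [vm_mips]}
    hosts.append(new)
    return new, "started"

demo = [{"capacity": 1000, "vms": [900]}, {"capacity": 1000, "vms": [100]}]
_, first_action = scale_up(demo, 500)   # fits on the lightly loaded host
_, second_action = scale_up(demo, 600)  # migration is not enough; a host starts
```

Note that, mirroring Lines 9-21 of Algorithm 3, the sketch may perform a migration and still fall through to starting a new host when the freed capacity remains insufficient.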
Theorem 1. The time complexity of our energy-aware task allocation algorithm is O(N_t·N_vm + N_t·N_a·log(N_a)), where N_a is the number of active hosts, N_vm is the number of VMs, and N_t is the number of tasks.
Proof. The time complexity of calculating a task's start time and execution time on all the VMs in the system is O(N_vm) (Lines 3-9, Algorithm 2). In Algorithm 3, the complexity of selecting a VM with minimal MIPS is O(N_vm) (Line 1, Algorithm 3). It takes O(N_a·log(N_a)) to sort the hosts in decreasing order (Line 2, Algorithm 3). Checking whether a VM can be added to a host takes O(N_a) (Lines 3-7, Algorithm 3). The time complexity of finding the host with minimal CPU utilization is O(N_a) (Line 9, Algorithm 3). It takes O(N_vm) to find the VM with minimal MIPS (Line 10, Algorithm 3). Checking whether a VM can be migrated to a host takes O(N_a) (Lines 11-15, Algorithm 3). For the other lines, the time complexity is O(1). Hence, the
time complexity of our energy-aware task allocation algorithm is O(N_t)·(O(N_vm) + O(N_a·log(N_a)) + O(N_a)) = O(N_t·N_vm + N_t·N_a·log(N_a)). □
When the VMs do not use all the provided resources, they can be logically resized and consolidated onto the minimum number of physical hosts, while idle hosts can be shut down to eliminate idle energy consumption and thus reduce the total energy consumed by the cloud. Algorithm 4 gives the pseudocode of the resource scaling-down algorithm.
Algorithm 4 Pseudocode of Scaling Down Algorithm
1: SH ← ∅; DH ← ∅;
2: for each VM v_jk in the system do
3:   if v_jk's idle time it_jk > THRESH then
4:     Remove VM v_jk from host h_k and delete it;
5:   end if
6: end for
7: for each host h_k in H_a do
8:   if there is no VM on h_k then
9:     Shut down host h_k and remove it from H_a;
10:  end if
11: end for
12: Sort the hosts in H_a in increasing order of CPU utilization;
13: SH ← H_a; DH ← H_a and sort DH inversely;
14: for each host h_k in SH do
15:  shutDownTag ← TRUE; AH ← ∅;
16:  for each VM v_jk in h_k do
17:    migTag ← FALSE;
18:    for each host h_p in DH except h_k do
19:      if v_jk can be added to h_p then
20:        migTag ← TRUE; AH ← h_p; break;
21:      end if
22:    end for
23:    if migTag == FALSE then
24:      shutDownTag ← FALSE; break;
25:    end if
26:  end for
27:  if shutDownTag == TRUE then
28:    Migrate VMs in h_k to destination hosts; SH ← SH − AH − h_k; DH ← DH − h_k;
29:    Shut down host h_k and remove it from H_a;
30:  end if
31: end for
In the above algorithm, if there exists any VM whose idle time is larger than a pre-established threshold, this VM is canceled (see Lines 2-6). After this operation, if a host has no VM running on it, the host is shut down (see Lines 7-11). Algorithm 4 then puts the hosts into sets SH and DH. In SH, the hosts are sorted in increasing order of CPU utilization, and DH is sorted in the opposite order (see Lines 12-13). If all the VMs running on a host in SH can be added to one or more hosts in DH, these VMs are migrated to the destination hosts, and the source host is shut down after the migration. Otherwise, if the host has one or more VMs that cannot be migrated, the migration of all the VMs on that host is abandoned. At the same time, the destination hosts are removed from set SH, and the host that is shut down is also removed from DH (see Lines 14-31).
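The drain-and-shut-down logic of Algorithm 4 can be sketched on the same simplified host model used above. This is an illustration, not the authors' implementation: the `THRESH` value and the dictionary record layout are assumptions, and a host is drained only when all of its VMs fit elsewhere, with destination hosts excluded from later draining (the SH − AH − h_k step).

```python
# Minimal sketch of Algorithm 4 (scaling down). Illustrative only.
THRESH = 60.0  # idle-time threshold in seconds (left as a parameter in the paper)

def scale_down(hosts):
    """hosts = [{'capacity': MIPS, 'vms': [{'mips': ..., 'idle': seconds}]}]."""
    # Lines 2-6: delete VMs idle longer than the threshold.
    for h in hosts:
        h["vms"] = [v for v in h["vms"] if v["idle"] <= THRESH]
    # Lines 7-11: shut down hosts that are now empty.
    hosts[:] = [h for h in hosts if h["vms"]]

    def free(h):
        return h["capacity"] - sum(v["mips"] for v in h["vms"])

    # Lines 12-31: try to drain hosts from least to most loaded; a host is
    # drained only if ALL its VMs fit on other hosts, and hosts already used
    # as destinations stop being drain candidates (SH - AH - h_k).
    order = sorted(hosts, key=lambda h: sum(v["mips"] for v in h["vms"]))
    skip = set()
    for src in order:
        if id(src) in skip or not src["vms"]:
            continue
        spare = {id(h): free(h)
                 for h in hosts if h is not src and id(h) not in skip}
        plan, ok = [], True
        for v in src["vms"]:
            dst = next((h for h in hosts
                        if id(h) in spare and spare[id(h)] >= v["mips"]), None)
            if dst is None:
                ok = False
                break
            spare[id(dst)] -= v["mips"]
            plan.append((v, dst))
        if ok:  # migrate everything, then power the source host down
            for v, dst in plan:
                dst["vms"].append(v)
                skip.add(id(dst))
            src["vms"].clear()
            skip.add(id(src))
    hosts[:] = [h for h in hosts if h["vms"]]
    return hosts

demo = [
    {"capacity": 1000, "vms": [{"mips": 200, "idle": 0.0}]},
    {"capacity": 1000, "vms": [{"mips": 300, "idle": 0.0},
                               {"mips": 100, "idle": 120.0}]},
]
remaining = scale_down(demo)  # the long-idle VM is deleted; hosts consolidate
```

In the example, the 100-MIPS VM exceeds the idle threshold and is canceled; the two remaining VMs are then consolidated onto a single host so the other can be shut down.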
Theorem 2. The time complexity of our scaling-down algorithm is O(N_a²·N_vm), where N_a is the number of active hosts and N_vm is the number of VMs.
Proof. Checking whether there exists a VM whose idle time exceeds the pre-established threshold takes O(N_vm) (Lines 2-6, Algorithm 4). It takes O(N_a) to check whether a host needs to be shut down (Lines 7-11, Algorithm 4). It takes O(N_a·log(N_a)) to sort the hosts in increasing order (Line 12, Algorithm 4). The VM migration takes O(N_a²·N_vm) (Lines 14-31). For the other lines, the time complexity is O(1). Therefore, the complexity of our scaling-down algorithm is O(N_vm) + O(N_a) + O(N_a·log(N_a)) + O(N_a²·N_vm) = O(N_a²·N_vm). □
5 PERFORMANCE EVALUATION
To demonstrate the performance improvements gained by EARH, we quantitatively compare it with three baseline algorithms - non-RH-EARH (NRHEARH in short), non-Migration-EARH (NMEARH in short), and non-RH-Migration-EARH (NRHMEARH in short). In addition, we also compare them with four existing algorithms - ProfRS in [28], Greedy-R in [29], Greedy-P in [29] and FCFS in [29]. The algorithms used for comparison are briefly described as follows:
NRHEARH: Differing from EARH, NRHEARH does
not employ the rolling-horizon optimization.
NMEARH: Differing from EARH, NMEARH does not
employ the VM migration while allocating real-time
tasks.
NRHMEARH: Differing from EARH, NRHMEARH
employ neither the rolling-horizon optimization nor the
VM migration.
ProfRS: It firstly checks if a new task can wait until all
the accepted tasks complete in any initiated VMs. If the
new task cannot wait, then it checks whether the new
task can be inserted before any accepted tasks in any
initiated VMs. If not, the algorithm checks if the new task
can be accepted by initiating a new VM. This algorithm
does not consider consolidating VMs to minimal number
of servers when the system workload is light.
Greedy-R: It assigns the tasks with the quickest execution time first to the most powerful available cloud resource in order to minimize system response time.
Greedy-P: It assigns the tasks with the quickest execution time first to less powerful available cloud resources so as to maximize task parallelization and improve system response time.
FCFS: It assigns tasks, as soon as they are ready for execution, to any available cloud resource.
The performance metrics by which we evaluate the system performance include:
1) Guarantee Ratio (GR), defined as the total count of tasks guaranteed to meet their deadlines divided by the total count of tasks;
2) Total Energy Consumption (TEC), the total energy consumed by the hosts;
Fig. 2. Performance impact of task count: (a) guarantee ratio (%), (b) total energy consumption, (c) energy consumption per task, and (d) resource utilization (%) versus task count for FCFS, Greedy-P, Greedy-R, ProRS, NRHMEARH, NRHEARH, NMEARH, and EARH.
3) Energy Consumption per Task (ECT), calculated as ECT = total energy consumption / accepted task count;
4) Resource Utilization (RU), the average host utilization, calculated as

RU = (Σ_{k=1}^{|Ha|} Σ_{j=1}^{|Vk|} Σ_{i=1}^{|T|} l_i · x_ijk) / (Σ_{k=1}^{|Ha|} c_k · wt_k),

where wt_k is the active time of host h_k during an experiment.
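The four metrics can be computed from a simple run log, as the sketch below shows. The record layout (dictionaries with `met_deadline`, `energy`, `work_mi`, and so on) is an illustrative stand-in for whatever the simulator actually records, not the paper's data structures.

```python
def metrics(tasks, hosts):
    """GR, TEC, ECT, and RU from a finished simulation run.
    tasks: [{'met_deadline': bool}]  (accepted = met its deadline)
    hosts: [{'energy': J, 'capacity': MIPS, 'active_time': s,
             'work_mi': MI actually executed}]"""
    accepted = sum(1 for t in tasks if t["met_deadline"])
    gr = accepted / len(tasks)              # Guarantee Ratio
    tec = sum(h["energy"] for h in hosts)   # Total Energy Consumption
    ect = tec / accepted                    # Energy Consumption per Task
    # Resource Utilization: MI executed over MI the active hosts could
    # have executed (c_k * wt_k), mirroring the RU definition above.
    ru = (sum(h["work_mi"] for h in hosts)
          / sum(h["capacity"] * h["active_time"] for h in hosts))
    return gr, tec, ect, ru

# A tiny made-up run: 8 of 10 tasks meet their deadlines on one host.
tasks = [{"met_deadline": True}] * 8 + [{"met_deadline": False}] * 2
hosts = [{"energy": 4000.0, "capacity": 1000,
          "active_time": 10.0, "work_mi": 8000.0}]
gr, tec, ect, ru = metrics(tasks, hosts)
```

In this toy run GR = 0.8, TEC = 4000, ECT = 500, and RU = 0.8, matching the definitions above term by term.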
5.1 Simulation Method and Parameters
To ensure the repeatability of the experiments, we use simulation to evaluate the performance of the aforementioned algorithms. In our simulations, the CloudSim toolkit [30] is chosen as the simulation platform, and we add some new settings to conduct our experiments. The detailed settings and parameters are given as follows:
1) Each host is modeled to have one CPU core, and the CPU performance is 1000 MIPS, 1500 MIPS, or 2000 MIPS;
2) The energy consumption rates of the three kinds of hosts are 200 W, 250 W, and 400 W, respectively;
3) The start-up time of a host is 90 s and the creation time of a VM is 15 s;
4) We employ the parameter baseDeadline to control a task's deadline, which is calculated as:

d_i = a_i + baseDeadline,    (15)

where baseDeadline is uniformly distributed as U(baseTime, a × baseTime) and we set a = 4;
5) Task arrivals follow a Poisson distribution, and the parameter intervalTime determines the mean time interval between two consecutive tasks.
The values of parameters are listed in Table 1.
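Settings 4) and 5) can be reproduced with a short generator: a Poisson arrival process is obtained from exponentially distributed inter-arrival times with mean intervalTime, and each deadline follows Eq. (15). This is a sketch of how such a workload might be generated, not the authors' CloudSim extension; the function name and record fields are illustrative.

```python
import random

def make_tasks(n, interval_time=3.0, base_time=250.0, a=4,
               length_mi=100_000, seed=1):
    """Synthetic workload per settings 4)-5): Poisson arrivals with mean
    inter-arrival interval_time, deadlines d_i = a_i + U(base_time, a*base_time)."""
    random.seed(seed)
    tasks, t = [], 0.0
    for i in range(n):
        t += random.expovariate(1.0 / interval_time)   # Poisson process
        base_deadline = random.uniform(base_time, a * base_time)
        tasks.append({"id": i, "arrival": t,
                      "deadline": t + base_deadline, "length": length_mi})
    return tasks

sample = make_tasks(100)  # defaults match the fixed values in Table 1
```

Every generated slack d_i − a_i falls in [baseTime, 4·baseTime] = [250, 1000] s, and arrivals are non-decreasing in time.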
5.2 Performance Impact of Task Count
In this section, we present a group of experimental results comparing the eight algorithms in terms of the impact of task count. Fig. 2 shows the experimental results.
We can observe from Fig. 2(a) that all the algorithms maintain basically stable guarantee ratios regardless of the task count. This is because there are
TABLE 1
Parameters for Simulation Studies

Parameter          Value (Fixed)-(Varied)
Task Count (10^5)  (1)-(0.5, 1.0, 1.5, 2.0, 2.5, 3.0)
baseTime (s)       (250)-(100, 150, 200, 250, 300, 350, 400)
intervalTime (s)   (3)-(0, 2, 4, 6, 8, 10, 12)
taskLength (MI)    (100,000)
effectively infinite resources in clouds; thus, when the task count increases, new hosts are started up to finish more tasks. However, not all tasks can be finished successfully even though there are enough resources. We attribute this to the fact that starting a new host or creating a new VM incurs additional time cost, which means some real-time tasks with tight deadlines cannot be finished within their timing constraints. Besides, it can be seen that EARH and NMEARH have higher guarantee ratios than the other six algorithms. This can be explained by the fact that EARH and NMEARH employ the rolling-horizon optimization policy, which makes tasks with tight deadlines finish earlier, so the schedulability is significantly improved.
From Fig. 2(b), it can be observed that although NRHEARH conserves more energy, its guarantee ratio is poor (see Fig. 2(a)); in contrast, NMEARH has the highest energy consumption but a higher guarantee ratio, which reflects that NRHEARH and NMEARH lack a good trade-off between guarantee ratio and total energy consumption. Moreover, we can see that EARH and NRHEARH have better energy conservation ability than the others, and the trend becomes more obvious as the task count increases. This experimental result indicates that employing the VM migration policy is very efficient when scheduling real-time tasks. On one hand, when the task count increases, the current VMs can be consolidated to make room for creating new VMs, which avoids the energy consumption caused by adding new active hosts. On the other hand, the VMs on lightly loaded hosts can be migrated to other hosts so that the idle hosts can be shut down, which further reduces the energy consumption.
Fig. 2(c) reveals that the ECTs of NMEARH and
Fig. 3. Performance impact of task arrival rate: (a) guarantee ratio (%), (b) total energy consumption, (c) energy consumption per task, and (d) resource utilization (%) versus intervalTime for the eight algorithms.
NRHMEARH slightly increase with the increase of task count, whereas the other six algorithms, which employ the VM migration policy, basically maintain a stable ECT. This experimental result can be explained in two ways. First, when the task count increases, new VMs are needed to accommodate the real-time tasks. EARH and NRHEARH use the migration policy, striving to make room for these VMs and avoid starting new hosts. Therefore, the current resource is efficiently utilized, leading to a basically stable ECT. Second, ProRS, Greedy-R, Greedy-P and FCFS tend to reject the tasks with tight deadlines to avoid increasing the usage of resources.
Fig. 2(d) demonstrates that EARH and NRHEARH show a significant advantage over the other algorithms. This can be attributed to the benefit brought by the VM migration policy, which allows the system to fully utilize the hosts' computing capacity. On average, EARH outperforms NMEARH and NRHMEARH by 170.4% and 167.1%, respectively. Compared with ProRS, Greedy-R, Greedy-P and FCFS, EARH outperforms them by 114.1%, 98.0%, 92.9% and 120.2% on average, respectively.
5.3 Performance Impact of Task Arrival Rate
To examine the performance impact of the task arrival rate, we vary the value of intervalTime from 0 to 12 with step 2. Fig. 3 illustrates the performance of the eight algorithms.
One observation from Fig. 3(a) is that the eight algorithms have basically unchanged guarantee ratios no matter how intervalTime varies. This derives from the effectively infinite resource provided in clouds. When the value of intervalTime is small, tasks arrive within a short time; in this situation, creating new VMs or starting hosts is required to accommodate these tasks. In addition, EARH and NMEARH, which employ rolling-horizon optimization, have higher guarantee ratios than the other six algorithms, which do not. The explanation for this experimental result is similar to that for Fig. 2(a).
Fig. 3(b) shows that EARH and NRHEARH consume less energy than the other algorithms; the reason is the same as that for Fig. 2(b). Besides, we can see from Fig. 3(b) that when the parameter intervalTime changes, the energy consumption of the eight algorithms is basically unchanged, indicating that the task arrival rate has little impact on energy consumption.
The experimental results in Fig. 3(c) show that when intervalTime changes, the ECTs of the eight algorithms are basically constant, demonstrating that the task arrival rate has little impact on ECT. Besides, the ECTs of EARH and NRHEARH are obviously smaller than those of the other algorithms, with an explanation similar to that for Fig. 2(c). In addition, when tasks arrive almost at the same time (e.g., intervalTime is 0 or 2), NMEARH and NRHMEARH consume more energy per task than ProRS, Greedy-R, Greedy-P and FCFS. This is because NMEARH and NRHMEARH accept more tasks with tight deadlines than the others. When these tasks arrive continuously, the system has to create new VMs and hosts to accommodate them, which increases the energy consumed by the tasks with tight deadlines.
Fig. 3(d) shows that when intervalTime varies, EARH and NRHEARH again exhibit the advantage brought by the VM migration policy. On average, EARH outperforms NMEARH and NRHMEARH by 111.1% and 120.2%, respectively. Compared with ProRS, Greedy-R, Greedy-P and FCFS, EARH outperforms them by 92.7%, 87.4%, 69.4% and 94.7% on average, respectively.
5.4 Performance Impact of Task Deadline
The goal of this set of experiments is to evaluate the impact of task deadlines on the eight algorithms. Parameter baseTime varies from 100 to 400 with step 50.
It is observed from Fig. 4(a) that with the increase of baseTime (i.e., as task deadlines become looser), the guarantee ratios of the eight algorithms increase correspondingly. This is because the prolonged deadlines allow tasks to be finished later while still meeting their timing constraints. In addition, Fig. 4(a) shows that EARH and NMEARH have higher guarantee ratios. This can be explained by the rolling-horizon policy, under which tasks with tight deadlines are preferentially finished; thus, the guarantee ratio is enhanced.
From Fig. 4(b), we can see that when baseTime increases, the energy consumption of the eight algorithms increases correspondingly. This is because as
Fig. 4. Performance impact of task deadline: (a) guarantee ratio (%), (b) total energy consumption, (c) energy consumption per task, and (d) resource utilization (%) versus baseTime for the eight algorithms.
Fig. 4. Performance impact of task deadline.
the deadlines become looser, more tasks can be finished before their deadlines, and thus more energy is consumed. Also, the energy consumption of EARH and NRHEARH is less than that of the others, and the trend becomes more pronounced with the increase of baseTime. We attribute this trend to the fact that EARH and NRHEARH use the VM migration policy, which can efficiently utilize the resources of the active hosts and avoids starting more hosts to finish tasks. In contrast, the other six algorithms need to constantly start hosts to finish more tasks, resulting in more energy consumption.
Fig. 4(c) shows that the ECTs of NMEARH, NRHMEARH, ProRS, Greedy-R, Greedy-P and FCFS become larger with the increase of baseTime. This can be explained by the fact that these algorithms start more hosts and thus yield more idle resource, leading to lower utilization. When the value of baseTime is less than 300, the ECTs of EARH and NRHEARH decrease; the explanation is that as the deadlines become looser, more tasks can be finished on the currently active hosts, and the VM migration policy is employed without starting more hosts. Hence, the utilization of the active hosts is higher, leading to a smaller ECT. Nevertheless, when the value of baseTime is larger than 300, even with looser deadlines the currently active hosts cannot finish all the additional tasks; some hosts must be started, yielding some idle resource, and the ECTs therefore increase correspondingly.
The advantage of the VM migration policy is again shown in Fig. 4(d): EARH and NRHEARH have much higher resource utilizations than the other algorithms. Besides, we can see that the resource utilization of NRHEARH is sometimes even higher than that of EARH. This can be explained by the fact that the rolling-horizon policy makes the system accept more tasks with tight deadlines, which sometimes requires new computing resources and thus decreases the resource utilization slightly. EARH outperforms NMEARH and NRHMEARH by 164.1% and 173.1%, respectively. Compared with ProRS, Greedy-R, Greedy-P and FCFS, EARH outperforms them by 124.1%, 118.1%, 109.6% and 143.2% on average, respectively.
5.5 Evaluation Based on Real-World Trace
The above groups of experiments demonstrate the performance of the different algorithms under various random synthetic workloads. To validate our proposed algorithm for practical use, we also evaluate the algorithms on the latest version of the Google cloud tracelogs [31].
Fig. 5. The count of tasks submitted to the system (task count per 60 s versus time in seconds over the first 5 hours).
The tracelogs record the information of 25 million tasks grouped into 650 thousand jobs spanning 29 days. It is difficult to conduct an experiment on all the tasks due to their enormous count in the tracelogs. As a result, the first 5 hours of day 18, a representative day among the 29 days according to the analysis in [34], were selected as a testing sample. Over 200 thousand tasks were submitted to the cloud system during these 5 hours. To observe the change of the task count over time, we depict the count of tasks submitted in every 60 seconds in Fig. 5, where it can easily be seen that the task count fluctuates significantly over time. When large amounts of tasks surge into the system, the resource demand is at a peak, while the resource demand decreases sharply at the timestamps where few tasks were submitted. Based on this observation, it is straightforward to conclude that resource scaling up and scaling down are quite necessary for a cloud system.
On average, it takes about 1,587 seconds from the submission of a task to its finish (i.e., the response time of a task). The average execution time of the tasks is around 1,198 seconds. Further, the distributions of both the response time and the execution time approximately follow a lognormal distribution, which means that most tasks finish in a short time. On average, the ratio of the tasks' response time to their execution time is 2.89.
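Given trace records reduced to submit, schedule, and finish timestamps, the averages quoted above can be reproduced as follows. The field names are illustrative (the real tracelog stores these as separate event rows), and the ratio here is computed as the ratio of the two averages, which is one reasonable reading of the 2.89 figure.

```python
def trace_stats(records):
    """Average response time (finish - submit), average execution time
    (finish - schedule), and the ratio of the two averages."""
    n = len(records)
    avg_resp = sum(r["finish"] - r["submit"] for r in records) / n
    avg_exe = sum(r["finish"] - r["schedule"] for r in records) / n
    return avg_resp, avg_exe, avg_resp / avg_exe

# A two-record toy trace, just to show the computation.
avg_resp, avg_exe, ratio = trace_stats(
    [{"submit": 0.0, "schedule": 5.0, "finish": 15.0},
     {"submit": 0.0, "schedule": 2.0, "finish": 12.0}])
```

On the real day-18 sample, the same computation would yield roughly 1,587 s, 1,198 s, and 2.89 according to the figures reported above.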
TABLE 2
Performance for Google cloud workloads

Metric \ Algorithm  EARH   NMEARH  NRHEARH  NRHMEARH  ProRS  Greedy-R  Greedy-P  FCFS
GR                  96.7%  95.4%   94.1%    93.4%     95.0%  93.2%     97.0%     90.1%
TEC (×10^6)         2.55   3.28    2.68     3.87      2.65   2.59      3.13      2.82
ECT                 10.95  14.28   11.83    17.21     11.59  11.54     13.40     13.00
RU                  72.8%  43.5%   65.6%    39.8%     61.9%  57.0%     50.7%     55.1%
Fig. 6. The change of AHC and task count over time: eight panels comparing the active hosts count of EARH (red solid line, left Y-axis) with those of NMEARH, NRHEARH, NRHMEARH, ProRS, Greedy-R, Greedy-P, and FCFS (dashed lines), against the submitted task count (blue solid line, right Y-axis, ×10^3).
Due to the lack of some context information and normalized data, it is necessary to make four realistic assumptions, as follows:
• When a task fails (i.e., it is evicted or killed), we assume that it is reset back to the initial state. According to the statements in [32], [33], the Google cloud system manages to reschedule failed tasks and restarts them from the initial state.
• The task execution duration is taken from the last schedule event to the finish event. Tasks are normally resubmitted and rescheduled after evictions and failures.
• Task length l_i is calculated from the execution duration and the average CPU utilization. As the tracelog does not contain the task length in MI, we employ the method proposed in [34]:

l_i = (ts_finish − ts_schedule) × U_avg × C_CPU,    (16)

where ts_finish and ts_schedule represent the timestamps of the finish and schedule events, respectively, and U_avg denotes the average CPU usage of the task; all three values can be obtained from the tracelog. C_CPU represents the processing capacity of a CPU in the Google cloud. Since the machines' capacity data are rescaled in the trace, we assume that it is similar to our experimental host settings, i.e., C_CPU = 1,500 MIPS.
• The deadline of each task is designated through the ratio of response time to execution time. As discussed above, for the tasks in the first 5 hours of day 18, the average ratio is 2.89, so the deadline of each task is assumed to be β times its maximum execution time, where β is uniformly distributed in the range [2.6, 3.2].
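The last two assumptions (Eq. (16) and the β-based deadline) can be sketched together. The function and field names are illustrative; in particular, taking length_mi / C_CPU as the "maximum execution time" and anchoring the deadline at the schedule timestamp are simplifying assumptions made only for this sketch.

```python
import random

C_CPU = 1500.0  # MIPS; the assumed host capacity stated above

def task_from_trace(ts_schedule, ts_finish, u_avg, rng=random.Random(0)):
    """Derive a task length (Eq. (16)) and an assumed deadline from one
    trace record. beta ~ U(2.6, 3.2) per the assumption above."""
    duration = ts_finish - ts_schedule
    length_mi = duration * u_avg * C_CPU          # Eq. (16)
    max_exec = length_mi / C_CPU                  # execution time at C_CPU (assumed)
    beta = rng.uniform(2.6, 3.2)
    deadline = ts_schedule + beta * max_exec      # assumed deadline rule
    return length_mi, deadline

# A record that ran 100 s at 50% average CPU usage.
length_mi, deadline = task_from_trace(ts_schedule=0.0, ts_finish=100.0, u_avg=0.5)
```

For this record the derived length is 100 × 0.5 × 1500 = 75,000 MI, and the deadline falls between 2.6 and 3.2 times the 50-second execution time.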
Table 2 shows the performance of the eight algorithms on the Google cloud tracelogs. The results show that EARH exhibits outstanding performance in a realistic setting. The guarantee ratios of the eight algorithms are all high; however, some tasks are still rejected. This is because of the deadline restriction in our experiment, whereas in Google there is no hard deadline restriction. The ECTs of the eight algorithms are lower than those under the previous synthetic workloads. This is reasonable because there are many short-running tasks in the tracelogs; the lengths of these tasks are relatively small, so they consume less energy. Regarding resource utilization, the algorithms using the VM migration policy still outperform the other algorithms. This result indicates that our proposed algorithm can effectively enhance the system's resource utilization in practice. On the Google cloud workloads, in terms of resource utilization, EARH outperforms NMEARH, NRHEARH and NRHMEARH by 67.3%, 11.0% and 83.0%, respectively. Considering the other four algorithms, the average improvement of EARH reaches 30.3%.
In order to further investigate how our proposed algo-
rithm can improve the system utilization, we depict the
Fig. 7. Performance impact of Beta (β): (a) guarantee ratio (%), (b) total energy consumption, (c) energy consumption per task, and (d) resource utilization (%) versus β for the eight algorithms.
change of the active hosts' count (AHC for short) over time in Fig. 6, comparing EARH with each of the other algorithms. The blue solid line represents the task count, whose value is given by the right Y-axis. The red solid line and the dashed lines represent the AHCs of EARH and the other algorithms, whose values are given by the left Y-axis. It can be seen from the AHC of EARH that the resource provisioning in the system is elastic with respect to the request demand. When large amounts of tasks surge into the system, the AHC increases to accommodate them; when fewer tasks are submitted, the resource is over-provisioned and the AHC decreases to save energy. From the comparison of EARH and NMEARH, we can observe that the system using EARH starts up fewer active hosts than NMEARH, especially when the workload changes from heavy to light. For example, during the time span from 5,000 to 10,000, far fewer tasks are submitted to the system than in the previous period; the AHC of EARH decreases sharply while that of NMEARH still keeps a relatively high value. The superiority of EARH in this situation can be attributed to the VM migration technique, which consolidates the existing VMs onto a few active hosts so that the idle active hosts can be turned off to save energy when fewer tasks are submitted. The comparison of EARH and NRHEARH indicates that the rolling-horizon policy can reduce the demand for active hosts when large amounts of tasks surge into the system. This is because the rolling-horizon policy makes the tasks with tight deadlines execute earlier and postpones the execution of the tasks with loose deadlines. As a result, the resource demand at the timestamps when large amounts of tasks are submitted becomes smaller, and the AHC of EARH is lower than that of NRHEARH. For the other four algorithms, the AHC of ProRS is similar to that of EARH; Greedy-R starts up more active hosts when large amounts of tasks surge into the system; Greedy-P cannot turn off the active hosts in time when fewer tasks are submitted; and the AHC of FCFS fluctuates with the change of task count.
As mentioned above, the information about tasks’
deadlines is not contained in the Google tracelogs. We
designate the deadline through the ratio of response time
over execution time based on the realistic analysis in
previous experiments. For the purpose of demonstrating
the performance impact of deadline under the Google
tracelogs, we vary the value of βfrom 1 to 5. Fig. 7
shows the experimental results.
It is observed from Fig. 7(a) that the guarantee ratios of the eight algorithms increase with β. Moreover, EARH and NMEARH achieve the highest guarantee ratios, which can be attributed to the rolling-horizon policy they employ. This policy proves especially valuable for tasks with tight deadlines, e.g., β = 1. From Fig. 7(b), we can see that the energy consumption of the eight algorithms decreases as β grows. This is because when deadlines are tight, the system has to start up more hosts to maintain a high guarantee ratio and thus consumes more energy. Also, EARH and NRHEARH consume less energy than the others owing to the VM migration policy, which efficiently utilizes the resources of active hosts. Fig. 7(c) shows that the ECTs of NMEARH, NMRHEARH, ProRS, Greedy-R, Greedy-P and FCFS decrease with the increase of β. This is because, when β is small, these algorithms start up more hosts and thus incur poor resource utilization. In addition, the descending trend of EARH is less obvious than those of the other algorithms; EARH employs both the rolling-horizon policy and the VM migration policy, which keep resource utilization high even when β is small. The advantage of the VM migration policy is demonstrated again in Fig. 7(d): EARH and NRHEARH achieve higher resource utilization than the other algorithms under all five values of β. On average, EARH outperforms NMEARH, NRHEARH, and NMRHEARH by 75.3%, 8.1%, and 76.3%, respectively; for ProRS, Greedy-R, Greedy-P and FCFS, the average improvement of EARH is 23.4%, 26.0%, 40.2%, and 24.6%, respectively.
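The metrics discussed above can be computed from a simulation log roughly as below; the record fields and the reading of ECT as energy consumed per guaranteed task are our assumptions for illustration:

```python
def guarantee_ratio(tasks):
    """Fraction of submitted tasks whose deadlines were met."""
    met = sum(1 for t in tasks if t["finished"] and t["finish_time"] <= t["deadline"])
    return met / len(tasks)

def energy_per_task(total_energy, tasks):
    """Total energy divided by the number of guaranteed tasks
    (our assumed reading of the ECT metric)."""
    met = sum(1 for t in tasks if t["finished"] and t["finish_time"] <= t["deadline"])
    return total_energy / met if met else float("inf")

# Hypothetical simulation records: one dict per submitted task.
tasks = [
    {"finished": True, "finish_time": 9.0, "deadline": 10.0},   # met
    {"finished": True, "finish_time": 12.0, "deadline": 10.0},  # missed
    {"finished": False, "finish_time": None, "deadline": 8.0},  # rejected
]
print(guarantee_ratio(tasks))        # one of three tasks met its deadline
print(energy_per_task(300.0, tasks))
```

Under this reading, starting more hosts for the same task set raises total energy and hence ECT, matching the trends in Fig. 7(c).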
6 CONCLUSIONS AND FUTURE WORK
In this paper, we investigated the problem of energy-
aware scheduling for independent, aperiodic real-time
tasks in virtualized clouds. The scheduling objectives
are to improve the system’s schedulability for real-time
tasks and save energy. To achieve the objectives, we
employed the virtualization technique and a rolling-horizon optimization scheme. First, we proposed a rolling-horizon scheduling architecture, and then built and analyzed a task-oriented energy consumption model. On this basis, we presented a novel energy-aware scheduling algorithm named EARH for real-time tasks, in which a rolling-horizon policy is used to enhance the system's schedulability. Additionally, the resource scaling-up and scaling-down strategies were developed and integrated into EARH; they flexibly adjust the scale of active hosts so as to meet the tasks' real-time requirements and save energy.
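The interplay between the rolling-horizon policy and the scaling strategies can be illustrated by the sketch below; the classes, the EDF-style ordering, and the schedulability test are simplified assumptions rather than the actual EARH implementation:

```python
class Task:
    def __init__(self, name, length, deadline):
        self.name, self.length, self.deadline = name, length, deadline

class Host:
    def __init__(self, capacity):
        self.capacity = capacity   # CPU cycles deliverable per time unit
        self.busy_until = 0.0      # earliest time a new task can start

def can_meet(task, host, now):
    start = max(now, host.busy_until)
    return start + task.length / host.capacity <= task.deadline

def rolling_horizon_step(waiting, hosts, now, spare_hosts):
    """One scheduling round: all not-yet-started tasks are re-planned
    together (the rolling-horizon idea), scaling up with a spare host
    when no active host can meet a task's deadline."""
    waiting.sort(key=lambda t: t.deadline)  # urgent deadlines first
    rejected = []
    for task in waiting:
        host = next((h for h in hosts if can_meet(task, h, now)), None)
        if host is None and spare_hosts:
            host = spare_hosts.pop()        # resource scaling-up
            hosts.append(host)
        if host is None or not can_meet(task, host, now):
            rejected.append(task)           # deadline cannot be guaranteed
        else:
            host.busy_until = max(now, host.busy_until) + task.length / host.capacity
    return rejected
```

A complementary scaling-down pass (not shown) would migrate VMs off under-utilized hosts and power those hosts down, which is where the energy saving comes from.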
The EARH algorithm is the first of its kind reported in the literature; it comprehensively addresses the issues of real-time schedulability, elasticity, and energy saving. To evaluate the effectiveness of EARH, we conducted extensive simulation experiments comparing it with other algorithms. The experimental results indicate that EARH efficiently improves scheduling quality over the competing algorithms under different workloads and is well suited to energy-aware scheduling in virtualized clouds.
The following issues will be addressed in our future
studies. First, we will apply vertical scaling of VMs
in terms of CPU in our energy-aware model, i.e., the
maximum amount of CPU cycles assigned to a VM that
runs a task can be updated dynamically. Second, we plan
to implement the EARH in a real cloud environment to
test its performance.
REFERENCES
[1] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, "Cloud computing and emerging IT platforms: vision, hype, and reality for delivering computing as the 5th utility," Future Generation Computer Systems, vol. 25, no. 6, pp. 599-616, 2009.
[2] A. V. Dastjerdi, S. G. H. Tabatabaei, and R. Buyya, “A
dependency-aware ontology-based approach for deploy-
ing service level agreement monitoring services in cloud,”
Software-Practice and Experience, vol. 42, pp. 501-518, 2012.
[3] A. Beloglazov, J. Abawajy, and R. Buyya, “Energy-aware
resource allocation heuristics for efficient management
of data centers for cloud computing,” Future Generation
Computer Systems, vol. 28, pp. 755-768, 2012.
[4] X. Zhu and P. Lu, “Study of scheduling for processing real-
time communication signals on heterogeneous clusters,”
Proc. 9th Intl Symp. Parallel Architectures, Algorithms, and
Networks (I-SPAN ’08), pp. 121-126, May 2008.
[5] J. G. Koomey, “Estimating total power consumption by
servers in the U.S. and the world,” Lawrence Berkeley
National Laboratory, Stanford University, 2007.
[6] W. Feng, “Making a case for efficient supercomputing,”
ACM Queue, vol. 1, no. 7, pp. 54-64, 2003.
[7] H. Cademartori, Green Computing Beyond the Data Center, 2007. Available: http://www.powersavesoftware.com/Download/PS_WP_GreenComputing_EN.pdf
[8] L. A. Barroso, U. Hölzle, "The datacenter as a computer: an
introduction to the design of warehouse-scale machines,”
Synthesis Lectures on Computer Architecture, vol. 4, no. 1, pp.
1-108, 2009.
[9] X. Fan, W. D. Weber, L. A. Barroso, “Power provisioning
for a warehouse-sized computer," ACM SIGARCH Computer Architecture News, vol. 35, no. 2, pp. 13-23, 2007.
[10] Y.-K. Kwok, I. Ahmad, “Static scheduling algorithms for
allocating directed task graphs to multiprocessors," ACM Computing Surveys, vol. 31, no. 4, pp. 406-471, 1999.
[11] X. Qin and H. Jiang, “A novel fault-tolerant scheduling
algorithm for precedence constrained tasks in real-time
heterogeneous systems,” J. Parallel Computing, vol. 32, no.
5, pp. 331-356, 2006.
[12] X. Zhu, C. He, K. Li, and X. Qin, “Adaptive energy-
efficient scheduling for real-time tasks on DVS-enabled
heterogeneous clusters,” J. Parallel and Distributed Comput-
ing, vol. 72, pp. 751-763, 2012.
[13] S. Zikos and H. D. Karatza, “Performance and energy
aware cluster-level scheduling of compute-intensive jobs
with unknown service times,” Simulation Modelling Practice
and Theory, vol. 19, no. 1, pp. 239-250, 2011.
[14] J. S. Chase, D. C. Anderson, P. N. Thakar, A. M. Vahdat,
and R. P. Doyle, “Managing energy and server resources in
hosting centers,” Proc. 18th ACM Symp. Operating Systems
Principles (SOSP ’01), pp. 103-116, Oct. 2001.
[15] R. Ge, X. Feng, and K. W. Cameron, “Performance-
constrained distributed DVS scheduling for scientific ap-
plications on power-aware clusters,” Proc. ACM/IEEE con-
ference on Supercomputing (SC ’05), pp. 34-44, Nov. 2005.
[16] K. H. Kim, R. Buyya, J. Kim, “Power-aware scheduling
of bag-of-tasks applications with deadline constraints on
DVS-enabled clusters,” Proc. 7th IEEE/ACM Int’l Symp.
Cluster Computing and the Grid (CCGrid ’07), pp. 541-548,
May 2007.
[17] V. Nélis, J. Goossens, R. Devillers, D. Milojevic, and N.
Navet, “Power-aware real-time scheduling upon identical
multiprocessor platforms,” Proc. 2008 IEEE Int’l Conf. Sen-
sor Networks, Ubiquitous, and Trustworthy Computing (SUTC
’08), pp. 209-216, Jun. 2008.
[18] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph. R. H. Katz,
A. Konwinski, G. Lee, D. A. Patterson, A. Rabkin, I. Stoica,
and M. Zaharia, “Above the clouds: a Berkeley view of
cloud computing,” Technical Report UCB/EECS-2009-28, UC
Berkeley, 2009.
[19] L. Liu, H. Wang, X. Liu, X. Jin, W. He, Q. Wang, and
Y. Chen, “GreenCloud: a new architecture for green data
center,” Proc. 6th Int’l Conf. High Performance Distributed
Computing (HPDC ’08), pp. 29-38, Jun. 2008.
[20] V. Petrucci, O. Loques, and D. Mossé, "A dynamic configuration model for power-efficient virtualized server clusters," Proc. 11th Brazilian Workshop on Real-Time and Embedded Systems (WTR '09), May 2009.
[21] J. Bi, Z. Zhu, R. Tian, and Q. Wang, "Dynamic provisioning modeling for virtualized multi-tier applications in cloud data center," Proc. 3rd IEEE Int'l Conf. Autonomic Computing (ICAC '06), pp. 15-24, Jun. 2006.
[22] A. Verma, P. Ahuja, and A. Neogi, “pMapper: power and
migration cost aware application placement in virtualized
systems,” Proc. 9th ACM/IFIP/USENIX Int’l Conf. Middle-
ware (Middleware ’08), pp. 243-264, Dec. 2008.
[23] ´
I. Goiri, J. L. Berral, J. O. Fit´
o, F. Juli`
a, R. Nou, J. Guitart, R.
Gavald`
a, and J. Torres, “ Energy-efficient and multifaceted
resource management for profit-driven virtualized data
centers,” Future Generation Computer Systems, vol. 28, pp.
718-731, 2012.
[24] X. Wang, Z. Du, and Y. Chen, "An adaptive model-free resource and power management approach for multi-tier cloud environments," The Journal of Systems and Software, vol. 85, pp. 1135-1146, 2012.
[25] P. Graubner, M. Schmidt, and B. Freisleben, “Energy-
efficient management of virtual machines in Eucalyptus,”
Proc. IEEE 4th Int’l Conf. Cloud Computing (CLOUD ’11),
pp. 243-250, 2011.
[26] L. Yan, J. Luo, and N. K. Jha, “Joint dynamic volt-
age scaling and adaptive body biasing for heterogeneous
distributed real-time embedded systems,” IEEE Trans.
Computer-Aided Design of Integrated Circuits and Systems,
vol. 24, no. 7, pp. 1030-1041, 2005.
[27] A. Beloglazov and R. Buyya, "Optimal online deterministic algorithms and adaptive heuristics for energy and performance efficient dynamic consolidation of virtual machines in cloud data centers," Concurrency and Computation: Practice and Experience, vol. 24, pp. 1397-1420, 2012.
[28] L. Wu, S. K. Garg, and R. Buyya, "SLA-based admission control for a software-as-a-service provider in cloud computing environments," Journal of Computer and System Sciences, vol. 78, no. 5, pp. 1280-1299, 2012.
[29] J. O. Gutierrez and K. M. Sim, “A family of heuristics for
agent-based elastic cloud bag-of-tasks concurrent schedul-
ing,” Future Generation Computer Systems, vol. 29, no. 7, pp.
1682-1699, 2013.
[30] R. N. Calheiros, R. Ranjan, A. Beloglazov, C. A. F. D.
Rose, and R. Buyya, “CloudSim: a toolkit for modeling
and simulation of cloud computing environments and
evaluation of resource provisioning algorithms,” Software:
Practice and Experience, vol. 41, no. 1, pp. 23-50, 2011.
[31] Google Cluster Data V2: http://code.google.com/p/googleclusterdata/wiki/ClusterData2011_1.
[32] C. Reiss, J. Wilkes, and J. Hellerstein, "Google cluster-usage traces: format + schema," Google Inc., White Paper, 2011.
[33] J. Dean and S. Ghemawat, "MapReduce: simplified data processing on large clusters," Communications of the ACM, vol. 51, no. 1, pp. 107-113, 2008.
[34] I. S. Moreno, P. Garraghan, P. Townend, and J. Xu, "An approach for characterizing workloads in Google cloud to derive realistic resource utilization models," Proc. IEEE 7th Int'l Symp. Service-Oriented System Engineering (SOSE '13), pp. 49-60, 2013.
Xiaomin Zhu received the B.S. and M.S. de-
grees in computer science from Liaoning Tech-
nical University, Liaoning, China, in 2001 and
2004, respectively, and Ph.D. degree in com-
puter science from Fudan University, Shanghai,
China, in 2009, and in the same year was honored as a Shanghai Excellent Graduate. He is currently an
associate professor in the College of Information
Systems and Management at National Univer-
sity of Defense Technology, Changsha, China.
His research interests include scheduling and
resource management in green computing, cluster computing, cloud
computing, and multiple satellites. He has published more than 50
research articles in refereed journals and conference proceedings such
as IEEE TC, IEEE TPDS, JPDC, JSS and so on. He is also a frequent
reviewer for international research journals, e.g., IEEE TC, IEEE TNSM,
IEEE TSP, JPDC, etc. He is a member of the IEEE, the IEEE Communi-
cation Society, and the ACM.
Laurence T. Yang's research fields include networking, high performance computing, embedded systems, ubiquitous computing and intelligence. He has published around 300 papers (including around 80 journal papers, e.g., in IEEE and ACM Transactions) in refereed journals, conference proceedings and book chapters in these areas. He has been involved in more than 100 conferences and workshops as a program/general/steering conference chair and in more than 300 conferences and workshops as a program committee member. He is currently the chair of the IEEE Technical Committee on Scalable Computing (TCSC), the chair of the IEEE Task Force on Ubiquitous Computing and Intelligence, and the co-chair of the IEEE Task Force on Autonomic and Trusted Computing. He also serves on the executive committees of the IEEE Technical Committee on Self-Organization and Cybernetics for Informatics, of IFIP Working Group 10.2 on Embedded Systems, and of the IEEE Technical Committee on Granular Computing.
Huangke Chen received the B.S. degree in
information systems from National University of
Defense Technology, China, in 2008. Currently,
he is a M.S. student in the College of Information
System and Management at National University
of Defense Technology. His research interests
include cloud computing and green computing.
Ji Wang received the B.S. degree in informa-
tion systems from National University of De-
fense Technology, China, in 2008. Currently, he
is a M.S. student in the College of Information
System and Management at National University
of Defense Technology. His research interests
include real-time systems, fault-tolerance, and
cloud computing.
Shu Yin received his Ph.D. degree in the De-
partment of Computer Science and Software
Engineering at Auburn University in 2012. Cur-
rently, he is an assistant professor in the School
of Information Science and Engineering at Hu-
nan University, China. His research interests in-
clude storage systems, reliability modeling, fault
tolerance, energy-efficient computing, and wire-
less communications.
Xiaocheng Liu received the Ph.D. degree from
the National University of Defense Technology
in 2012. He is currently an assistant professor
in the College of Information Systems and Management at National University of Defense Technology, Changsha, China. His recent research interests include resource allocation in the cloud, simulation-based training, and component-based modeling.