On the Extension of Traditional Resource Allocation
Algorithms in LTE-A to joint UL-DL Scheduling
with FDD Carrier Aggregation
Abdulziz M. Ghaleb, Elias Yaacoub
Qatar Mobility Innovations Center, Doha, Qatar
Email: {aghaleb, eliasy}@qmic.com
Ayad Atiyah Abdulkafi
College of Engineering, Tikrit University, Iraq
Email: al.ayad@yahoo.com
Abstract—Throughput and spectral efficiency maximization are two of the most challenging issues to be addressed by current and future cellular networks. To achieve these goals, however, radio resources need to be utilized in a coordinated manner to provide better network performance and ensure adequate user quality of service (QoS). The radio resources are controlled by the LTE base station (eNodeB) and can be scheduled either independently or jointly, by considering both the uplink (UL) and the downlink (DL). This paper addresses joint UL-DL scheduling in carrier aggregation (CA)-capable LTE-Advanced networks. This work provides insights into the implementation of the traditional scheduling algorithms for joint UL-DL scheduling in LTE-A utilizing the functionality of CA. We present preliminary results from a MATLAB-based IS-Wireless system level simulator that confirm the importance of fair sharing of the resources between the UL and DL channels while maintaining good QoS. Furthermore, simulation results show that the resource utilization and spectral efficiency can be significantly improved by dynamically changing the bandwidth portion allocated to the UL and DL. Finally, we also show that there is a significant improvement in both UL and DL data rates using the proposed CA and joint scheduling schemes.
Keywords—traditional scheduling algorithms; carrier aggregation; joint scheduling; resource allocation; LTE-A
I. INTRODUCTION
Scheduling is a key functionality of the radio protocol
stack, and it is performed by the MAC scheduler in the
eNodeB. The scheduler is responsible for allocating the radio
resources in both directions (DL and UL) while considering
the QoS requirements for all active radio flows (bearers). This
is achieved by allocating the available radio resource blocks
(RBs) to specific User Equipments (UEs) within the sector
or the cell for the transmission and reception of variable-size
Transport Blocks (TBs). The scheduler runs every subframe, or Transmission Time Interval (TTI), and allocates the RBs to the UEs in the DL and UL. A single TB may be allocated to a UE per TTI [1]. The scheduling algorithms have a significant
impact on the performance of the individual eNodeB and
overall LTE network.
CA is an enhancement added in the LTE-A system by which carriers or bands are aggregated together, so that users can be scheduled on contiguous or non-contiguous component carriers. The feature was primarily added to LTE in order to provide substantial improvements to meet the IMT-Advanced requirements for 4th Generation (4G) networks [2].
The need for joint UL-DL scheduling is justified by the fact that the evolution of 4G wireless communications has paved the way for interactive, bidirectional services. As a result, the traffic mix contains both symmetric flows (bidirectional applications such as online gaming, BitTorrent, and video conferencing) and asymmetric flows (e.g., web traffic) [3-5]. This type of resource allocation has not been sufficiently investigated in the literature, and there is no concrete framework defined for joint UL-DL scheduling in LTE/LTE-A with carrier aggregation.
The existing work on LTE/LTE-A scheduling is mostly limited to the UL or the DL separately. However, there are some
works on the joint UL-DL scheduling in other wireless
technologies. For example, the authors of [5] presented an
opportunistic joint UL-DL scheduling scheme for Wireless
Local Area Networks (WLANs) and focused on the unfair-
ness suffered by WLAN UL [6]. However, there exist some
differences in the access mechanisms between the WLANs and
LTE. The authors of [7] provided a resource allocation model
for the joint LTE/LTE-A system in DL only and considered CA
and the backward compatibility with conventional LTE users.
However, the authors focused on the fairness and neglected the
QoS provisioning which is a key element in the 4G network.
Furthermore, UL-DL joint scheduling was not addressed. A
lot of work has been done in joint cooperative scheduling
between the cells, either in UL or DL but not joint UL-
DL. The authors of [8] introduced cooperative scheduling for
jointly performing resource allocation for various UEs attached
to different cooperating eNodeBs for dynamic interference
coordination in the 3GPP LTE UL. Similar work has been
presented in [9] to address the joint scheduling and resource
allocation in UL orthogonal frequency division multiplexing
(OFDM) networks using the existing gradient-based schedul-
ing framework. The joint resource allocation in orthogonal
frequency division multiple access (OFDMA) systems was
addressed in [4, 10-13].
In this paper, we present a comparison of the implementation of the traditional resource allocation algorithms, namely Round Robin (RR), Maximum signal-to-interference-plus-noise ratio (MaxSINR), and Proportional Fair (PF), for QoS-aware joint UL-DL scheduling in FDD LTE-A utilizing the functionality of CA. The framework presented enforces fair sharing of the resources between the UL and DL channels while maintaining good QoS. The performance of the three traditional resource allocation methods under this joint implementation is compared and analyzed.
The rest of the paper is organized as follows. Section II provides a detailed description of the joint UL-DL model used in the simulation and gives an overview of how the traditional scheduling methods are used in our joint schemes. The simulation setup and the performance results of the traditional resource allocation algorithms applied to the proposed method are presented in Section III. Finally, conclusions are drawn in Section IV.
II. MODEL DESCRIPTION
For the FDD LTE air interface, resource allocation in each direction implies allocating a portion of the bandwidth, expressed as a number of RBs. In this paper, we develop a joint UL-DL algorithm based on a utility function that decides whether to give more resources (bandwidth) to the DL or to the UL, taking into account the QoS Class Identifier (QCI) matrix, the load matrix, fairness, and sum-rate maximization. Each user is then assigned a number of RBs in each direction based on the QCIs of its streams, its load, and its SINR.
CA is utilized by adding a third carrier, beside the UL and DL carriers, to be shared by both directions. Let BW be the total bandwidth (in RBs), and let BW_{UL}, BW_{DL}, and BW_{sh} be the UL, DL, and shared bandwidths, respectively. Each UE has a number of traffic flows (streams), and the QoS of each flow is defined by its QCI, which is treated as a packet filter. Let N denote the number of UEs, UE_1 to UE_N. The traffic flow i of UE j, j \in \{1, 2, \ldots, N\}, is characterized by a QCI denoted qci^{j}_{i} and a load l^{j}_{i}, i \in \{1, 2, \ldots, 9\}. The load and QCI matrices for the UL (L_{UL} and QCI_{UL}) and the DL (L_{DL} and QCI_{DL}) are defined in (1)-(4). The weights of the UL and DL shares of the shared bandwidth, denoted W_{UL} and W_{DL}, are calculated using (5) and (6), where \odot denotes the element-wise product of the two matrices, summed over all entries.
L_{UL} = \begin{bmatrix} l^{1}_{1,UL} & \cdots & l^{N}_{1,UL} \\ \vdots & \ddots & \vdots \\ l^{1}_{9,UL} & \cdots & l^{N}_{9,UL} \end{bmatrix}    (1)

QCI_{UL} = \begin{bmatrix} qci^{1}_{1,UL} & \cdots & qci^{N}_{1,UL} \\ \vdots & \ddots & \vdots \\ qci^{1}_{9,UL} & \cdots & qci^{N}_{9,UL} \end{bmatrix}    (2)

L_{DL} = \begin{bmatrix} l^{1}_{1,DL} & \cdots & l^{N}_{1,DL} \\ \vdots & \ddots & \vdots \\ l^{1}_{9,DL} & \cdots & l^{N}_{9,DL} \end{bmatrix}    (3)

QCI_{DL} = \begin{bmatrix} qci^{1}_{1,DL} & \cdots & qci^{N}_{1,DL} \\ \vdots & \ddots & \vdots \\ qci^{1}_{9,DL} & \cdots & qci^{N}_{9,DL} \end{bmatrix}    (4)

W_{UL} = \sum QCI_{UL} \odot L_{UL} = \sum_{j=1}^{N} \sum_{i=1}^{9} qci^{j}_{i,UL} \times l^{j}_{i,UL}    (5)

W_{DL} = \sum QCI_{DL} \odot L_{DL} = \sum_{j=1}^{N} \sum_{i=1}^{9} qci^{j}_{i,DL} \times l^{j}_{i,DL}    (6)
The traffic load l^{j}_{i,UL} of the jth UE in the UL, with QCI qci^{j}_{i,UL}, is defined as the traffic (in bytes) requested by the jth UE divided by the total traffic of the same type requested by all UEs, multiplied by the number of UEs requesting this type of traffic. Similarly, the traffic load l^{j}_{i,DL} of the jth UE in the DL, with QCI qci^{j}_{i,DL}, is defined as the traffic (in bytes) received at the eNodeB for transmission to the jth UE divided by the total received traffic with the same QCI, multiplied by the number of UEs carrying the same type of traffic. The bandwidth portions allocated to the UL and DL from BW_{sh} are calculated as in (7) and (8), respectively.

BW^{UL}_{sh} = \frac{W_{UL}}{W_{UL} + W_{DL}} \times BW_{sh}    (7)

BW^{DL}_{sh} = \frac{W_{DL}}{W_{UL} + W_{DL}} \times BW_{sh}    (8)
A. Round Robin
This scheduling method allocates an equal amount of resources to each user, which is fair in terms of resources but not in terms of user throughput. It is based on the idea of being fair in the long term by assigning an equal number of RBs to all active users, without taking the channel conditions, i.e., the Channel Quality Indicator (CQI) feedback, into account. Hence, this technique does not satisfy the users' QoS requirements and lacks resource optimization [14]. The joint RR scheduling method takes BW^{DL}_{sh} and BW^{UL}_{sh} into consideration and assigns the RBs in each direction accordingly. The scheduler then performs RR scheduling among the users in each direction, starting from the highest-priority user and continuing to the next priority, examining all queues in turn.
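A minimal sketch of the per-direction RR pass, assuming each direction's RB budget (its dedicated RBs plus its share of the CA carrier from (7)-(8)) is handed to a plain round-robin loop; the names and the example budget are illustrative, not taken from the simulator.

```python
from collections import deque

def round_robin_assign(ue_ids, num_rbs):
    """Hand out num_rbs one at a time, cycling over the active UEs."""
    allocation = {ue: 0 for ue in ue_ids}
    queue = deque(ue_ids)           # priority order assumed given by the caller
    for _ in range(num_rbs):
        ue = queue.popleft()
        allocation[ue] += 1
        queue.append(ue)            # rotate: back of the queue after each RB
    return allocation

# Example: 3 UEs share the dedicated UL RBs plus the UL portion of the shared carrier.
print(round_robin_assign(["UE1", "UE2", "UE3"], num_rbs=15 + 7))
```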
B. Maximum SINR
The ordinary MaxSINR scheduling algorithm assigns the RBs to the users based on their channel quality, taken from the CQI feedback reported to the eNodeB; the user with the higher CQI level has priority to use the channel resources. This algorithm targets the optimal cell throughput by assigning each RB to the user with the highest CQI. Hence, it enhances the data rate of this user and the overall cell data rate, at the expense of fairness among the users [14]. Similar to joint RR, joint MaxSINR first divides the resources between the UL and DL based on BW^{DL}_{sh} and BW^{UL}_{sh} and then assigns the RBs to each user in each direction according to its mechanism.
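A minimal per-direction MaxSINR pass could look like the sketch below, which simply gives every RB to the UE reporting the best CQI on it; the per-RB CQI matrix is an assumed input format, not the simulator's interface.

```python
import numpy as np

def max_sinr_assign(cqi):
    """cqi: num_rbs x num_ues matrix of reported CQI (or SINR) per RB.
    Returns, for each RB index, the index of the UE that receives it."""
    return np.argmax(cqi, axis=1)

# Example: 4 RBs, 3 UEs; ties go to the lowest UE index (numpy convention).
cqi = np.array([[7, 12, 9],
                [15, 3, 10],
                [6, 6, 11],
                [8, 14, 2]])
print(max_sinr_assign(cqi))   # -> [1 0 2 1]
```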
C. Proportional Fair
Although there are many versions of the PF algorithm, the joint PF scheduling algorithm adopted in this work was introduced in [15]. The RBs are assigned to the UEs with the best relative channel quality, i.e., a combination of CQI, load, and the desired level of fairness, in order to strike a balance between maximizing the cell throughput and achieving the fairness needed to meet the minimum QoS required by the users. The weight of each UE stream (in the UL and DL) is calculated using (9) and (10), and the RBs are allocated to the users according to (11) and (12), respectively.

w_{UE^{j}_{i,UL}} = \rho^{j}_{UL} \times qci^{j}_{i,UL} \times ql^{j}_{i,UL}    (9)

w_{UE^{j}_{i,DL}} = \rho^{j}_{DL} \times qci^{j}_{i,DL} \times ql^{j}_{i,DL}    (10)

RBs^{j}_{i,UL} = \left\lfloor \frac{w_{UE^{j}_{i,UL}} \times BW^{UL}_{sh}}{\sum_{j=1}^{N} \sum_{i=1}^{9} w_{UE^{j}_{i,UL}}} \right\rfloor    (11)

RBs^{j}_{i,DL} = \left\lfloor \frac{w_{UE^{j}_{i,DL}} \times BW^{DL}_{sh}}{\sum_{j=1}^{N} \sum_{i=1}^{9} w_{UE^{j}_{i,DL}}} \right\rfloor    (12)
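Assuming \rho is a per-UE channel-quality factor and ql a per-stream queue load, the weight-and-floor allocation of (9)-(12) for one direction could be sketched in Python as follows; the zero-weight corner case is an added assumption.

```python
import numpy as np

def pf_joint_allocate(rho, qci, ql, bw_sh_rbs):
    """Per-stream RB allocation for one direction, following (9)-(12).

    rho : length-N array of per-UE channel-quality factors (assumed meaning)
    qci : 9 x N array of QCI-derived weights
    ql  : 9 x N array of per-stream queue loads
    Returns a 9 x N integer array of RB counts (floored as in (11)-(12)).
    """
    rho = np.asarray(rho, dtype=float)
    weights = rho[np.newaxis, :] * qci * ql          # (9)/(10): one weight per stream
    total = weights.sum()
    if total == 0.0:                                 # nothing to schedule
        return np.zeros_like(weights, dtype=int)
    return np.floor(weights * bw_sh_rbs / total).astype(int)   # (11)/(12)
```

Because of the floor operation in (11)-(12), a few RBs may remain unassigned; how such leftover RBs are redistributed is left to the scheduler implementation.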
The flow of the algorithm used in our work for joint scheduling is shown in Algorithm 1. Choosing the number of TTIs (N_TTI) after which the allocation is re-evaluated, which determines how adaptive the algorithm is, depends on the resource fluctuations and on the nature of the traffic, whether constant bit rate or bursty. Simple learning algorithms can be added to determine the optimal value of N_TTI for each case.
Algorithm 1 Joint UL-DL algorithm
Require: Number of users, N
         QoS matrices QCI_UL and QCI_DL
         Load matrices L_UL and L_DL
1: Check the number of attached UEs, N.
2: Read the measurements from the UEs.
3: Read the requirements of the UEs (L_UL, L_DL, QCI_UL, and QCI_DL).
4: Compare BW_UL and BW_DL with the users' requirements.
5: if (BW_UL and BW_DL) are sufficient then
6:   Perform traditional scheduling.
7: else
8:   Calculate W_UL and W_DL.
9:   Calculate BW^UL_sh and BW^DL_sh.
10:  if Scheduler == MaxSINR then
11:    Perform MaxSINR for BW^UL_sh and BW^DL_sh.
12:  else if Scheduler == RR then
13:    Perform RR for BW^UL_sh and BW^DL_sh.
14:  else if Scheduler == PF then
15:    Read the users' weights.
16:    Perform PF for BW^UL_sh and BW^DL_sh based on (11) and (12).
17:  end if
18: end if
19: Check again every N_TTI TTIs.
20: Go back to Line 1.
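The decision flow of Algorithm 1 might be expressed as the Python sketch below, reusing the split_shared_bandwidth helper sketched in connection with (7)-(8); the callback signatures, the interpretation of the shared portions as being added on top of the dedicated carriers, and the re-evaluation structure are assumptions rather than the simulator's actual code.

```python
def joint_ul_dl_schedule(read_state, is_sufficient, split_fn, schedulers,
                         scheduler_name, bw_ul, bw_dl, bw_sh):
    """One re-evaluation of Algorithm 1, to be invoked every N_TTI TTIs.

    read_state()                    -> (QCI_UL, L_UL, QCI_DL, L_DL) of attached UEs
    is_sufficient(bw_ul, bw_dl, st) -> True if the dedicated carriers suffice
    split_fn                        -> e.g. split_shared_bandwidth from Section II
    schedulers                      -> {'RR': fn, 'MaxSINR': fn, 'PF': fn}
    """
    state = read_state()                              # steps 1-3
    if is_sufficient(bw_ul, bw_dl, state):            # steps 4-5
        schedulers[scheduler_name](bw_ul, bw_dl, state)            # step 6
    else:
        qci_ul, l_ul, qci_dl, l_dl = state
        bw_ul_sh, bw_dl_sh = split_fn(qci_ul, l_ul,                # steps 8-9
                                      qci_dl, l_dl, bw_sh)
        # Steps 10-16: run the selected scheduler, here with the shared
        # portions added on top of the dedicated UL/DL carriers.
        schedulers[scheduler_name](bw_ul + bw_ul_sh, bw_dl + bw_dl_sh, state)
    # Steps 19-20: the caller repeats this check every N_TTI TTIs.
```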
III. RESULTS AND DISCUSSION
A. Simulation Setup
A one-cell LTE-A network was considered for the simulation, with three UEs connected to the eNodeB through its LTE air interface in FDD mode. The MATLAB-based IS-Wireless system level simulator was used for the core simulation (building the UEs and the eNodeB, and modelling the channel, path loss, and so on). We integrated the joint versions of the traditional scheduling algorithms into the simulator and developed a MATLAB graphical user interface. The three users are simulated with single streams. Table I lists the default parameters, used unless stated otherwise. The scheduling is performed using a 3 MHz bandwidth for each of the UL, the DL, and the additional carrier used for CA. With a channel bandwidth of 3 MHz, there are 15 usable RBs, as shown in Table II [16]. These RBs are shared by the three UEs in the downlink, uplink, and shared portions of the bandwidth. The simulation setup is shown in Fig. 1.
Fig. 1. Simulation scenario
B. Results
The performance of the joint UL-DL scheduling is evaluated by simulation. As mentioned above, three UEs are simulated with a single eNodeB, and the goal is to show the impact of several parameters on the overall system performance. The performance is compared with independent (non-joint) scheduling, with and without CA.
TABLE I. DEFAULT PARAMETERS

Parameter | Setting
Bearer type | Varies (QCI 1 - QCI 9)
Scheduling request error | Disabled
Scheduling mode | Link adaptation
Scheduling algorithm | Proportional Fair
PDCP compression | Disabled
Path loss model | Free space
Multipath model | 3GPP model
Environment type | Urban
eNodeB height | 20 m
UE height | 1.5 m
Carrier frequency | 2600 MHz
Transmission mode | SISO
Cyclic prefix | Normal (7 symbols per slot)
Duplexing mode | FDD
BLER target | 10^-1
ACK-to-NACK error | Disabled
NACK-to-ACK error | Disabled
Antenna type | Omnidirectional
eNodeB max. Tx power | 40 W / 46 dBm
UE max. Tx power | 200 mW / 23 dBm
Modulation and coding scheme | Adaptive modulation & coding
Channel bandwidth (MHz) | 3 DL, 3 UL, 3 joint (CA)
TABLE II. LTE BANDWIDTH AND NUMBER OF RBS

Bandwidth (MHz) | 1.4 | 3  | 5  | 10 | 15 | 20
Usable RBs      | 6   | 15 | 25 | 50 | 75 | 100
LTE comprises traffic classes with different QoS attributes that define the traffic characteristics of services. Bearers are set up between the UE and the core network, and each bearer is associated with QoS attributes such as the QCI and the Guaranteed Bit Rate (GBR). Generally, bearers are classified into GBR bearers and non-GBR bearers. A bearer is admitted or pre-empted according to its allocation and retention priority. Once a bearer is successfully established, the network is able to prioritize the service according to its QCI type. The characteristics of the nine standardized QCIs, with their respective parameters, are shown in Table III [17]. In LTE, the data rate in the DL is generally higher than in the UL [18]. It is affected, in both directions, by the modulation and coding scheme (MCS) order and by the number of users. In our simulation, however, the MCS is adapted based on the SINR. Hence, we do not predict the maximum data rate for the cell; instead, the performance is evaluated in terms of the improvement for a given configuration. The results were obtained with the UE traffic and QoS settings tabulated in Table IV.
TABLE III. LTE STANDARDIZED QCIS

QCI | Resource Type | Priority | Packet Delay Budget | Packet Loss Rate | Example Application
1   | GBR     | 2 | 100 ms | 10^-2 | VoIP
2   | GBR     | 4 | 150 ms | 10^-3 | Video Call
3   | GBR     | 5 | 300 ms | 10^-6 | Streaming
4   | GBR     | 3 | 50 ms  | 10^-3 | Real-Time Gaming
5   | Non-GBR | 1 | 100 ms | 10^-6 | IMS Signaling
6   | Non-GBR | 7 | 100 ms | 10^-3 | Interactive Gaming
7   | Non-GBR | 6 | 300 ms | 10^-6 | TCP Applications
8   | Non-GBR | 8 | 300 ms | 10^-6 | Browsing, Email
9   | Non-GBR | 9 | 300 ms | 10^-6 | Download, etc.
TABLE IV. SETTINGS FOR THE UES IN THE SIMULATION

Direction |        UL          |        DL
UE        | UE 1 | UE 2 | UE 3 | UE 1 | UE 2 | UE 3
QCI       | 1    | 1    | 8    | 1    | 5    | 1
L         | 0.5  | 0.5  | 0.5  | 0.2  | 0.4  | 0.3
The average data rates obtained by each user in the DL and UL using normal and joint RR scheduling are presented in Fig. 2. We can observe that the joint method offers an improvement in the data rate of each user, due to the added radio resources. Joint scheduling also adds some fairness between the UL and DL in terms of data rate. However, the users in each direction still receive the same ratio of resources as with normal scheduling, which does not satisfy the QoS and load requirements of each user. For example, UE2 should have the highest data rate in the DL based on its requirements, which is not the case.
Fig. 2. UL and DL average UEs data rates (RR)
Similarly, the average data rates obtained by each user in the DL and UL using normal and joint MaxSINR scheduling are presented in Fig. 3. It is clear that this method lacks fairness among the users in both the UL and DL, due to the unequal distribution of radio resources among the users. Again, the joint method adds some fairness between the UL and DL by dividing the shared spectrum according to (7) and (8). However, the data rate of each user under joint scheduling also depends on the SINR of that user in the shared spectrum. From Fig. 3, we observe that UE2 has a lower data rate than UE1 and UE3 in the DL, while in the UL, UE3 has a higher data rate than UE1 and UE2.
Fig. 3. UL and DL average UEs data rates (MaxSINR)
The joint PF method performs best, since it considers both the network load and the QoS requirements. The average data rates obtained by each user in the DL and UL using normal and joint PF scheduling are presented in Fig. 4. The scheduler gives UE2 the highest data rate in the DL and gives UE3 a lower data rate than UE1 and UE2 in the UL. This satisfies the requirements of each UE as listed in Table IV.
Fig. 4. UL and DL average UEs data rates (PF)
The cell data rates in the UL and DL are presented in Table V for the three scheduling methods, in their traditional and joint versions. The high data rates of the RR method are explained by the allocation methodology followed by the simulator, since all the resources are allocated in the UL, which is not the case for the other two methods. This behaviour is expected to change when many users compete for the same resources within a single TTI, since fewer RBs will be left unoccupied.
TABLE V. CELL DATA RATES

Scheduling       | UL   | DL   | UL gain | DL gain
Standard RR      | 3.08 | 2.63 |  -      |  -
Joint RR         | 5.12 | 5.66 |  66%    | 115%
Standard MaxSINR | 2.56 | 3.63 |  -      |  -
Joint MaxSINR    | 5    | 6.57 |  95%    |  88%
Standard PF      | 1.22 | 4.05 |  -      |  -
Joint PF         | 3.6  | 6.57 | 195%    |  62%

The cell spectral efficiency (for the UL and DL together) is presented in Fig. 5. We compare the spectral efficiency of the system with the standard scheduling mechanism, with scheduling with CA, and with joint scheduling with CA. Scheduling with CA only was performed by aggregating two 3 MHz bands in both the UL and the DL and using standard scheduling in both directions. The spectral efficiency is not presented separately for the UL and DL, since it cannot easily be extracted for the joint schemes. Joint scheduling increases the spectral efficiency with all methods. CA alone, on the other hand, has no impact on the spectral efficiency, since it increases the data rate simply by adding additional bandwidth.
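Assuming the spectral efficiency in Fig. 5 is computed as the aggregate cell throughput normalized by the total occupied bandwidth, this observation can be expressed compactly as

\eta = \frac{R_{UL} + R_{DL}}{BW_{UL} + BW_{DL} + BW_{sh}}

so adding the CA carrier grows the numerator and denominator together, leaving \eta roughly unchanged, whereas joint scheduling extracts more throughput from the same total bandwidth and therefore raises \eta.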
Fig. 5. Spectral efficiency
IV. CONCLUSION
In this paper, the implementation of traditional scheduling algorithms for joint UL-DL scheduling in LTE-A networks has been addressed. Three resource allocation algorithms, namely RR, MaxSINR, and PF, have been discussed and compared for QoS-aware joint UL-DL scheduling in FDD LTE-A utilizing the functionality of CA. The average data rates obtained by each user in the DL and UL using normal and joint scheduling have been presented for these algorithms. In addition, the system performance in terms of spectral efficiency with the standard scheduling mechanism, scheduling with CA, and joint scheduling with CA has been compared and analyzed. Simulation results show that the performance of joint scheduling is better than that of normal scheduling in all cases. Moreover, the framework presented enforces fair sharing of the resources between the UL and DL channels while maintaining good QoS. Finally, the findings also show that the proposed scheme can greatly improve the throughput and spectral efficiency by dynamically adjusting the bandwidth portion between the UL and DL.
ACKNOWLEDGMENT
This work was made possible by NPRP grant 4-347-2-127
from the Qatar National Research Fund (a member of The
Qatar Foundation). The statements made herein are solely the
responsibility of the authors.
REFERENCES
[1] N. Abu-Ali, A. Taha, M. Salah, and H. Hassanein, “Uplink Scheduling in LTE and LTE-Advanced: Tutorial, Survey and Evaluation Framework,” IEEE Communications Surveys & Tutorials, vol. PP, pp. 1-27, 2013.
[2] R. Ratasuk, D. Tolli, and A. Ghosh, “Carrier Aggregation in LTE-
Advanced,” in IEEE 71st Vehicular Technology Conference (VTC 2010-
Spring), pp. 1-5, 2010.
[3] W. Saad, Z. Dawy, and S. Sharafeddine, “A utility-based algorithm for
joint uplink/downlink scheduling in wireless cellular networks,” Journal
of Network and Computer Applications, vol. 35, pp. 348-356, 2012.
[4] N. Zorba, E. Yaacoub, A. M. El-Hajj, and Z. Dawy, “A modified joint
uplink-downlink opportunistic scheduling for Quality of Service guaran-
tees,” in IEEE International Conference on Communications Workshops
(ICC), pp. 1000-1004, 2013.
[5] J. Yoo, H. Luo, and C.-k. Kim, “Joint uplink/downlink opportunistic
scheduling for Wi-Fi WLANs,” Computer Communications, vol. 31, pp.
3372-3383, 2008.
[6] S. N. and V. Sharma, “A Joint Uplink/Downlink Opportunistic Scheduling Scheme for Infrastructure WLANs,” CoRR, October 2013.
[7] L.-x. Lin, Y.-a. Liu, F. Liu, G. Xie, K.-m. Liu, and X.-y. Ge, “Resource scheduling in downlink LTE-advanced system with carrier aggregation,” The Journal of China Universities of Posts and Telecommunications, vol. 19, pp. 44-123, 2012.
[8] P. Frank, A. Muller, H. Droste, and J. Speidel, “Cooperative interference-
aware joint scheduling for the 3GPP LTE uplink,” in IEEE 21st Interna-
tional Symposium on Personal Indoor and Mobile Radio Communica-
tions (PIMRC), pp. 2216-2221, 2010.
[9] H. Jianwei, V. G. Subramanian, R. Agrawal, and R. Berry, “Joint
scheduling and resource allocation in uplink OFDM systems for broad-
band wireless access networks,” IEEE Journal on Selected Areas in
Communications, vol. 27, pp. 226-234, 2009.
[10] K. Sungyeon and L. Jang-Won, “Joint Resource Allocation for Uplink
and Downlink in Wireless Networks: A Case Study with User-Level
Utility Functions,” in IEEE 69th Vehicular Technology Conference (VTC
Spring 2009), pp. 1-5, 2009.
[11] A. M. El-Hajj and Z. Dawy, “On optimized joint uplink/downlink
resource allocation in OFDMA networks,” in IEEE Symposium on
Computers and Communications (ISCC), pp. 248-253, 2011.
[12] S. Jaewoo, J. Hyun-Cheol, and A. Donggun, “Joint Proportional Fair Scheduling for Uplink and Downlink in Wireless Networks,” in IEEE 73rd Vehicular Technology Conference (VTC Spring 2011), pp. 1-4, 2011.
[13] A. M. El-Hajj and Z. Dawy, “Joint dynamic switching point configuration
and resource allocation in TDD-OFDMA networks: optimal formulation
and suboptimal solution,” Transactions on Emerging Telecommunications
Technologies, DOI: 10.1002/ett.2782, 2014.
[14] F. Capozzi, G. Piro, L. A. Grieco, G. Boggia, and P. Camarda, “Downlink Packet Scheduling in LTE Cellular Networks: Key Design Issues and a Survey,” IEEE Communications Surveys and Tutorials, vol. 15, no. 2, pp. 678-700, 2013.
[15] A. M. Ghaleb, E. Yaacoub, and A. Abdulkafi, “QoS-Aware Joint Uplink-Downlink Scheduling in FDD LTE-Advanced with Carrier Aggregation,” in International Symposium on Wireless Communication Systems, 2014.
[16] 3GPP TS 36.213 V11.4.0, “3GPP TSG RAN Evolved Universal Terres-
trial Radio Access (E-UTRA) Physical layer procedures,” Release 11,
2013.
[17] 3GPP TS 23.203 V12.4.0, “Technical Specification Group Services and
System Aspects; Policy and charging control architecture,” March 2014.
[18] A. M. Ghaleb, D. Chieng, A. Ting, A. Abdulkafi, K.-C. Lim, and H.-S.
Lim, “Throughput performance insights of LTE Release 8: Malaysia’s
perspective,” in 9th International Wireless Communications and Mobile
Computing Conference (IWCMC), pp. 258-263, 2013.