Hwang et al. VOL. 4, NO. 2/FEBRUARY 2012/J. OPT. COMMUN. NETW. 99
Generic QoS-Aware Interleaved Dynamic
Bandwidth Allocation in Scalable EPONs
I-Shyan Hwang, Jhong-Yue Lee, K. Robert Lai, and Andrew Tanny Liem
Abstract—The Ethernet passive optical network is regarded
as one of the best solutions for next-generation
optical access. In time-division multiplexing–passive
optical network technology (TDM–PON), the dynamic band-
width allocation (DBA) plays a crucial role in efficiently and
fairly allocating the bandwidth between all users. Moreover,
the quality of service (QoS) is also an essential requirement
to support triple-play services. However, many proposed DBA
mechanisms are still unable to solve the idle period problem
and enhance the differentiated services (DiffServ), which will
decrease the overall system performance. Therefore, this
paper proposes a generic QoS-aware interleaved dynamic
bandwidth allocation (QA-IDBA) mechanism. The QA-IDBA
adaptively operates bi-partitioned interleaved scheduling with
QoS-based predictive limit bandwidth allocation (QP-LBA)
and excess bandwidth reallocation (EBR) with a remaining
bandwidth compensation scheme to eliminate the idle period,
enhance QoS, and effectively reduce high-priority traffic delay
and jitter. We conduct detailed simulation experiments with 16
and 32 optical network units (ONUs) to show the scalability.
Simulation results show that our proposed algorithms can
accommodate the growth of ONUs and achieve better overall
system performance even as the high-priority traffic proportion
increases from 20% to 40% and 60%.
Index Terms—EBR; EPON; QA-IDBA; QP-LBA.
I. INTRODUCTION
Broadband access networks have become increasingly
important due to emerging services, such as IPTV, inter-
active gaming, video conferencing, and video on demand (VoD),
which demand a huge amount of bandwidth. However, in
many communities, last-mile [1] technology represents a major
remaining challenge because it is impractical to provide a
low-cost and high-speed solution for broadband access services
to individual subscribers in remote areas. The Ethernet
passive optical network (EPON) [2] appears to be one of the
best solutions for the broadband access networks due to its
simplicity, cost-effectiveness, and scalability. EPON provides
bi-directional transmissions. In downstream transmission,
the optical line terminal (OLT) has the entire bandwidth
downstream channel to broadcast the control messages and
the data packets through a 1:N passive splitter to each optical
Manuscript received August 18, 2011; revised December 8, 2011; accepted
December 12, 2011; published January 16, 2012 (Doc. ID 153077).
I-Shyan Hwang (e-mail: ishwang@saturn.yzu.edu.tw) is with the Department
of Information Communication, University of Yuan-Ze, Chung-Li, 32003, Taiwan.
Jhong-Yue Lee, K. Robert Lai, and Andrew Tanny Liem are with the
Department of Computer Science and Engineering, University of Yuan-Ze,
Chung-Li, 32003, Taiwan.
Digital Object Identifier 10.1364/JOCN.4.000099
network unit (ONU). In upstream transmission, all ONUs
share a common transmission channel toward the OLT, and
only a single ONU may transmit upstream data in a given
time slot to avoid data collisions [3]. Notably, any data
collisions will cause a longer end-to-end delay and degrade
the system performance. Therefore, a proficient bandwidth
allocation algorithm has become a prominent concern in EPON
research, especially with the huge bandwidth demands and
critical applications.
Bandwidth allocation schemes can be divided into two
categories: fixed bandwidth allocation (FBA) [4] and dynamic
bandwidth allocation (DBA) [5–11]. In the FBA scheme, the
OLT pre-allocates the fixed time slot regardless of the actual
traffic arrival of each ONU; thus the upstream channel will
be occupied even if there is no frame to transmit. This can
result in long delays for all Ethernet frames buffered in the
other ONUs. Conversely, the DBA assigns the bandwidth
dynamically based on the queue state information received
from ONUs. However, in the traditional DBA scheme, the OLT
will begin bandwidth allocation after collecting all REPORT
messages which result in the idle period problem. During the
idle period, the ONUs are unable to transmit data. Therefore,
reducing the idle period in the DBA scheme becomes one of
the important issues to address in order to improve bandwidth
utilization.
Another problem with the DBA scheme is the inconsisten-
cies of the queue state due to the packets that are continuing
to arrive during the waiting time, which potentially leads to
longer delays. To address this problem, predictive schemes
should be considered to account for packet arrivals during
the waiting time and transmission time. Although numerous
queue size prediction mechanisms have been proposed (credit
based [7,8,11], linear based [5,8,9], proportion based [9,12],
weight based [4], and QoS based [6,7,10,13,14]),
these traffic prediction schemes are unable to provide feasible
solutions for differentiated services (DiffServ) and are also
unable to address the queue size inconsistency problem.
Moreover, scheduling schemes that sort REPORT messages
have been proposed in [9,15,16]. Since these scheduling
schemes change the transmission order, they are unable to
guarantee low jitter for high-priority traffic. Furthermore,
QoS-based mechanisms that can guarantee jitter [17] and
fairness [18] have been proposed, but still do not resolve the
idle period problem. Notably, the hybrid double polling algorithm
(DPA) [7] is able to reduce the idle period by
switching between the double-phase polling and interleaved
polling states adaptively. However, the double-phase polling
state is still unable to solve the light-load penalty problem.
1943-0620/12/020099-09/$15.00 © 2012 Optical Society of America

Then, when the idle period for subgroup A is longer than
the upstream period for subgroup B, the interleaved polling
is used to resolve the channel idle period. Nevertheless, the
interleaved polling state will cause unfairness and longer
packet delays. The hybrid DPA is based on limited bandwidth
allocation without DiffServ; thus it is unable to support QoS.
Despite the numerous DBA algorithms proposed for EPON
networks, there is no comprehensive study that can solve the
aforementioned problems.
Therefore, the aim of this work is to design a comprehensive
DBA mechanism, referred to as a generic QoS-aware inter-
leaved dynamic bandwidth allocation (QA-IDBA) mechanism,
to fill a gap in the literature on EPONs. The QA-IDBA
can operate in coordination with adaptively bi-partitioned
interleaved scheduling [19]. Thus, as one group is sending data
to the OLT, the bandwidth for another group is simultaneously
calculated and will adaptively reserve the bandwidth to the
next group. Consequently, the QA-IDBA mechanism not only
eradicates a common problem, idle time, but also resolves the
jitter problem through uninterrupted transmission. Moreover,
this QA-IDBA mechanism is incorporated with QoS-based
predictive limit bandwidth allocation (QP-LBA) that assigns a
linear estimation credit to predict the arrival of traffic during
the waiting time for DiffServ and compare the prediction
index with the minimum guaranteed bandwidth. Strict
priority allocation then guarantees and effectively reduces
the high-priority traffic delay to support QoS; excess
bandwidth reallocation (EBR) collects and reallocates the
excess bandwidth of light-load ONUs for extra demand to
heavy-load ONUs and will ensure prioritized service support
by reservation class selection. Furthermore, any remaining
available bandwidth can be adaptively adjusted between
two groups by the bandwidth compensation scheme. Finally,
simulation results show that our proposed mechanism has
better scalability than the hybrid DPA in terms of the packet
delay, throughput, jitter performance, ratio of packet loss, and
fairness in 16 or 32 ONUs.
The rest of this paper is organized as follows. Section II
presents the proposed generic QoS-aware interleaved DBA
mechanism. Simulation results are given in Section III and are
followed by conclusions in Section IV.
II. PROPOSED GENERIC QOS-AWARE INTERLEAVED
DBA MECHANISM
In this section, we present a comprehensive dynamic
bandwidth allocation mechanism referred to as the QA-IDBA
mechanism, which is shown in Fig. 1. The cycle time T_cycle
is divided by halving the ONUs sequentially: Group one in
cycle n (S_n,1) spans ONU 1 to N/2, and Group two in cycle n
(S_n,2) spans ONU N/2+1 to N. We assume that N is an even
number. The S_n+1,1 upstream transmission period is calculated
in the nth cycle. At the Group two DBA time in the (n+1)th
cycle, the OLT performs the DBA computation for ONUs in
S_n+1,2. At this time, the OLT has already granted the GATE
message to S_n+1,1 in the nth cycle, so that the ONUs in S_n+1,1
can transmit upstream data during what would otherwise be an
idle time while the OLT computes the DBA for S_n+1,2.
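As an illustration, the interleaved overlap described above can be sketched as follows; this is a toy schedule (not the authors' implementation), and the group sizes and cycle count are assumed values:

```python
# Illustrative sketch: the bi-partitioned interleaved schedule lets the OLT
# compute the DBA for one ONU group while the other group transmits, so the
# upstream channel never sits idle between groups.
def partition(n_onus):
    """Split ONUs 1..N into Group one (1..N/2) and Group two (N/2+1..N)."""
    assert n_onus % 2 == 0, "the paper assumes an even number of ONUs"
    half = n_onus // 2
    return list(range(1, half + 1)), list(range(half + 1, n_onus + 1))

def interleaved_timeline(n_cycles):
    """Return (transmitting period, overlapping DBA computation) pairs:
    while S_{n,1} transmits, the OLT computes the grant for S_{n,2}, and
    while S_{n,2} transmits, it computes S_{n+1,1} for the next cycle."""
    timeline = []
    for n in range(1, n_cycles + 1):
        timeline.append((f"S_{n},1 transmits", f"OLT computes DBA for S_{n},2"))
        timeline.append((f"S_{n},2 transmits", f"OLT computes DBA for S_{n+1},1"))
    return timeline

g1, g2 = partition(8)
```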
Fig. 1. Operation of the QA-IDBA mechanism.
Fig. 2. Flowchart of the QA-IDBA mechanism.
During S_n+1,2, the first ONU in Group two is allowed to
transmit upstream data as soon as the last ONU of S_n+1,1
finishes transmission. Hence, the OLT alternately receives
finishes transmission. Hence, the OLT alternately receives
Ethernet frames from the ONUs in Group one and Group
two without significant interruptions and adaptively adjusts
the bandwidth between the two groups by the remaining
bandwidth compensation scheme to avoid bandwidth waste.
The flowchart of the QA-IDBA mechanism is illustrated in
Fig. 2.
After receiving all REPORT messages from each ONU, the
total available bandwidth, B_available, can be calculated as

B_available = r × (T_cycle^Max − N × T_g) − N × 512,    (1)

where r represents the transmission speed of the EPON in bits
per second, T_cycle^Max is the maximum cycle time, N is the number
of ONUs, T_g is the guard time, and 512 bits (64 bytes) is the
control message length for the EPON system. Furthermore,
after initializing B_available, the available bandwidths for Group
one, denoted by S_n,1 B_available, and Group two, denoted by
S_n,2 B_available, in the nth cycle are calculated as follows:

S_n,1 B_available = B_available × (1 − Σ_{j=N/2+1}^{N} S_n,2 W_j),    (2)

S_n,2 B_available = B_available × (1 − Σ_{j=1}^{N/2} S_n,1 W_j),    (3)

where W_j is the weight assigned to each ONU_j based on the
service level agreement (SLA).
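A minimal sketch of Eqs. (1)–(3) follows; the parameter values (1 Gbps line rate, 1 ms maximum cycle, 5 µs guard time, equal SLA weights) are assumptions for illustration, not values fixed by the scheme itself:

```python
# Sketch of Eqs. (1)-(3): total available bandwidth and its split between
# the two interleaved groups according to the SLA weights.
def total_available_bandwidth(r_bps, t_cycle_max_s, n_onus, t_guard_s):
    """Eq. (1): B_available = r * (T_cycle^Max - N * T_g) - N * 512 bits."""
    return r_bps * (t_cycle_max_s - n_onus * t_guard_s) - n_onus * 512

N = 4
weights = {1: 0.25, 2: 0.25, 3: 0.25, 4: 0.25}  # assumed SLA weights W_j

b_available = total_available_bandwidth(1e9, 1e-3, N, 5e-6)

# Eq. (2): Group one receives what Group two's weighted share leaves over.
group_two_share = sum(weights[j] for j in range(N // 2 + 1, N + 1))
s_n1_b_available = b_available * (1 - group_two_share)

# Eq. (3): symmetric for Group two.
group_one_share = sum(weights[j] for j in range(1, N // 2 + 1))
s_n2_b_available = b_available * (1 - group_one_share)
```

With equal weights, the two group shares simply halve B_available; unequal SLA weights would skew the split accordingly.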
The QoS-based IDBA executes the prediction scheme based
on the waiting time, historical and current traffic status
information, and the traffic characteristics to support DiffServ.
Next, the QP-LBA limits the predicted value (P_n,j) to the
minimum guaranteed bandwidth threshold (S_n,i B_j^Min) of
each ONU and allocates bandwidth according to the strict
high-priority scheduling scheme. The excessive bandwidth
collection for lightly loaded ONUs is executed when P_n,j ≤
S_n,i B_j^Min, followed by the EBR for heavily loaded ONUs. We
define an ONU as lightly loaded when its predicted bandwidth
value (P_n,j^T) is less than or equal to the minimum guaranteed
bandwidth threshold (S_n,i B_j^Min); otherwise, we refer to it as
heavily loaded. At the end, the unused bandwidth from the
over-estimated bandwidth can be reserved for the next group of
ONUs. Thus, the entire bandwidth can be adaptively adjusted
to each group by the remaining bandwidth compensation
scheme without bandwidth waste.
The proposed QA-IDBA has two contributions: first, the
idle period problem in the traditional DBA mechanism can
be eliminated by interleaved transmission and dynamically
adjusting the bandwidth between groups; second, the QP-LBA
and EBR can be proficient in supporting the differentiated
services architecture based on various traffic characteristics.
As a result, the QA-IDBA can improve bandwidth utilization
and enhance the QoS for DiffServ in the EPON system.
Table I summarizes the definitions of the parameters. The
following subsections discuss the QP-LBA scheme and the EBR
with the remaining bandwidth compensation scheme.
A. QoS-Based Predictive Limit Bandwidth Allocation
Scheme
The prediction in the QP-LBA scheme takes the classes of
traffic characteristics into account to enhance the prediction
accuracy for each ONU in order to resolve the queue variation
between waiting times and reduce the packet delay. For
instance, when the OLT allocates the granted bandwidth to
ONUs, a prediction credit is added to the requested bandwidth
of each ONU so that packets arriving during the waiting time
and transmission time can be expected to be transmitted
within the current time slot.
Therefore, an accurate traffic prediction scheme can avoid
longer packet delay and network performance degradation. In
this paper, we categorize the traffic data into three different
classes: expedited forwarding (EF)—highest-priority traffic;
assured forwarding (AF)—medium-priority traffic; and best
effort (BE)—low-priority traffic [8,20]. With the purpose of
achieving better performance for time-critical applications,
such as constant bit rate (CBR) for EF traffic and non-bursty
traffic mode, the bandwidth should be assigned to the ONUs
according to the rate of these applications.
Therefore, our proposed prediction scheme assigns the CBR
bandwidth to EF traffic: the previous EF request is multiplied
by one plus the ratio of the waiting time T_waiting,j to the cycle
time T_cycle,j for each ONU_j. In contrast, the prediction
schemes for AF and BE compare the requested bandwidth in
the current cycle with the mean requested bandwidth over the
past ten cycles. The prediction index of bandwidth requirements
for differentiated traffic is expressed in Eq. (4), where R_n,j^T
represents the bandwidth request of each traffic type of ONU_j
in cycle n and H_j^T is the average bandwidth requirement over
the past ten cycles for each traffic type of ONU_j, where
T ∈ {EF, AF, BE}:
∆P_n,j^EF = H_j^EF × (T_waiting,j / T_cycle,j)
∆P_n,j^AF = R_n,j^AF − H_j^AF
∆P_n,j^BE = R_n,j^BE − H_j^BE.    (4)
After the predicted traffic index is calculated for each
ONU, the prediction scheme can be derived, where ∆P_n,j^T
represents the prediction value of each ONU_j in cycle n,
T ∈ {EF, AF, BE}. For the AF and BE traffic types, the demand
tends to increase gradually, and the prediction bandwidth
value is updated to obtain the new bandwidth requirement
when ∆P_n,j^T is greater than zero; otherwise, the granted
bandwidth is sufficient.
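The prediction indices of Eq. (4) and the AF/BE update rule can be sketched as follows; the request and history figures (in bytes) are assumed example values:

```python
# Sketch of the Eq. (4) prediction indices and the AF/BE update rule.
def prediction_index(traffic_type, request, history_avg,
                     t_waiting=None, t_cycle=None):
    """Return delta_P for one traffic class of one ONU_j."""
    if traffic_type == "EF":
        # EF is CBR-like: scale the 10-cycle average by the waiting/cycle ratio.
        return history_avg * (t_waiting / t_cycle)
    # AF/BE: current request minus the 10-cycle historical average.
    return request - history_avg

def predicted_bandwidth(traffic_type, request, delta_p):
    """P = R + delta_P; for AF/BE the increment is added only when demand grows."""
    if traffic_type != "EF" and delta_p <= 0:
        return request  # the granted bandwidth is already sufficient
    return request + delta_p

d_ef = prediction_index("EF", 700, 700, t_waiting=0.5e-3, t_cycle=1e-3)
d_af = prediction_index("AF", 1200, 1000)
```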
During dynamic allocation, the allocated time slot is
adapted to the requested bandwidth in order to prevent the
allocation of excessive bandwidth (which can result in wasted
bandwidth) or bandwidth that is too small, the so-called
light-load penalty (which can increase packet delay). The
limitation of QP-LBA is set as FV_n,i,j = min(P_n,j^T, S_n,i B_j^Min),
where FV_n,i,j is the first index, which is limited to S_n,i B_j^Min
when the ONU state is heavy load; otherwise, FV_n,i,j is
updated by P_n,j^T. P_n,j^T is the predicted bandwidth value,
which is the summation of the requested bandwidth and
the prediction value for ONU_j, P_n,j^T = R_n,j^T + ∆P_n,j^T, where
T ∈ {EF, AF, BE}. Thus, the QP-LBA scheme can prevent an
ONU with a heavy traffic load from monopolizing the upstream
channel and also supports the priority levels of differentiated
services to guarantee QoS. The values of S_n,i B_j^Min and
FV_n,i,j^T, where T ∈ {EF, AF, BE}, in the GATE message for each
traffic class are given as follows:
S_n,i B_j^Min = S_n,i B_available / (number of ONUs in group i in cycle n),    (5)

FV_n,i,j^EF = min(P_n,j^EF, S_n,i B_j^Min)
FV_n,i,j^AF = min(P_n,j^AF, S_n,i B_j^Min − FV_n,i,j^EF)
FV_n,i,j^BE = min(P_n,j^BE, S_n,i B_j^Min − FV_n,i,j^{EF+AF}).    (6)
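Equations (5) and (6) can be sketched as follows for a single ONU_j; the predicted values P (in bytes) and the group size are assumed for illustration:

```python
# Sketch of Eqs. (5)-(6): the minimum guarantee and the strict-priority
# limited grants for one ONU.
def min_guaranteed(group_available, onus_in_group):
    """Eq. (5): equal share of the group's available bandwidth."""
    return group_available / onus_in_group

def limited_grants(p_ef, p_af, p_be, b_min):
    """Eq. (6): EF is limited first; AF and then BE take only what is left
    of the minimum guarantee, in strict priority order."""
    fv_ef = min(p_ef, b_min)
    fv_af = min(p_af, b_min - fv_ef)
    fv_be = min(p_be, b_min - fv_ef - fv_af)
    return fv_ef, fv_af, fv_be

b_min = min_guaranteed(80_000, 8)  # each of 8 ONUs is guaranteed 10,000
grants = limited_grants(p_ef=3000, p_af=5000, p_be=4000, b_min=b_min)
```

Note how the strict ordering caps BE first under contention: EF and AF are served in full here, while BE is clipped to the remainder of the guarantee.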
TABLE I
DEFINITION OF PARAMETERS

Symbol              Definition
B_available         Total available bandwidth in one cycle time
S_n,i               Transmission time for group i in cycle n
P_n,j               Predicted bandwidth value for ONU_j in the nth cycle
S_n,i B_j^Min       Minimum guaranteed bandwidth threshold belonging to ONU_j of group i for cycle n
B_excess            Excess bandwidth, calculated as the sum of the under-exploited bandwidth of lightly loaded ONUs
B_remain            Unused bandwidth after excess bandwidth reallocation to heavily loaded ONUs
T_cycle^Max         Maximum cycle time
S_n,i B_available   Available bandwidth for group i in cycle n
W_j                 Weight assigned to each ONU_j based on the SLA
T_waiting,j         Waiting time for ONU_j
T_cycle,j           Cycle time for ONU_j
R_n,j^T             Bandwidth request of each traffic type of ONU_j in cycle n, where T ∈ {EF, AF, BE}
H_j^T               Average bandwidth requirement over the past ten cycles of ONU_j, where T ∈ {EF, AF, BE}
∆P_n,j^T            Prediction index of each ONU_j in cycle n
FV_n,i,j            First value, which may later be complemented by additional bandwidth from the excess bandwidth reallocation, belonging to ONU_j of group i for cycle n
P_n,j^T             Predicted bandwidth value, P_n,j^T = R_n,j^T + ∆P_n,j^T, where T ∈ {EF, AF, BE}
ER_j^T              Predicted bandwidth value exceeding the bandwidth requirement in each heavily loaded ONU_j, where T ∈ {EF, AF, BE}
AB_excess           Available excessive bandwidth, calculated by subtracting the sum of ER_j^EF from B_excess
S_n,i G_j^T         Granted bandwidth time slot in the GATE message of ONU_j belonging to group i in the nth cycle, where T ∈ {EF, AF, BE}
B. Excessive Bandwidth Reallocation and Remaining Bandwidth Compensation Scheme

After QP-LBA grants all bandwidth time slots to the active
ONU_j, lightly loaded ONUs, defined as having a P_n,j^T less
than S_n,i B_j^Min, may still be present. The sum of the
underutilized bandwidth of the lightly loaded ONUs is called
the excessive bandwidth (B_excess) [9,10], which can be expressed
as Eq. (7):

B_excess = Σ_{j∈L} (S_n,i B_j^Min − P_n,j),    (7)

where S_n,i B_j^Min > P_n,j, L is the set of lightly loaded ONUs, and
j is a lightly loaded ONU in L.
In our proposed EBR scheme, B_excess is redistributed among
the heavily loaded ONUs, each of which obtains additional
bandwidth based on the EBR scheme. If bandwidth is still
left over after B_excess has been allocated to the heavily loaded
ONUs, the remaining available bandwidth, B_remain, can be
retained for the excessive bandwidth collection in the next
cycle. It also needs to be noted that B_remain must be restricted
to half of T_cycle^Max to avoid the accumulation of unused
available bandwidth, which can lead to unfair resource
distribution. Therefore, the DBA algorithm can adapt to the
bursty nature of network traffic and achieve better bandwidth
utilization. The EBR scheme provides considerable improvement
in the average packet delay and network throughput, as
indicated by the reported simulation results. B_remain is
expressed in Eq. (8) as
Fig. 3. Flowchart of the QoS-based EBR scheme.
follows:

B_remain = B_excess − Σ_{j∈H} (P_n,j − S_n,i B_j^Min),    (8)
where S_n,i B_j^Min < P_n,j, H is the set of heavily loaded ONUs,
and j is a heavily loaded ONU in H.

As mentioned previously, the QP-LBA scheme restricts
heavily loaded ONUs to S_n,i B_j^Min, even if the other ONUs
have free bandwidth. Thus, the extra demand of the heavily
loaded ONUs is potentially deferred to the next cycle.
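Equations (7) and (8) can be sketched as follows; the per-ONU predictions (bytes) and the common minimum guarantee are assumed values:

```python
# Sketch of Eqs. (7)-(8): pooling the excess of lightly loaded ONUs and
# computing what remains after the heavy ONUs' extra demand is served.
def excess_and_remaining(predictions, b_min):
    """predictions maps ONU id -> P_{n,j}; b_min is S_{n,i}B^Min_j."""
    light = [p for p in predictions.values() if p <= b_min]
    heavy = [p for p in predictions.values() if p > b_min]
    b_excess = sum(b_min - p for p in light)             # Eq. (7)
    b_remain = b_excess - sum(p - b_min for p in heavy)  # Eq. (8)
    return b_excess, max(b_remain, 0)

preds = {1: 6000, 2: 8000, 3: 13_000, 4: 11_000}
result = excess_and_remaining(preds, b_min=10_000)
```

Here ONUs 1 and 2 contribute 6000 bytes of excess, the heavy ONUs absorb 4000 of it, and 2000 bytes remain for the next group's collection.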
Figure 3 shows the flowchart of the QoS-based EBR scheme,
which can be described as follows. The extra requirement
of the heavily loaded ONUs can be fully satisfied by
the excess bandwidth reallocation scheme when B_excess is
greater than the summation of the exceeding bandwidth
requests among the heavily loaded ONUs; moreover, B_remain
can be retained as excessive bandwidth for the next group,
calculated by using Eq. (8). Afterward, if the excess bandwidth
B_excess is between the summation of ER_j^EF and the summation
of the exceeding bandwidth requests of the heavily loaded
ONUs, the EBR with strict priority distribution and the
proportion of the exceeding requested bandwidth will satisfy
the bandwidth requirements of the exceeding EF, AF, and BE
traffic classes, respectively. Then, after the requirements of
the exceeding EF traffic have been allocated, the available
excessive bandwidth (AB_excess) is calculated as B_excess minus
the summation of ER_j^EF. Moreover, the minimum guaranteed
bandwidth threshold of excessive bandwidth, B_excess^Min, can be
calculated by dividing AB_excess among the ONUs with
exceeding AF and BE traffic requirements. The additional
bandwidth is given by min(ER_j^T, B_excess^Min), where T ∈
{EF, AF, BE}. First, the bandwidth requirement of EF traffic
is satisfied up to the predicted bandwidth value (P_n,j^EF). Next,
if the exceeding AF traffic requirement of ONU_j, ER_j^AF,
is less than B_excess^Min, then the additional bandwidth from the
excess bandwidth is equal to the exceeding AF traffic
requirement; otherwise, the additional bandwidth for ONU_j is
equal to B_excess^Min. The additional bandwidth of S_n,i G_j^BE is
likewise given by min(ER_j^BE, B_excess^Min − ER_j^AF). Finally, if
B_excess is not between the summation of ER_j^EF and the
summation of the exceeding bandwidth requests of the heavily
loaded ONUs, the EBR redistributes the granted bandwidth
time slot in the GATE message for EF traffic in ONU_j,
S_n,i G_j^EF, which obtains additional bandwidth by multiplying
the excess bandwidth by the exceeding EF traffic request ratio
(ER_j^EF / Σ_{j∈H} ER_j^EF).
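As a toy sketch of the three EBR branches just described: the exceeding requests ER (bytes per class) and the equal per-ONU split used to form B_excess^Min are assumptions made for this illustration, not details fixed by the paper:

```python
# Sketch of the three-branch QoS-based EBR logic.
def qos_ebr(b_excess, er):
    """er maps ONU id -> {'EF': ..., 'AF': ..., 'BE': ...} exceeding requests;
    returns the additional grant per ONU and class."""
    total_exceed = sum(sum(c.values()) for c in er.values())
    sum_er_ef = sum(c["EF"] for c in er.values())
    grants = {j: {"EF": 0.0, "AF": 0.0, "BE": 0.0} for j in er}
    if b_excess >= total_exceed:
        # Branch 1: every exceeding request fits; the rest becomes B_remain.
        for j, c in er.items():
            grants[j] = dict(c)
    elif b_excess >= sum_er_ef:
        # Branch 2: satisfy exceeding EF fully, then share AB_excess over AF/BE
        # in strict priority order (AF before BE).
        b_min_excess = (b_excess - sum_er_ef) / len(er)
        for j, c in er.items():
            grants[j]["EF"] = c["EF"]
            grants[j]["AF"] = min(c["AF"], b_min_excess)
            grants[j]["BE"] = min(c["BE"], b_min_excess - grants[j]["AF"])
    else:
        # Branch 3: even EF does not fit; split in proportion to EF demand.
        for j, c in er.items():
            grants[j]["EF"] = b_excess * c["EF"] / sum_er_ef
    return grants

er = {1: {"EF": 500, "AF": 400, "BE": 300},
      2: {"EF": 1500, "AF": 200, "BE": 100}}
```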
III. PERFORMANCE EVALUATION
In this section, the system performance of the proposed
QA-IDBA mechanism incorporated with the interleaved
transmission operating scheme, QP-LBA, and the EBR scheme
is evaluated and compared in terms of end-to-end delay,
throughput, jitter performance, ratio of packet loss, and
fairness with 1) IDBA_Fixed, a mechanism combining the
interleaved transmission operating scheme and LBA with
a strict high-priority scheduling scheme; 2) IDBA_EBR, a
mechanism combining the interleaved transmission operating
scheme, LBA, and the EBR scheme; 3) PFEBR [9], an early
DBA mechanism with a linear prediction scheme that uses
unstable ordering scheduling; and 4) the hybrid double-phase
polling algorithm (DPA) [7], a polling algorithm allowing the
OLT to poll in parallel two different ONU subgroups. The
system model is set in the OPNET simulator with one OLT and
either 16 or 32 ONUs. The downstream and upstream channels
are both 1 Gbps. The distances from the ONUs to the OLT
are assumed to be between 10 and 20 km, and each ONU has a
10 MB buffer. Previous research suggested that the maximum
TABLE II
SIMULATION SCENARIO

Number of ONUs in the system         16, 32
Upstream/downstream link capacity    1 Gbps
OLT–ONU distance (uniform)           10–20 km
Buffer size                          10 MB
Maximum transmission cycle time      1 ms
Guard time                           5 µs
Computation time of DBA              10 µs
Control message length               64 bytes
transmission cycle time is 1 ms [21–23] to meet ITU-T
Recommendation G.114 [24]. Moreover, the service policy
follows the first-in first-out (FIFO) principle. For traffic
modeling, extensive studies have shown that most network
traffic can be characterized by self-similarity and long-range
dependence (LRD) [25]. This model is used to generate the
highly bursty BE and AF traffic classes with a Hurst
parameter of 0.7 [4], with packet sizes uniformly distributed
between 64 and 1518 bytes. In the traffic generation, the
high-priority traffic (e.g., voice applications) is generated
as a Poisson distribution with the mean packet size being
fixed at 70 bytes [8]. In order to simulate the effect of
high-priority traffic, the proportion of the traffic profile is
analyzed by simulating the three significant scenarios for a
traffic class triplet (EF, AF, and BE) with (20%, 40%, 40%),
(40%, 30%, 30%), and (60%, 20%, 20%), respectively [10,26].
The simulation scenario is summarized in Table II.
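The traffic assumptions above can be sketched as follows; this is not the OPNET model itself, and the helper names and the scenario 2 shares (40%, 30%, 30%) shown are illustrative:

```python
# Sketch of the simulated traffic assumptions: EF is Poisson-like voice with
# fixed 70-byte packets, AF/BE use uniform Ethernet frame sizes, and the
# offered load is split by the (EF, AF, BE) scenario triplet.
import random

random.seed(1)  # reproducible illustration

def ef_packet_size():
    return 70  # fixed-size voice packet (bytes)

def af_be_packet_size():
    return random.randint(64, 1518)  # uniform over Ethernet frame sizes

def split_offered_load(total_bytes, ef_share=0.40, af_share=0.30):
    """Split an offered load into per-class byte budgets (scenario 2)."""
    ef = total_bytes * ef_share
    af = total_bytes * af_share
    return {"EF": ef, "AF": af, "BE": total_bytes - ef - af}

load = split_offered_load(1_000_000)
```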
A. End-to-End Packet Delay
Figure 4 compares the mean and EF end-to-end packet delays
of PFEBR, IDBA_Fixed, IDBA_EBR, hybrid DPA, and QA-IDBA
with different proportions of the traffic profile versus the
traffic load. In Figs. 4(a) and 4(b), the
simulation results show that both the PFEBR and the hybrid
DPA have the highest mean end-to-end packet delay with the
traffic load exceeding 70% and 60% in scenario 1 for 16 and
32 ONUs, respectively. Furthermore, with an increase in the
variable bit rate (VBR) traffic amount (e.g., 40%, 60%, and
80%), the mean end-to-end delays also increase because both
the aforementioned mechanisms will improve high-priority
traffic end-to-end delay performance but sacrifice low-priority
traffic delay performance. However, the scalable QA-IDBA can
achieve much lower delay at higher traffic loads regardless of
the scenarios due to the EBR’s ability to efficiently reallocate
excessive bandwidth between differential traffic types in the
EPON system. Figures 4(c) and 4(d) show that the PFEBR
mechanism can reduce the overall EF traffic delay by using
the linear prediction scheme unless the traffic load exceeds
90%. The reason is that its unstable ordering scheduling is not
efficient in the heavy queue state. However, the QoS-based prediction
scheme in the QA-IDBA mechanism not only effectively
reduces the EF end-to-end packet delay but also fully
satisfies the EF traffic in any scenario. Moreover, the QA-IDBA
mechanism meets ITU-T Recommendation G.114 that specifies
the delay for voice traffic in the access network at 1.5 ms [24].
Additionally, the QA-IDBA has better performance than other
mechanisms regardless of both the mean and EF end-to-end
packet delay because the QA-IDBA has a comprehensive
mechanism for fulfilling the traffic requirements.

Fig. 4. End-to-end delay versus various traffic loads, compared with
the PFEBR scheme and DPA scheme for 16 and 32 ONUs: (a) mean
end-to-end delay for 16 ONUs; (b) mean end-to-end delay for 32 ONUs;
(c) EF end-to-end delay for 16 ONUs; (d) EF end-to-end delay for
32 ONUs.
B. System Throughput
Figure 5 shows the mean system throughput performance of
PFEBR, IDBA_Fixed, IDBA_EBR, hybrid DPA, and QA-IDBA
for EF, AF, and BE traffic with different proportions of
the traffic profile versus the traffic loads. In Figs. 5(a) and
5(b), simulation results show that the proposed QA-IDBA
mechanism outperforms the PFEBR and the hybrid DPA
in terms of the mean system throughput because the
interleaved transmissions can eliminate the idle time problem
in traditional DBA and support efficient EBR as well as
the remaining bandwidth compensation scheme. The mean
system throughputs of all mechanisms are similar until the
traffic load exceeds 60%, after which the PFEBR and the
hybrid DPA gradually reach saturation. In this case, the IDBA_EBR has
the best mean throughput performance and begins to become
saturated when traffic load exceeds 80% for 16 and 32 ONUs
due to the effective excess bandwidth reallocation and the
remaining bandwidth compensation scheme. The IDBA_Fixed
and QA-IDBA have the identical trend in the mean throughput
performance, while IDBA_Fixed and QA-IDBA have a better
mean throughput performance in scenario 1 and lower mean
throughput performance in scenario 3 than the others. For the
EF throughput, as shown in Figs. 5(c) and 5(d), the proposed
QA-IDBA mechanism outperforms the other mechanisms
because the EF traffic obtains additional bandwidth by
using a QoS-based prediction scheme that can enhance the
high-priority traffic adaptively for different traffic proportions.
Fig. 5. Throughput versus various traffic loads, compared with the
PFEBR scheme and DPA scheme for 16 and 32 ONUs: (a) total
throughput for 16 ONUs; (b) total throughput for 32 ONUs; (c) EF
throughput for 16 ONUs; (d) EF throughput for 32 ONUs; (e) AF
throughput for 16 ONUs; (f) AF throughput for 32 ONUs; (g) BE
throughput for 16 ONUs; (h) BE throughput for 32 ONUs.
Furthermore, for the AF and BE throughputs, as shown in
Figs. 5(e) to 5(h), the QA-IDBA has the best throughput
performance for 16 ONUs, and the IDBA_EBR has the best
BE throughput performance for 32 ONUs. The reason is that
the guaranteed bandwidth can satisfy subscriber requirements
without over-estimating the bandwidth demand of the
subscribers. The AF throughput performance
of hybrid DPA begins to decrease with the traffic load
exceeding 70% in scenarios 2 and 3 for 16 and 32 ONUs,
whereas the BE throughput performance of the hybrid DPA
begins to decrease with the traffic load exceeding 60% and
70% for 16 and 32 ONUs in every scenario. One possible
reason is that the QoS-based prediction mechanism of the
hybrid DPA guarantees the requirement of high-priority traffic
and disregards the requirement of low-priority traffic, which
results in lower AF and BE throughput when the proportion of
EF traffic is high.
C. EF Jitter and Packet Loss Ratio
Figures 6(a) and 6(b) compare the jitter performance of
PFEBR, IDBA_Fixed, IDBA_EBR, hybrid DPA, and QA-IDBA
for EF class with different proportions of the traffic profile
versus the traffic loads, respectively. The delay variance σ² is
calculated as σ² = Σ_{i=1}^{N} (d_i^{EF} − d̄)² / N, where d_i^{EF} represents the
delay time of the EF packet i and N is the total number of
received EF packets. Simulation results show that the delay
variance for EF traffic increases as the traffic load increases.
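As a sketch, the delay-variance (jitter) metric above can be computed as follows; the delay samples here are hypothetical, whereas in the simulation each d_i is the measured delay of the i-th received EF packet:

```python
# Sketch of the EF delay-variance (jitter) computation of Figs. 6(a)-(b):
# sigma^2 = sum_{i=1}^{N} (d_i - d_mean)^2 / N over N received EF packets.

def ef_delay_variance(delays):
    """Population variance of the EF packet delays (ms^2 if delays in ms)."""
    n = len(delays)
    d_mean = sum(delays) / n
    return sum((d - d_mean) ** 2 for d in delays) / n

delays_ms = [1.2, 0.9, 1.5, 1.1, 1.3]  # hypothetical EF packet delays (ms)
print(ef_delay_variance(delays_ms))    # delay variance in ms^2
```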
We can see that the EF jitter of PFEBR is improved by
the proposed interleaved transmission operating scheme regardless of
the number of ONUs, especially for IDBA_Fixed. The reason
is that the transmission order of each ONU is sequential
with a guarantee of EF jitter, but the PFEBR changes the
transmission order of ONUs, which leads to higher EF delay
jitter particularly in 16 ONUs. However, the QA-IDBA has a
higher EF jitter with the traffic load exceeding 60%, especially
in scenario 1. This is due to the prediction mechanism
allocating additional prediction bandwidth according to the
requirements of subscribers and therefore yields a larger ratio
of VBR traffic in scenario 1. On the other hand, the QA-IDBA
still outperforms in terms of the packet loss ratio in spite
of various traffic proportions. Figures 6(c) and 6(d) compare
the packet loss ratio with different proportions of the traffic
profile versus the traffic loads, respectively. Simulation results
show that the hybrid DPA begins to have packet loss with the
traffic load exceeding 80% in scenario 1 for 16 ONUs and the
traffic load exceeding 70% in every scenario for 32 ONUs due
to over-allocation of the requested bandwidth to ONUs [27]
by the EBR mechanism in the hybrid DPA. This is termed the
redundant bandwidth problem [9], which decreases overall system
throughput. The IDBA_Fixed begins to have packet loss with
the traffic load exceeding 80% in every scenario for 32 ONUs
because of lack of an effective excess bandwidth reallocation
scheme. Furthermore, the IDBA incorporated with EBR and
the remaining bandwidth compensation scheme can improve
bandwidth utilization, which prevents packet loss from
accumulating at high traffic loads in each scenario.
D. Fairness
Figure 7 shows the comparison of fairness against different
traffic loads among PFEBR, IDBA_Fixed, IDBA_EBR, hybrid
DPA, and QA-IDBA, respectively. Recently, fairness and QoS
on DBA schemes have become important issues. The fairness
index f (0 ≤ f ≤ 1) has been addressed [28], which is defined as
Eq. (9):

f = (Σ_{i=1}^{N} G[i])² / (N · Σ_{i=1}^{N} G[i]²),  (9)

where N is the total number of ONUs and G[i] is the granted
bandwidth of ONU_i. Jain's fairness index f, ranging from 0 to 1,
becomes 1 when all ONUs have the same amount of bandwidth
allocated by the OLT.
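A minimal sketch of Eq. (9) follows; the granted-bandwidth values are hypothetical illustrations, not simulation data:

```python
# Jain's fairness index, Eq. (9): f = (sum G[i])^2 / (N * sum G[i]^2).

def jain_fairness(grants):
    """Return f in [0, 1]; f = 1 when all ONUs receive equal grants."""
    n = len(grants)
    total = sum(grants)
    return total * total / (n * sum(g * g for g in grants))

equal = [1000] * 16           # every ONU granted the same bandwidth
skewed = [4000] + [500] * 15  # one ONU dominates the grants
print(jain_fairness(equal))   # -> 1.0
print(jain_fairness(skewed))  # < 1, reflecting unfair allocation
```

The index penalizes dispersion in the grants: the more one ONU's grant dominates, the closer f falls toward 1/N.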
Simulation results show that Jain’s fairness index f of the
IDBA is better than the hybrid DPA, especially in QA-IDBA
[Fig. 6: legend covers PFEBR, IDBA_Fixed, IDBA_EBR, Hybrid DPA, and QA-IDBA, each under traffic proportions 244, 433, and 622; panels (a) EF jitter for 16 ONUs, (b) EF jitter for 32 ONUs, (c) packet loss ratio for 16 ONUs, (d) packet loss ratio for 32 ONUs; vertical axes show delay variance (ms²) and packet loss ratio (%), horizontal axes show traffic load (10%–90%).]
Fig. 6. EF jitter and average packet loss ratio versus various traffic
load comparisons with the PFEBR scheme and DPA scheme for 16 and
32 ONUs.
[Fig. 7: legend covers PFEBR, IDBA_Fixed, IDBA_EBR, Hybrid DPA, and QA-IDBA, each under traffic proportions 244, 433, and 622; panels (a) fairness for 16 ONUs and (b) fairness for 32 ONUs; vertical axes show Jain's fairness index, horizontal axes show traffic load (10%–90%).]
Fig. 7. Fairness using Jain’s index versus various traffic load
comparisons with the PFEBR scheme and DPA scheme for 16 and 32
ONUs.
with the traffic load exceeding 70%. Moreover, the QA-IDBA
has the best fairness performance, where the average Jain’s
fairness index f is about 0.93 and 0.9 for 16 and 32 ONUs,
respectively. The reason is that the proposed QA-IDBA
not only utilizes the idle period and the remaining bandwidth by
performing the DBA computation for fair bandwidth allocation,
but also achieves an impartial EBR mechanism based on
the guaranteed bandwidth rather than the requested bandwidth,
which increases bandwidth utilization.
The fairness of the hybrid DPA begins to vary gradually as
the traffic load increases from 40% to 70% for 32 ONUs.
There are two reasons for this situation: 1) the hybrid DPA
changes the transmission mechanism between online polling
and double-phase polling, and 2) EBR based on the requested
bandwidth in the hybrid DPA will cause the redundant
bandwidth problem.
IV. CONCLUSION
In this paper, important factors that can improve the
performance of EPON have been discussed and evaluated. Our
proposed mechanism executes an interleaved transmission
process to automatically adjust the cycle time to resolve the
idle period problem of the traditional DBA scheme, thus
enhancing system performance in terms of the end-to-end
packet delay and system throughput in both 16 and 32 ONU
environments. Moreover, our proposed QP-LBA, EBR, and the
remaining bandwidth compensation scheme incorporated with
the QA-IDBA not only take into account the prediction of
differentiated traffic characteristics but also allocate bandwidth
for differentiated traffic adaptively and improve bandwidth
utilization. The simulation results show that the throughput
and EF jitter can be improved by an interleaved transmis-
sion operating scheme. Moreover, the end-to-end delay of
low-priority traffic, the packet loss rate, and fairness can be
improved by excess bandwidth reallocation and the remaining
bandwidth compensation scheme. Furthermore, the QoS-based
predictive limit bandwidth allocation (QP-LBA) scheme can support
DiffServ to guarantee the service level and improve overall
system performance. Finally, the QA-IDBA mechanism can
achieve better fairness performance. For example, averaged
over the traffic loads in each traffic proportion (scenarios 1, 2, and 3),
the fairness index of the hybrid DPA is about 0.81, whereas the QA-IDBA
achieves 0.92, an improvement of about 14% in fairness
performance.
ACKNOWLEDGMENT
We wish to acknowledge the anonymous referees who gave
precious suggestions to improve the work. This work was
supported in part by the National Science Council of the
Republic of China under grants NSC 100-2221-E-155-016, NSC
100-2221-E-155-029, and NSC 100-2623-S-155-001.
REFERENCES
[1] P. E. Green, “Fiber to the home: The next big broadband thing,”
IEEE Commun. Mag., vol. 42, no. 9, pp. 100–106, Sept. 2004.
[2] G. Kramer, B. Mukherjee, and G. Pessavento, “Ethernet PON
(ePON): Design and analysis of an optical access network,”
Photonic Network Commun., vol. 3, no. 3, pp. 307–319, July
2001.
[3] IEEE Draft P802.3ah/D1.0TM, Media Access Control Param-
eters, Physical Layers and Management Parameters for Sub-
scriber Access Networks, Aug. 2002.
[4] Y. Luo and N. Ansari, “Bandwidth allocation for multiservice
access on EPON,” IEEE Commun. Mag., vol. 43, no. 2, pp.
S16–S21, Feb. 2005.
[5] M. McGarry, M. Maier, and M. Reisslein, “Ethernet PONs:
A survey of dynamic bandwidth allocation (DBA) algorithms,”
IEEE Commun. Mag., vol. 42, no. 8, pp. S8–S15, Aug. 2004.
[6] J. Hwang and M. Yoo, “QoS-aware class gated DBA algorithm
for the EPON system,” in Int. Conf. Advanced Technologies for
Communications, Oct. 2008, pp. 363–366.
[7] S. Y. Choi, S. Lee, T. J. Lee, M. Y. Chung, and H. Choo,
“Double-phase polling algorithm based on partitioned ONU sub-
groups for high utilization in EPONs,” J. Opt. Commun. Netw.,
vol. 1, no. 5, pp. 484–497, Oct. 2009.
[8] G. Kramer, B. Mukherjee, and G. Pesavento, “Interleaved polling
with adaptive cycle time (IPACT): A dynamic bandwidth distri-
bution scheme in an optical access network,” Photonic Network
Commun., vol. 4, no. 1, pp. 89–107, Jan. 2002.
[9] I. S. Hwang, Z. D. Shyu, L. Y. Ke, and C. C. Chang, “A novel early
DBA mechanism with prediction-based fair excessive bandwidth
allocation scheme in EPON,” Comput. Commun., vol. 31, no. 9,
pp. 1814–1823, June 2008.
[10] C. M. Assi, Y. Ye, S. Dixit, and M. A. Ali, “Dynamic bandwidth
allocation for quality-of-service over Ethernet PONs,” IEEE J.
Sel. Areas Commun., vol. 21, no. 9, pp. 1467–1477, Nov. 2003.
[11] C. C. Sue and H. W. Cheng, “A fitting report position scheme
for the gated IPACT dynamic bandwidth algorithm in EPONs,”
IEEE/ACM Trans. Netw., vol. 18, no. 2, pp. 624–637, Apr. 2010.
[12] J. Zheng, “Efficient bandwidth allocation algorithm for Ethernet
passive optical networks,” IEEE Proc. Commun., vol. 153, no. 3,
pp. 464–468, June 2006.
[13] G. Kramer, B. Mukherjee, S. Dixit, Y. Ye, and R. Hirth, “Support-
ing differentiated classes of service in Ethernet passive optical
networks,” J. Opt. Netw., vol. 1, no. 8, pp. 280–298, Aug. 2002.
[14] J. Chen, B. Chen, and L. Wosinska, “Joint bandwidth scheduling
to support differentiated services and multiple service providers
in 1G and 10G EPONs,” J. Opt. Commun. Netw., vol. 1, no. 4, pp.
343–351, Sept. 2009.
[15] C. A. Chan, M. Attygalle, and A. Nirmalathas, “Local-traffic-
redirection-based dynamic bandwidth assignment scheme for
EPON with active forwarding remote repeater node,” J. Opt.
Commun. Netw., vol. 3, no. 3, pp. 245–253, Mar. 2011.
[16] W. P. Chen, W. F. Wang, and W. S. Hwang, “Adaptive dynamic
bandwidth allocation algorithm with sorting report messages for
Ethernet passive optical network,” IET Commun., vol. 4, no. 18,
pp. 2230–2239, Dec. 2010.
[17] T. Berisa, Z. Ilic, and A. Bazant, “Absolute delay variation
guarantees in passive optical networks,” J. Lightwave Technol.,
vol. 29, no. 9, pp. 1383–1393, May 2011.
[18] Y. Okumura, “Traffic control algorithm offering multi-class fair-
ness in PON based access networks,” IEICE Trans. Commun.,
vol. 93, no. 3, pp. 712–715, 2010.
[19] I. S. Hwang, J. Y. Lee, and Z. D. Shyu, “A scalable interleaved
DBA mechanism within polling cycle for the Ethernet passive
optical networks,” in IAENG Int. Conf. Computer Science, Mar.
2010, pp. 238–243.
[20] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss,
“An architecture for differentiated services,” IETF RFC 2475,
1998.
[21] G. Kramer, Ethernet Passive Optical Networks. McGraw-Hill
Professional, 2005.
[22] H. Naser and H. T. Mouftah, “A joint-ONU interval-based dy-
namic scheduling algorithm for Ethernet passive optical net-
works,” IEEE/ACM Trans. Netw., vol. 14, no. 4, pp. 889–899,
Aug. 2006.
[23] M. Ma, Y. Zhu, and T. H. Cheng, “A bandwidth guaranteed polling
MAC protocol for Ethernet passive optical networks,” in Proc.
IEEE INFOCOM, San Francisco, CA, Apr. 2003, pp. 22–31.
[24] ITU-T Recommendation G.114, “One-Way Transmission Time, in
Series G: Transmission Systems and Media, Digital Systems and
Networks,” May 2000.
[25] W. Willinger, M. S. Taqqu, and A. Erramilli, "A bibliographical
guide to self-similar traffic and performance modeling for modern
high-speed networks," in Stochastic Networks: Theory and
Applications, vol. 4. Oxford Univ. Press, 1996.
[26] X. Bai and A. Shami, “Modeling self-similar traffic for network
simulation,” Tech. Rep. NetRep-2005-01, 2005.
[27] B. Chen, J. Chen, and S. He, “Efficient and fine scheduling
algorithm for bandwidth allocation in Ethernet passive optical
networks,” IEEE J. Sel. Top. Quantum Electron., vol. 12, no. 4,
pp. 653–660, July–Aug. 2006.
[28] R. Jain, A. Durresi, and G. Babic, “Throughput fairness index:
An explanation,” ATM Forum/99-0045, Feb. 1999.
I-Shyan Hwang received the B.S. and M.S. in electronic engineering
from Chung-Yuan Christian University, Chung-Li, Taiwan, in 1982
and 1984, respectively, and the M.S. and Ph.D. in electrical and
computer engineering from the State University of New York at
Buffalo, NY, in 1991 and 1994, respectively. Since Feb. 2007, he has
been promoted as a Full Professor in the Department of Computer
Science and Engineering at the Yuan-Ze University, Chung-Li, Taiwan,
and he is with the Department of Information Communication.
His current research interests are high-speed fiber communication,
mobile computing and heterogeneous multimedia wireless networks,
integration assessment of PON with broadband WiMAX, algorithm
design and testing, and load balancing.
Jhong-Yue Lee received the B.S. from the Department of Medical
Informatics at the Tzu-Chi University, Hualien, Taiwan, in 2006; the
M.S. in computer science and engineering from Yuan-Ze University,
Taiwan, in 2009; and is currently pursuing his Ph.D. at Yuan-Ze
University, Chung-Li, Taiwan. His recent work focuses on the
integration assessment of PON with broadband WiMAX.
K. Robert Lai received the B.S. degree from the National Taiwan
University of Science and Technology in 1980; the M.S. degree from
Ohio State University, Columbus, OH, in 1982; and the Ph.D. degree
in computer science from North Carolina State University, Raleigh,
NC, in 1992. From 1983 to 1989, he was a Senior Engineer with the
GE Aerospace Division, Maryland. In 1994, he joined the Department
of Computer Science and Engineering, Yuan-Ze University, Taiwan,
where he is now a Professor. His current research interests are in
computational intelligence, agent technologies, and mobile computing.
Andrew Tanny Liem received the B.S. degree from the Department
of Computer Science at the Adventist University of Indonesia,
Bandung, Indonesia, in 2003 and the M.S. in computer science and
engineering from the Institute of Technology, Bandung, Indonesia, in
2006. He is currently pursuing the Ph.D. degree at Yuan-Ze University,
Chung-Li, Taiwan. His recent work focuses on PON monitoring and the
long-reach PON.