36 IEEE TRANSACTIONS ON BROADCASTING, VOL. 54, NO. 1, MARCH 2008
Continuous-Time Collaborative Prefetching of
Continuous Media
Soohyun Oh, Beshan Kulapala, Andréa W. Richa, and Martin Reisslein
Abstract—The real-time streaming of bursty continuous media,
such as variable-bit rate encoded video, to buffered clients over
networks can be made more efficient by collaboratively prefetching
parts of the ongoing streams into the client buffers. The existing
collaborative prefetching schemes have been developed for dis-
crete time models, where scheduling decisions for all ongoing
streams are typically made for one frame period at a time. This
leads to inefficiencies as the network bandwidth is not utilized for
some duration at the end of the frame period when no video frame
“fits” into the remaining transmission capacity in the schedule. To
overcome this inefficiency, we conduct in this paper an extensive
study of collaborative prefetching in a continuous-time model. In
the continuous-time model, video frames are transmitted contin-
uously across frame periods, while making sure that frames are
only transmitted if they meet their discrete playout deadlines. We
specify a generic framework for continuous-time collaborative
prefetching and a wide array of priority functions to be used for
making scheduling decisions within the framework. We conduct
an algorithm-theoretic study of the resulting continuous-time
prefetching algorithms and evaluate their fairness and starvation
probability performance through simulations. We find that the
continuous-time prefetching algorithms give favorable fairness
and starvation probability performance.
Index Terms—Client buffer, continuous media, continuous-time,
fairness, playback starvation, prefetching, prerecorded media,
traffic smoothing, video streaming.
I. INTRODUCTION
THE REAL-TIME streaming of continuous media over net-
works, such as the future Internet and next generation wire-
less systems, is a challenging problem mainly due to (i) the pe-
riodic playout deadlines, and (ii) the traffic variability. NTSC
video, for instance, has a periodic playout deadline (frame pe-
riod) every 33 msec while PAL video has a 40 msec frame pe-
riod, whereby a new video frame has to be delivered every frame
period to ensure continuous playback. A frame that is not deliv-
ered in time is essentially useless for the media playback and
results in interruptions of the playback. The continuous media
are typically compressed (encoded) to reduce their bit rates for
network transport. The efficient encoders, especially for video,
produce typically highly variable traffic (frame sizes), with ra-
tios of the largest frame size to the average frame size for a given
video stream in the range between 8 and 15 [1]. As a result, al-
locating network resources based on the average bit rates would
result in frequent playout deadline misses since the larger frames
Manuscript received October 10, 2006; revised September 22, 2007. This
work is supported in part by the National Science Foundation under Grant No.
Career ANI-0133252.
The authors are with Arizona State University, AZ 85287-5706 USA (e-mail:
soohyun@asu.edu; beshan@asu.edu; aricha@asu.edu; reisslein@asu.edu)
Digital Object Identifier 10.1109/TBC.2007.910921
Fig. 1. J prerecorded video streams are multiplexed over a bottleneck link of capacity R bit/s and prefetched into client buffers of capacity B(j) bits, j = 1, ..., J.
could not be delivered in time, while allocating resources based
on the largest frame size would result in low average network
utilization.
To overcome these challenges, prefetching (work-ahead)
schemes have been developed that exploit the facts that (i)
a large portion of the media are prerecorded, and (ii) that
many of the media playback (client) devices have storage
space, by prefetching parts of an ongoing media stream. The
prefetching builds up prefetched reserves in the client buffers,
and these reserves help in ensuring uninterrupted playback.
The prefetching (smoothing) schemes studied in the literature
fall into two main categories: non-collaborative prefetching
schemes and collaborative prefetching schemes. Non-collabo-
rative prefetching schemes, see for instance [2]–[19], smooth
an individual stream by pre-computing (off-line) a transmis-
sion schedule that achieves a certain optimality criterion (e.g.,
minimize peak rate or rate variability subject to client buffer
capacity). The streams are then transmitted according to the in-
dividually pre-computed transmission schedules. Collaborative
prefetching schemes [20]–[29], on the other hand, determine
the transmission schedule of a stream on-line as a function
of all the other ongoing streams. For a single bottleneck link,
this on-line collaboration has been demonstrated to be more
efficient, i.e., achieves smaller playback starvation probabilities
for a given streaming load, than the statistical multiplexing of
streams that are optimally smoothed using a non-collaborative
prefetching scheme [28]. We also note that there are transmis-
sion schemes which collaborate only at the commencement of
a video stream, e.g., the schemes that align the streams such
that the large intracoded frames of the MPEG encoded videos
do not collide [30].
A common characteristic of the existing collaborative
prefetching schemes is that they are designed based on a
discrete-time model, that is, the scheduling decisions are com-
puted at discrete time instants. In particular, the majority of the
existing collaborative prefetching schemes calculate the transmission schedule on a per-frame-period basis [20], [24]–[29]. These schemes essentially consider each frame period as a new scheduling problem and attempt to fit as many video frames as possible in the transmission capacity (link bit rate in bit/sec × frame period in sec) available in the currently considered frame period. Frames are generally not scheduled across frame periods. This leads to inefficiencies as some amount of available transmission capacity at the end of a frame period goes unused because no frame is small enough to fit into the remaining capacity. Similar inefficiencies arise when frames are first smoothed over an MPEG Group-of-Pictures (GoP) and then scheduled using a JSQ-like strategy executed on a per-frame-period basis [22], or when the scheduling decisions for the transmission of individual frames are computed at discrete slot times [21], [23].
Another common characteristic of the existing collaborative
prefetching schemes is that they were primarily designed and
studied for minimizing the number of lost frames, i.e., the frame
loss probability. Since video frames requiring many bits for en-
coding have generally a larger impact on the delivered video
quality than video frames requiring only few encoding bits, the
number of lost bits, i.e., the information (bit) loss probability, is
an important metric when streaming compressed video.
In this paper we conduct an extensive study on collaborative
prefetching in a continuous-time model considering both the
frame loss probability and the information loss probability. The
continuous-time model overcomes the outlined inefficiency of the discrete-time model by allowing video frames to be transmitted continuously across frame periods. The discrete playout deadlines of the video frames still need to be considered, even in the continuous-time model, and our algorithms ensure that frames that would not meet their deadline are
not transmitted. We specify a generic framework for contin-
uous-time prefetching and an array of priority functions to be
used for making scheduling decisions within the framework.
The priority functions are based on numbers of transmitted
video frames and numbers of transmitted (or lost) video infor-
mation bits. We conduct an algorithm-theoretic study of the
resulting continuous-time prefetching algorithms and evaluate
their fairness as well as frame and information starvation
probability performance through simulations. We find that the continuous-time prefetching algorithms give favorable fairness performance and significantly reduce the starvation probability
compared to discrete-time prefetching schemes.
This paper is organized as follows. In the following section
we describe the problem set-up and introduce the notations
used in the continuous-time modeling of the collaborative
prefetching. In Section III, we develop the generic framework
for continuous-time collaborative prefetching and introduce
a wide array of priority functions to be used within the
framework. In Section IV, we conduct an algorithm-theo-
retic complexity analysis of the prefetching framework. In
Section V, we conduct an algorithm-theoretic analysis of the
prefetching algorithm using the frame-based priority function
in the prefetching framework, while in Section VI we ana-
lyze the bit-based prefetching algorithms. In Section VII, we
present simulation results illustrating the fairness and starvation
probability performance of the continuous-time prefetching
algorithms and compare with the discrete time algorithms. In
Section VIII, we summarize our findings.
II. CONTINUOUS-TIME MODEL AND NOTATIONS
In our system model, which is illustrated in Fig. 1, a number
of prerecorded continuous media streams are stored in mass
storage in the server. We assume that the server is in one of the
following states: busy-with-transmission, busy-with-scheduling,
or idle. When the server is in the busy-with-transmission state, a
frame is being transmitted to the corresponding client. When the
transmission of the frame is complete, i.e., when the server has
sent out the last bit of the frame, the server becomes idle. If the
server has more frames to be delivered, then the server enters the
busy-with-scheduling state. The busy-with-scheduling state can
be masked by overlapping it with the preceding busy-with-transmission state, i.e., during the transmission of a frame to a client,
the server computes the next frame to be transmitted. We assume
in this paper that the time that it takes to decide on the sched-
uling of the next frame is less than or equal to the transmission
time of the smallest video frame, which is reasonable given our
time complexity results for the prefetching algorithms. This al-
lows for masking of the schedule computing time for all frames.
In our continuous-time model, frame deadlines are still chosen from a discrete, evenly spaced set of time slots of length Δ, the common basic frame period of the videos in seconds. However, scheduling and transmission of frames proceed in a continuous-time fashion. A frame will be scheduled while a previously scheduled frame is being transmitted, and right after the current transmission ends (i.e., its last bit is sent out), the newly scheduled frame will start being transmitted. Once a video frame arrives at a client, it is placed in the client's prefetching buffer. For our model we assume that the time is set to be 0 (i.e., t = 0) when the server initiates scheduling and transmitting frames for clients. We also assume that at time t = 0, the first frame of each stream has deadline Δ. In other words, the first frame of a stream should arrive at the corresponding client before or at time Δ to be decoded and displayed during the first frame period [Δ, 2Δ). If at time Δ, a complete video frame with deadline Δ is not in the prefetch buffer, the client suffers playback starvation and loses the frame.
For simplicity of notation, which we summarize in Table I, we assume that time is normalized by the frame period Δ. The frame with playback deadline i is removed from the buffer and decoded at normalized time i, and displayed during time [i, i + 1). Each client displays the first frame (frame 1) of its video stream during the time period [1, 2), then removes the second frame from its prefetch buffer at time t = 2 and displays it during the time period [2, 3), and so on. Formally, we let d_j(i) denote the deadline of frame i of stream j and note that, with our assumptions, d_j(i) = i.
If a video frame with deadline i cannot be scheduled before or at time i − x_j(i)/R (with the link capacity R expressed in bits per frame period), where x_j(i) denotes the size of the frame with deadline i (i.e., the i-th frame) of stream j, then it is dropped at the server and the client will suffer a frame loss during the time period [i, i + 1). We let l_j(t), j = 1, ..., J, denote the lowest indexed frame for stream j that is still on the server and has not been dropped at time t. In other words, l_j(t) is the frame with the earliest playout deadline that can still be transmitted to meet
TABLE I
DEFINITIONS OF NOTATIONS
its deadline. Let m(t) denote the earliest deadline among the frames of the ongoing streams on the server at time t, i.e., m(t) = min_{j=1,...,J} d_j(l_j(t)).
We assume that initially, at time t = 0, all streams have an infinite number of frames to be transmitted to the corresponding clients and their prefetch buffers are empty. We let N_j(t), j = 1, ..., J, denote the number of video frames that have been transmitted to client j up to time t (and note the initial condition N_j(0) = 0 for j = 1, ..., J). Let B_j(t), j = 1, ..., J, denote the number of bits that have been transmitted to client j up to time t.
We let F_j(t), j = 1, ..., J, denote the number of video frames of stream j that have missed their playout deadline up to time t. The counter F_j(t) is incremented by one whenever client j wants to retrieve a video frame from its buffer, but does not find a complete frame in its buffer. Let I_j(t), j = 1, ..., J, denote the number of bits of stream j that have missed their playout deadline up to time t. We let b_j(t), j = 1, ..., J, denote the number of bits in the prefetch buffer of client j at time t (and note that b_j(0) = 0 for all clients j). Note that the buffer constraint b_j(t) ≤ B(j) must be satisfied for all clients j, j = 1, ..., J, for all times t. Although the case of limited prefetch buffer capacities is more realistic, for ease of analysis we initially consider the case of unlimited prefetch buffer capacities (or sufficiently large buffer capacities).
We define the frame loss (starvation) probability of client j as

P_F(j) = lim_{t→∞} F_j(t) / [N_j(t) + F_j(t)].   (1)

Similarly, we define the information loss probability of client j as

P_I(j) = lim_{t→∞} I_j(t) / [B_j(t) + I_j(t)].   (2)

We define the average frame loss probability as P_F = (1/J) Σ_{j=1}^{J} P_F(j) and the average information loss probability as P_I = (1/J) Σ_{j=1}^{J} P_I(j).
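The loss metrics defined above can be sketched in code. This is a minimal illustration under our own hypothetical names (`transmitted_frames`, `lost_frames`, and so on) for the per-client counters of this section, evaluating the ratios at a finite time rather than in the limit:

```python
# Illustration of the loss metrics of Section II: given per-client counters of
# transmitted/lost frames and bits, compute the frame loss probability, the
# information loss probability, and their averages over the J clients.

def frame_loss_probability(transmitted_frames, lost_frames):
    """Fraction of a client's frames that missed their playout deadline."""
    total = transmitted_frames + lost_frames
    return lost_frames / total if total > 0 else 0.0

def information_loss_probability(transmitted_bits, lost_bits):
    """Fraction of a client's bits that missed their playout deadline."""
    total = transmitted_bits + lost_bits
    return lost_bits / total if total > 0 else 0.0

def average(probabilities):
    """Average loss probability over all J clients."""
    return sum(probabilities) / len(probabilities)

# Two clients: one lost 5 of 100 frames, the other lost none.
p1 = frame_loss_probability(95, 5)
p2 = frame_loss_probability(100, 0)
print(average([p1, p2]))  # 0.025
```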
III. PREFETCHING FRAMEWORK AND PRIORITY FUNCTIONS
In our effort to study the problem of scheduling video frame
transmissions for the collaborative prefetching of continuous
media in continuous time, we first present a generic prefetching
framework. This framework is divided into two basic categories
according to the capacities of the clients: unlimited buffer capacities and limited buffer capacities. We use the unlimited buffer
capacity scenario to study the effects of the limited network ca-
pacity on the video prefetching. With limited prefetch buffer
capacities, we can study the effects of both the limited net-
work capacity and the limited client buffer capacity. In our algo-
rithm-theoretic analysis of the proposed prefetching algorithms
we consider the unlimited prefetch buffer capacity, whereas in
our simulations we examine the effect of limited prefetch buffer
capacities.
In this section, we first present the generic scheduling frameworks for the continuous-time model and subsequently introduce the different priority functions that are employed within the frameworks. Before we introduce the scheduling framework and priority functions, we briefly discuss the difficulties of finding an optimal solution to the continuous-time prefetching of continuous media. The objectives of a scheduling algorithm for the
continuous-time model are (i) to minimize the total number of
lost frames (or minimize the total number of lost bits) and (ii)
to treat the clients fairly. If we consider only objective (ii), fair-
ness among clients, then it can be easily achieved by sending one
frame per stream. However, this problem becomes considerably
more involved when we consider the first objective: If we only
try to maximize the number of transmitted frames, then there is
an analogy between this problem and the standard job sched-
uling problem on a single processor. The standard job sched-
uling problem is defined as follows. There is a stream of tasks. A task may arrive at any time and is assigned a value that reflects its importance. Each task has an execution time that rep-
resents the amount of time required to complete the task, and a
deadline by which the task is to complete execution. The goal
is to maximize the sum of the values of the tasks completed by
Fig. 2. Prefetch algorithm framework for case of unlimited client prefetch buffer capacity.
Fig. 3. Prefetch algorithm framework for case of limited client prefetch buffer capacity.
their deadlines. The analogy between the two scheduling problems is that the J streams can be viewed as one stream of tasks where some of the tasks (frames) have the same deadline. Each frame is assigned a value equal to one, and an execution time that depends on the frame size. The off-line version of this problem, where the time constraints of all tasks are given as input, is known to be NP-hard [31]. A number of studies have proposed on-line heuristics to handle situations when the system is overloaded [31]–[33]. It is known that the earliest deadline first policy gives the optimal solution when the system is underloaded [34].
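The earliest-deadline-first behavior cited above can be illustrated with a small sketch for the single-processor analogy. The task tuples and function name are our own, not from the paper; tasks are all released at time 0 and each runs to completion:

```python
# A minimal earliest-deadline-first (EDF) sketch: tasks are
# (execution_time, deadline) pairs, all released at time 0. EDF serves the
# pending task with the earliest deadline; we count how many finish on time.

def edf_completed(tasks):
    """Run tasks in deadline order; return how many finish by their deadlines."""
    now, done = 0.0, 0
    for exec_time, deadline in sorted(tasks, key=lambda t: t[1]):
        now += exec_time
        if now <= deadline:
            done += 1
    return done

# An underloaded instance: every task meets its deadline under EDF.
tasks = [(1.0, 4.0), (2.0, 3.0), (0.5, 6.0)]
print(edf_completed(tasks))  # 3
```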
A. Generic Prefetching Frameworks
Fig. 2 describes the generic prefetch algorithm framework for
the case of unlimited prefetch buffer capacity. In this case the
only consideration is the discrete deadline of each frame. The
priority of a stream will be defined by a priority function, as detailed shortly. In the algorithm framework in Fig. 2, we neglect the masking of the busy-with-scheduling state, i.e., we assume that the schedule computation takes negligible time. If this time is significant, the algorithm execution would need to start earlier such that it is completed when the server completes the current frame transmission at time t. The status of the various counters of numbers of transmitted or lost bits or frames at time t is known at this earlier time when the algorithm starts executing and can be used in the algorithm's calculations.
In the case of limited client prefetch buffer space, the server must be aware of the clients' prefetch buffer occupancies to prevent loss caused by an overflow at the client side. Hence, the server cannot schedule frames for a client whose prefetching buffer is full even
though network capacity is available, as detailed in the algo-
rithm framework in Fig. 3.
Importantly, the outlined generic prefetching frameworks
collaboratively consider all ongoing streams by computing the priorities p_j(t) for all streams j, j = 1, ..., J, and selecting the stream with the highest priority for frame transmission.
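The collaborative selection step described above can be sketched as follows. This is a simplified single-link model with our own identifiers and data layout, not the paper's Fig. 2 pseudocode; the priority function is a pluggable parameter, as in the framework:

```python
# A simplified sketch of the generic continuous-time prefetching loop
# (unlimited client buffers): when the link becomes free at time t, drop
# head-of-line frames that can no longer meet their deadline, then transmit
# the lowest indexed frame of the stream ranked highest by the priority.

def schedule_next(streams, t, link_rate, priority):
    """Pick (stream_id, frame_size) to transmit at time t, or None."""
    for s in streams:
        # A frame is dropped if its transmission cannot finish by its deadline.
        while s["frames"] and t + s["frames"][0]["size"] / link_rate > s["frames"][0]["deadline"]:
            s["lost"] += 1
            s["frames"].pop(0)
    candidates = [s for s in streams if s["frames"]]
    if not candidates:
        return None
    best = max(candidates, key=priority)  # collaborative selection across all streams
    frame = best["frames"].pop(0)
    best["sent"] += 1
    return best["id"], frame["size"]

# Example: with a least-transmitted-frames priority, stream 1 is served first.
streams = [
    {"id": 0, "frames": [{"size": 10, "deadline": 5}], "sent": 2, "lost": 0},
    {"id": 1, "frames": [{"size": 20, "deadline": 5}], "sent": 0, "lost": 0},
]
print(schedule_next(streams, 0.0, link_rate=10, priority=lambda s: -s["sent"]))  # (1, 20)
```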
B. Priority Functions
Within the presented algorithm framework we conduct an extensive study of prefetching algorithms that use different priority functions p_j(t). We broadly categorize the priority functions into being based on the number of video frames or the number of video information bits. The aim of the frame-based priority function is primarily to minimize the frame loss probability P_F, whereas the bit-based priority functions aim at minimizing the information loss probability P_I.
In the remainder of this section we introduce the intuition be-
hind the different priority functions, which are summarized in
Table II, and give a brief overview of their performance. The in-
dividual prefetching algorithms obtained by employing the dif-
ferent priority functions within the prefetching framework are
formally specified and analyzed in Sections V and VI and evaluated through simulations in Section VII.
1) Frame-Based Priority Function:
Transmitted Frames (TF): If we maximize the number of transmitted frames, we would expect to minimize frame loss. Therefore, we propose a greedy local algorithm that prioritizes the streams based on the number of frames transmitted so far, i.e., according to N_j(t). With the TF priority function, the stream with the least number of transmitted frames has the highest priority. This algorithm strives to equalize the number of lost frames among the clients. This algorithm has a good approximation ratio when compared to an optimal offline algorithm when considering long streams. We found from our simulations that this algorithm produces essentially the same frame loss probability for all clients. However, we have observed that the information loss probabilities may vary slightly among clients.
2) Bit-Based Priority Functions: Transmitted Bits:
Normalized Transmitted Bits (NTB): This scheduling policy is designed to minimize the loss probabilities by maximizing the number of normalized transmitted bits, i.e., it uses the priority function p_j(t) = B_j(t)/x̄_j, where x̄_j denotes the average frame size of stream j, whereby the stream with the
TABLE II
SUMMARY OF CONSIDERED PRIORITY FUNCTIONS
smallest p_j(t) has the highest priority. One shortcoming of a priority function based on the average frame size is that it may not accurately reflect the actual average data rate: x̄_j represents the average frame size of the entire video stream. However, the average frame size in the part of the stream that has been transmitted in the recent past and is currently considered for transmission may differ from the average frame size over the entire stream. To overcome this problem we propose the following alternative.
Ratio of Transmitted Bits (RTB): This priority function is based on the ratio of the number of transmitted bits to the total number of bits that have been considered for scheduling so far, i.e., p_j(t) = B_j(t)/[B_j(t) + I_j(t)]. This priority function is designed to minimize the loss probabilities by maximizing the number of transmitted bits.
Weighted Normalized Transmitted Bits (WNTB): The drawback of the preceding bit-based priority functions is that the frame deadlines are not directly taken into consideration, and hence they may cause increased information loss by prefetching frames far in the future rather than scheduling an imminent frame. As a consequence, unnecessary losses may occur and overall performance may be degraded. Hence, we propose a weighing of the number of transmitted bits by multiplying it with the number of frames currently in the corresponding prefetch buffer, denoted by q_j(t), i.e., p_j(t) = q_j(t) · B_j(t)/x̄_j. If a client has some frames in its prefetch buffer, it can display frames without starvation for some time. In contrast, an empty prefetch buffer implies that the first frame of the corresponding stream should be immediately transmitted to avoid starvation. This WNTB priority function ensures that even clients with a large number of successfully transmitted bits can still be chosen by the algorithm for transmission if they are close to starvation.
Weighted Ratio of Transmitted Bits (WRTB): This priority function combines the weighing with the normalization by the actual number of bits considered for scheduling, i.e., it uses the priority function p_j(t) = q_j(t) · B_j(t)/[B_j(t) + I_j(t)].
3) Bit-Based Priority Functions: Lost Bits: To ensure an ex-
tensive evaluation of bit-based priority functions we also con-
sider priority functions employing the number of lost bits. More
specifically, we consider the lost-bits-based counterparts of the
NTB, RTB, and WRTB policies.
Normalized Lost Bits (NLB): The NLB priority function considers the amount of lost bits normalized by the average frame size of the video stream, i.e., p_j(t) = I_j(t)/x̄_j. The stream with the largest accumulated normalized lost bits has the highest priority.
Ratio of Lost Bits (RLB): The RLB priority function is based on the ratio of the number of lost bits to the total number of bits that have been considered for scheduling so far, i.e., p_j(t) = I_j(t)/[B_j(t) + I_j(t)]. The stream with the largest ratio has the highest priority.
Weighted Ratio of Lost Bits (WRLB): This priority function combines a weighing by the number of frames currently in the prefetch buffer, analogous to WNTB, with the RLB priority, so that streams whose clients have few buffered frames receive a correspondingly higher priority.
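The priority functions of this subsection can be sketched as small scoring functions. The code follows the verbal descriptions above under our own naming (`sent_bits`, `lost_bits`, `avg_frame_size`, `buffered_frames` for the counters of Section II) and flips signs so that a larger score always means higher priority; the formulas in the paper's Table II are authoritative:

```python
# Hedged sketches of the bit-based priority functions of Table II. Each
# function scores one stream; larger score = higher priority, so a single
# max-selection works for all of them.

def ntb(s):   # Normalized Transmitted Bits: fewest normalized sent bits first
    return -s["sent_bits"] / s["avg_frame_size"]

def rtb(s):   # Ratio of Transmitted Bits: lowest sent/(sent+lost) ratio first
    total = s["sent_bits"] + s["lost_bits"]
    return -s["sent_bits"] / total if total else 0.0

def wntb(s):  # Weighted NTB: weight by frames already buffered at the client
    return -s["buffered_frames"] * s["sent_bits"] / s["avg_frame_size"]

def nlb(s):   # Normalized Lost Bits: most normalized lost bits first
    return s["lost_bits"] / s["avg_frame_size"]

def rlb(s):   # Ratio of Lost Bits: highest lost/(sent+lost) ratio first
    total = s["sent_bits"] + s["lost_bits"]
    return s["lost_bits"] / total if total else 0.0

streams = [
    {"sent_bits": 900, "lost_bits": 100, "avg_frame_size": 10, "buffered_frames": 2},
    {"sent_bits": 500, "lost_bits": 0,   "avg_frame_size": 10, "buffered_frames": 0},
]
print(max(range(2), key=lambda j: rlb(streams[j])))  # 0: the larger loss ratio
```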
IV. TIME COMPLEXITY OF GENERIC PREFETCH ALGORITHM
FRAMEWORK
In this section we analyze the computing time complexity of
the generic prefetch algorithm framework. It suffices to analyze the time complexity of the generic algorithm, since all the proposed
prefetch algorithms follow the basic framework of the generic
algorithm but use different priority functions, each of which can
be computed in constant time.
In the following, we compute the time taken for each step in
the algorithm frameworks presented in Figs. 2 and 3, thus deter-
mining the time complexity of the algorithm used for scheduling
the next frame for transmission. Since it takes only a constant
time to compute the priority of each stream (or client) at Step
Fig. 4. The least Transmitted Frames (TF) prefetch algorithm.
1 for all J streams, it takes O(J) time to compute the current priorities for all streams. Finding the stream with the highest priority takes O(J) time at Step 2, and checking whether the first frame of the selected stream satisfies the deadline constraint takes another constant time. In a naive implementation of the algorithms, we may need many iterations until we find a frame to be scheduled that can meet its deadline.
We can optimize the algorithms by initially checking the deadline constraint for all streams and dropping any frames whose deadline is smaller than t, before we compute the priority of each stream. Note that according to this implementation, there cannot be any undropped frames with deadline smaller than t at time t (since the size of the previously scheduled frame is at most R·Δ, ensuring that it can be transmitted within one frame period) and hence the added complexity for checking the deadlines of the unscheduled frames is O(J), since the number of frames to be checked per stream is a constant. The overall complexity of the algorithms would then be O(J) for one execution of the continuous-time prefetch algorithm, which decides on the scheduling of a video frame.
The computational complexity of the common discrete-time
prefetch algorithms is given in terms of the computational effort
required to compute the video frame transmission schedule for
a frame period of length Δ. In order to facilitate the comparison of discrete-time and continuous-time prefetch algorithms, we characterize the computational complexity of the continuous-time prefetch framework for a frame period as follows. We let x_min denote the smallest frame among the ongoing streams and note that at most ⌈R·Δ/x_min⌉ frames can be scheduled per frame period. Hence, the worst-case computational complexity of the continuous-time prefetching framework is O(J·R·Δ/x_min) for computing the schedule for a frame period. The per-frame-period complexities of the discrete-time JSQ [28], DC [21], and BP [27] algorithms are of a comparable order. Thus, continuous-time prefetching has comparable computational complexity to discrete-time prefetching.
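The per-frame-period bound above can be checked with a short worked example; the numeric values are illustrative only, not from the paper:

```python
# Worked example of the Section IV bound: at most ceil(R * Delta / x_min)
# frames fit in one frame period, so one period costs on the order of
# J * R * Delta / x_min priority evaluations in the worst case.
import math

def max_frames_per_period(link_rate_bps, frame_period_s, smallest_frame_bits):
    """Upper bound on the number of frames schedulable within one frame period."""
    return math.ceil(link_rate_bps * frame_period_s / smallest_frame_bits)

# 10 Mbit/s link, 33 ms (NTSC) frame period, 5 kbit smallest frame:
print(max_frames_per_period(10_000_000, 0.033, 5_000))  # 66
```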
V. SPECIFICATION AND ANALYSIS OF TF ALGORITHM
In this section we specify in detail and analyze the prefetch
algorithm obtained by employing the frame-based TF priority
function in the algorithm framework. The resulting least Trans-
mitted Frames (TF) prefetch algorithm schedules frames as follows: At time t, from among all ongoing streams j = 1, ..., J, pick the streams that have the minimum number of transmitted frames N_j(t). If there is more than one stream with the minimum N_j(t), then pick the stream that has the frame with the earliest deadline and schedule this frame. If there is still a tie, then pick the stream that has the smallest frame and schedule its frame. The TF prefetch algorithm has the primary goal of achieving small frame loss probabilities and is specified in Fig. 4.
The TF algorithm defines the priority of stream j at time t as p_j(t) = N_j(t) + F̂_j(t). For the convenient calculation of this priority we define the counter F̂_j(t) to be the number of video frames of stream j that have already missed their playout deadlines up to time t, including the frames that have not yet been identified and dropped by the algorithm. This counter, which is maintained as specified in Fig. 4, is necessary since the number of dropped frames F_j(t) is updated only when a client wants to retrieve a frame from its buffer. Frames that are still in the server but cannot be delivered to the corresponding clients by their deadlines are hence not necessarily reflected by the counter F_j(t). The equivalence F̂_j(t) = l_j(t) − 1 − N_j(t) holds as long as the frames for each stream are transmitted in their deadline order, which is the case for our prefetch policies.
With the TF prefetch algorithm, the stream with the smallest value of N_j(t) + F̂_j(t) has the highest priority. Note that if the value of N_j(t) + F̂_j(t) is the same for all the streams, then the stream with the most frame losses has the highest priority. If any two streams, say streams i and j, have the highest priority, then the TF algorithm selects the stream whose lowest indexed frame has the earlier deadline. At the end of the TF algorithm, one stream is selected to schedule and transmit its lowest indexed frame.
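The TF selection rule with its two tie-breaks can be sketched as a single lexicographic key comparison. The field names are our own and the paper's Fig. 4 gives the authoritative pseudocode:

```python
# A sketch of the TF selection rule: pick the stream with the fewest
# transmitted frames, break ties by earliest head-of-line deadline, and break
# remaining ties by smallest head-of-line frame.

def tf_select(streams):
    """Return the stream chosen by the TF priority with its tie-breaks."""
    return min(
        (s for s in streams if s["frames"]),
        key=lambda s: (s["sent"], s["frames"][0]["deadline"], s["frames"][0]["size"]),
    )

streams = [
    {"id": 0, "sent": 3, "frames": [{"deadline": 4, "size": 8}]},
    {"id": 1, "sent": 2, "frames": [{"deadline": 5, "size": 9}]},
    {"id": 2, "sent": 2, "frames": [{"deadline": 5, "size": 7}]},
]
print(tf_select(streams)["id"])  # 2: fewest sent, tied deadline, smaller frame
```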
A. Fairness Properties of TF Prefetch Algorithm
The following two claims show the fairness of the proposed
TF algorithm.
Lemma 1: At any time t, max_j N_j(t) − min_j N_j(t) ≤ 1.
Proof: Suppose that at time t, max_j N_j(t) − min_j N_j(t) ≥ 2. Let k = arg max_j N_j(t), and analogously let i = arg min_j N_j(t). For convenience we drop the time argument in the following. Since N_i is smaller than N_k, stream i would be considered prior to stream k. Suppose that frame l_i of stream i misses its deadline. Then, stream i still has a lower priority value than stream k, so that stream i would again be considered prior to stream k. When stream i is considered again, stream i schedules frame l_i at this time. Now, streams i and k have the same priority, so stream k can never get two or more transmitted frames ahead of stream i, a contradiction.
The above lemma shows that the proposed algorithm mini-
mizes the difference between the maximum number of trans-
mitted frames and the minimum number of transmitted frames.
We define the class of fair algorithms as the set of algorithms for which Lemma 1 holds.
Corollary 1: At any time t, max_j F_j(t) − min_j F_j(t) ≤ 1.
Proof: Let k = arg max_j F_j(t) and i = arg min_j F_j(t) at time t. Suppose that F_k(t) − F_i(t) ≥ 2 at time t, where F_k(t') − F_i(t') ≤ 1 for t' < t. (Let us assume that only one frame has been transmitted during the time period (t', t].) I.e., during (t', t], stream k drops a frame while stream i does not drop any frame. There are two cases to be considered.
Case 1) Stream i is considered prior to stream k during (t', t]. Since the deadline of the pending frame of stream k is later than that of the pending frame of stream i, stream k does not drop a frame during (t', t], a contradiction.
Case 2) Stream k is considered prior to stream i during (t', t]. We divide this case into two sub-cases: (i) N_k(t') < N_i(t') and (ii) N_k(t') ≥ N_i(t'). In sub-case (i), the numbers of transmitted frames would differ by two or more, which violates Lemma 1. In sub-case (ii), if the pending frame of stream k misses its deadline, so does the pending frame of stream i, a contradiction.
Hence the corollary holds.
Lemma 1 and Corollary 1 taken together show that the TF
prefetching algorithm distributes the frame losses evenly among
clients. An algorithm for which Lemma 1 holds, but for which
Corollary 1 does not hold, could treat clients unfairly by drop-
ping more frames for some clients than for other clients while
maintaining the number of transmitted frames equalized across
all clients.
B. Efficiency Properties of TF Prefetch Algorithm
We define a round to be a time interval during which each stream schedules and transmits exactly one frame using the TF algorithm. More specifically, we define the round such that all streams have transmitted the same number of frames at the beginning of a round, and each stream has increased its number of transmitted frames by one at the end of the round. Intuitively,
if we minimize the time required to complete each round then
we minimize starvation. Without loss of generality, we suppose
that one round started at time 0 and it took time to com-
plete this round. If , then all streams would transmit their frames, exactly one frame per stream, without any frame losses. If , where  is an integer, then each stream
that transmits its frame at time will experience exactly
frame losses, and each stream that transmits its frame at
time will experience exactly frame losses. Intuitively,
any fair algorithm must proceed in rounds. Otherwise, Lemma 1
would be violated. We label the rounds according to the number
of transmitted frames at the start of the round, i.e., round starts
when one of the streams has transmitted frames (while the
other streams still have transmitted frames), and round
ends when all streams have transmitted exactly frames.
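The relation between a stream's frame-completion time within a round and its frame losses described above can be expressed as a small helper, assuming playout deadlines at integer multiples of the frame period T and the round starting at time 0 (a hedged reading of the text, since the exact symbols were lost):

```python
import math

def frames_lost(completion_time, T=1.0):
    """Deadline misses for a stream whose frame in the current round
    finishes at `completion_time` (round starts at 0, one playout
    deadline every T). A frame completing within the first frame period
    meets its deadline; each additional full period adds one miss.
    """
    return max(0, math.ceil(completion_time / T) - 1)
```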
We analyze the amount of time required to complete one
round. We define  to be the maximum value of the ratio of the largest frame size to the smallest frame size over all streams, i.e., . We show that the proposed TF algorithm is an -approximation on minimizing the time required to complete a round. More specifically, we
compare round for our proposed algorithm and an optimal al-
gorithm. Assume that is the index of the scheduled frame
for stream in this round using the proposed algorithm. Let
denote the time required to transmit frame , i.e.,
. Then the total time taken to complete this round is
. Let be the index of the scheduled frame for
stream that minimizes the total time for this round, i.e., the
optimal solution for this round is . If no stream
drops its frame during this round, then . Let be the
set of streams such that . Note that
since . Thus,
(3)
(4)
(5)
(6)
(7)
(8)
which proves the following theorem.
Fig. 5. The NTB algorithm.
Theorem 1: The TF prefetch algorithm is an -approxima-
tion on minimizing the time required to complete a round, where
is the maximum ratio of the largest frame size to the smallest
frame size from the ongoing streams.
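A sketch of how the approximation bound of Theorem 1 can be obtained, assuming t(f) = x_f / C is the transmission time of a frame of size x_f over a link of capacity C, f_j and f_j^* are the frames scheduled for stream j by TF and by the optimal algorithm, and S is the set of streams with f_j differing from f_j^*; this reconstructs the spirit of the chain (3)–(8), not necessarily its exact steps:

```latex
\sum_{j} t(f_{j})
  \;=\; \sum_{j \notin S} t(f_{j}^{*}) \;+\; \sum_{j \in S} t(f_{j})
  \;\le\; \sum_{j \notin S} t(f_{j}^{*}) \;+\; \sum_{j \in S} R\, t(f_{j}^{*})
  \;\le\; R \sum_{j} t(f_{j}^{*})
```

where the middle inequality uses that any two frames of the same stream differ in size by at most a factor of R.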
Theorem 2: Any fair algorithm requires at most time to
complete a round with high probability.
Proof: We naturally assume that the capacity of the link
is large enough to accommodate the sum of the average frame
sizes, i.e., that . We also assume that the probability that the traffic from the ongoing streams exceeds the link capacity is less than a small constant , i.e.,
(9)
where is the size of an arbitrary frame of stream .
We recall that we have defined a unit of time to be the frame period with length . For example, one time unit starting at time
is the interval and two time units starting at time
is the interval and so on. We show that any fair
algorithm completes one round in time units with high
probability as follows.
At least one stream could not send
its frame in units
At least one stream could not send
its frame in the first time unit
At least one stream could not send
its frame in the second time unit
At least one stream could not send
its frame in the -th time unit
Hence, with high probability it takes at most time to send
exactly one frame per stream. This analysis holds for any sched-
uling algorithm as long as the algorithm is fair.
The preceding results imply that we have at most
frame losses per client during one round with high proba-
bility. At any time our algorithm has sent at least
frames per client and an optimal algorithm has sent at most
frames per client. Thus, the number of transmitted frames for
each client according to the optimal algorithm is at most
times larger than the number of frames transmitted according to
our algorithm.
VI. SPECIFICATION AND ANALYSIS OF TRANSMITTED
BIT-BASED ALGORITHMS
In this section we consider the prefetch algorithms obtained
with the priority functions based on the number of trans-
mitted bits in the prefetch algorithm framework. The resulting
bit-based prefetching algorithms have the primary goal to
achieve small information loss probabilities.
A. Normalized Transmitted Bits (NTB) Prefetch Algorithm
We define  to be the normalized number of transmitted
bits for stream up to time , i.e.,
(10)
We rewrite the information loss probability of stream as
(11)
(12)
(13)
where (12) follows by noting that is approximately equal
to the total number of bits in stream up to time (i.e., for large
: ). Hence maximizing minimizes
the information loss probability.
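A sketch of the argument behind (11)–(13), under the assumptions that b_j(t) denotes the bits transmitted to stream j by time t, \bar{x}_j its mean frame size, T the frame period, and r_j(t) = b_j(t)/\bar{x}_j:

```latex
P^{\text{loss}}_{j}(t)
  \;=\; 1 \;-\; \frac{b_{j}(t)}{\text{total bits of stream } j \text{ up to time } t}
  \;\approx\; 1 \;-\; \frac{b_{j}(t)}{(t/T)\,\bar{x}_{j}}
  \;=\; 1 \;-\; \frac{T}{t}\, r_{j}(t)
```

so, for a fixed t, maximizing r_j(t) minimizes the information loss probability, as stated.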
We propose the least normalized transmitted bit (NTB)
prefetch algorithm that selects the next frame to be transmitted
at time by selecting the stream with , i.e., we
employ the priority function , as
detailed in Fig. 5. With the NTB algorithm, the stream with the
smallest has the highest priority. If any two streams,
say stream and , have the same smallest , then the
NTB policy selects the stream with the earlier playout deadline
. If there is again a tie, it selects the stream with the largest
frame.
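A sketch of the NTB selection rule, under the assumption that the normalization constant is the stream's long-run mean frame size; the dict field names are illustrative, not the paper's notation:

```python
def select_stream_ntb(streams):
    """NTB: serve the stream with the least normalized transmitted bits.

    `streams` is a list of dicts with (hypothetical) keys:
      'bits'      - bits transmitted to this stream so far
      'mean_size' - long-run average frame size (normalization constant)
      'deadline'  - playout deadline of the next pending frame
      'next_size' - size of the next pending frame
    Ties: earlier playout deadline first, then the larger next frame.
    """
    def key(j):
        s = streams[j]
        return (s['bits'] / s['mean_size'], s['deadline'], -s['next_size'])
    return min(range(len(streams)), key=key)
```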
Fig. 6. Illustration of the difference between max r(j) and min r(j) for proof of Lemma 2.
1) Fairness Properties of NTB Prefetch Algorithm:
Lemma 2: The NTB algorithm satisfies at any time , , where .
Proof: At the beginning of transmission (i.e., at time 0),
for all streams and thus the claim holds initially.
Suppose that after the -th frame transmission the claim holds.
Also suppose that the -th frame transmission starts at time
and the transmission completed at time . From the algorithm
it follows that
(14)
From the induction hypothesis, we have that
(15)
(16)
as illustrated in Fig. 6. From (14) and (16),
. Hence, at any time the claim holds.
2) Efficiency Properties of NTB Prefetch Algorithm: Suppose that there exists an optimal algorithm that maximizes
the total number of transmitted normalized bits while the
transmitted normalized bits among clients are even. Let be
the total number of transmitted normalized bits to all clients
up to time using algorithm . Then each client has re-
ceived roughly normalized bits and has received roughly
actual bits up to time . Since we are fully utilizing
the bandwidth, the total amount of actual bits that our algorithm
has transmitted up to time is the same as the total amount of
actual bits that algorithm has transmitted. Now, we claim
that at time , of our proposed NTB algorithm is no
less than , where .
Suppose that , say
for some positive number . Since we must
have ,
which violates Lemma 2 and in turn proves the following the-
orem.
Theorem 3: At time , of the NTB prefetch algo-
rithm is no less than , where is the total number
of transmitted normalized bits up to time using an optimal al-
gorithm.
B. Ratio of Transmitted Bits (RTB) Prefetch Algorithm
We rewrite the information loss probability of stream as
follows
(17)
(18)
Hence, maximizing the ratio of the number of transmitted bits
to the total number of bits minimizes the information loss prob-
ability of each stream. In this spirit, we propose the least Ratio
of Transmitted Bits (RTB) prefetch algorithm that schedules a
frame of the stream with smallest
for the next transmission.
The RTB policy is the same as the NTB policy, except for the way we define the denominator of the priority function. The RTB
policy uses the ratio of the number of transmitted bits to the
total number of bits to be transmitted up to time while the
NTB policy uses the normalized transmitted bits. Initially, we
set for all . Let be multiplied by current
time . Then
when is large. This implies that for a sufciently large , the
priority of RTB is almost the same as that of NTB. Hence we ex-
pect that the RTB would perform as well as the NTB. In fact, our
simulation results indicate that the information loss probability
of RTB is typically lower than that of NTB. This appears to be
due to the fact that the RTB policy relies on the actual number
of transmitted bits, instead of the average number of transmitted
bits over long time horizons.
C. Weighted Normalized Transmitted Bits (WNTB) Prefetch
Algorithm
The weighted normalized transmitted bits (WNTB) algorithm
combines the NTB policy with a weighing according to the
number of prefetched frames. The WNTB priority function is
obtained by multiplying with the number
of prefetched frames, i.e.,
The WNTB algorithm schedules a frame for the stream with the
smallest for the next transmission. The priority function
of this hybrid algorithm considers the playout deadlines of the
frames in addition to the amount of information that has been
transmitted so far. A stream without prefetched frames (i.e.,
) has the highest priority. However, if more than one client has zero prefetched frames, then the client that has received fewer bits should have higher priority. In order to permit this differentiation, we use  for the weighing instead of . Similar reasoning leads to the weighted ratio of trans-
mitted bits (WRTB) prefetch algorithm.
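The WNTB behavior described above (zero-prefetch clients first, fewer received bits breaking ties among them) can be sketched with a lexicographic key. The exact weighting expression is not reproduced here, so this is one consistent reading, with illustrative field names:

```python
def wntb_key(bits, mean_size, prefetched_frames):
    """Lexicographic WNTB priority key: the smallest key is served next.

    The primary term weighs the normalized transmitted bits by the number
    of prefetched frames, so any client with zero prefetched frames gets
    primary value 0 (highest priority). The secondary term then favors
    the client that has received fewer normalized bits. This is one
    plausible reading of the WNTB description, not the paper's formula.
    """
    r = bits / mean_size  # normalized transmitted bits
    return (r * prefetched_frames, r)

def select_stream_wntb(streams):
    """Pick the stream with the smallest WNTB key; `streams` holds dicts
    with hypothetical keys 'bits', 'mean_size', and 'prefetched'."""
    return min(range(len(streams)),
               key=lambda j: wntb_key(streams[j]['bits'],
                                      streams[j]['mean_size'],
                                      streams[j]['prefetched']))
```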
We conclude this section on the algorithm-theoretic analysis
of the transmitted bits based prefetch algorithms by noting that
TABLE III
VIDEO FRAME SIZE STATISTICS: PEAK TO MEAN RATIO, STANDARD DEVIATION (IN BIT), AND COEFFICIENT OF VARIATION (STANDARD DEVIATION NORMALIZED BY MEAN FRAME SIZE). AVERAGE BIT RATE IS 64 kbps FOR ALL STREAMS
to the best of our knowledge the lost bits based prefetch algo-
rithms are theoretically largely intractable.
VII. SIMULATION RESULTS
In this section we evaluate the proposed prefetch algorithms
through simulations using traces of MPEG-4 encoded video.
All used traces correspond to videos with a frame rate of 25
frames per second, i.e., the frame period is .
The simulation implements the continuous-time model specified in Section II with frame playout deadlines that are discrete
integer multiples of the frame period . The sched-
uling decisions are made on a continuous-time basis, driven by
the completion of the individual frame transmissions. That is, a
new frame transmission is scheduled to commence as soon as
the current frame transmission ends, irrespective of the discrete
frame playout deadlines. The peak-to-mean ratios, standard deviations, and coefficients of variation (standard deviation normalized by mean frame size) of the frame sizes of the video
traces used for the simulations are given in Table III. To study
the effects of video bit rate variability on the prefetch algorithms
we transformed the original video traces to obtain high vari-
ance video traces. This transformation increased the frame sizes
of large frames and decreased the sizes of small frames while
keeping the average bit rate fixed at 64 kbps. Throughout, we refer to the original video traces as the low variance video traces,
and the transformed video traces as the high variance video
traces. To simulate a mix of different video lengths (run times),
we generated for each video a set of 21 trace segments con-
sisting of the full-length (108,000 frame) trace, the two halves
of the full-length trace, the three thirds, the four quarters, the
five fifths, and the six sixths of the full-length trace. Each segment was scaled to have an average bit rate of 64 kbit/sec.
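The variance-increasing transformation (inflate large frames, shrink small ones, keep the average bit rate fixed) is not specified in detail in the text; a minimal sketch, assuming a simple linear scaling of deviations about the trace mean followed by renormalization:

```python
def rescale_variance(frame_sizes, alpha=1.5):
    """Increase frame-size variability while preserving the mean.

    Scales each frame's deviation from the trace mean by `alpha` (> 1
    inflates large frames and shrinks small ones), clips at a small
    positive floor, and renormalizes so the average bit rate is
    unchanged. One plausible sketch, not the paper's exact procedure.
    """
    mean = sum(frame_sizes) / len(frame_sizes)
    scaled = [max(1.0, mean + alpha * (s - mean)) for s in frame_sizes]
    new_mean = sum(scaled) / len(scaled)
    return [s * mean / new_mean for s in scaled]
```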
For a detailed comparison of the different proposed continuous-time prefetch algorithms, we first conduct steady-state simulations where all clients start at time zero with empty
prefetch buffers and there are always streams in progress,
all of which are collaboratively considered by the evaluated
prefetching mechanisms. Each client selects uniformly ran-
domly a video title, one of the trace segments for the selected
title, and a uniform random starting phase into the selected
trace segment. The entire trace segment from the starting
phase frame to the frame preceding the starting phase frame
is streamed once (with wrap-around at the end of the trace
segment). When a stream terminates, i.e., the last frame of the
stream has been displayed at the client, the corresponding client
immediately selects a new video stream, i.e., random video title,
trace segment, and starting phase and begins transmitting the
new stream, whereby the prefetch buffer is initially empty. We
estimate the frame loss probabilities and the information (bit)
loss probabilities and their 90% confidence intervals for the individual clients after a warm-up period using the method of batch means. All simulations are run until the 90% confidence
intervals are less than 10% of the corresponding sample means.
For ease of discussion of the numerical results we normalize
the capacity of the bottleneck link by the 64 kbit/sec average bit
rate of the streams and denote this normalized capacity by .
Note that , where is in units of bit/
and is the frame period in seconds.
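The client behavior in the steady-state simulations (random title, random segment of that title, random starting phase, streamed once with wrap-around) can be sketched as follows; the `titles` mapping and the use of frame-size sequences are illustrative assumptions:

```python
import random

def next_stream(titles, rng):
    """Steady-state client behavior: pick a uniformly random title, a
    random trace segment of that title, and a random starting phase;
    the segment is then played once from that phase with wrap-around.

    `titles` maps (hypothetically) title -> list of trace segments,
    each segment a list of frame sizes. Returns the frame-size
    sequence in transmission order.
    """
    title = rng.choice(sorted(titles))
    segment = rng.choice(titles[title])
    phase = rng.randrange(len(segment))
    return segment[phase:] + segment[:phase]
```

When the returned sequence is exhausted, the client immediately draws a new title, segment, and phase, with an initially empty prefetch buffer.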
A. Impact of Prefetch Buffer Size and Traffic Variability on
Continuous-Time Prefetch Algorithms
In this section we examine the effect of client prefetch buffer
size and the effect of video traffic variability on the proposed continuous-time prefetch algorithms. We consider supporting  streams, each with buffer capacity , over a bottleneck link of capacity . Table IV shows the average frame loss probability and the average information loss probability
achieved by the different algorithms for different prefetch
buffer sizes for both low and high variability video streams.
We observe that generally the loss probability decreases as the
buffer size increases and is smaller for the low variance streams.
With a larger buffer, the clients can build up larger prefetched reserves, making starvation less likely. Also, the frame sizes of the low variance streams are more uniform, making them generally easier to schedule.
Examining more closely the loss probabilities for the different proposed prefetch algorithms, we first observe that for
TABLE IV
AVERAGE LOSS PROBABILITIES OF CONTINUOUS TIME PREFETCHING ALGORITHMS FOR DIFFERENT PREFETCH BUFFERS B = 32, 64, 128, AND 256 KByte; J = 64 STREAMS IN STEADY-STATE TRANSMISSION OVER AN R = 64 LINK. (a) AVERAGE FRAME LOSS PROBABILITY; (b) AVERAGE INFORMATION LOSS PROBABILITY
the frame-based TF algorithm, the frame loss probabilities for
the small buffers are significantly smaller than the information loss probabilities. This is because the TF algorithm strives to minimize the frame losses, and in doing so, tends to give preference to transmitting small frames and losing large frames.
Indeed, we observe from Table IV that for the TF algorithm,
the frame loss probability decreases while the information loss
probability slightly increases for the high variability streams
compared to the low variability streams. In the high variability
streams, there are relatively fewer mid-size frames, while there
are more small-sized and large-sized frames. As a consequence, the TF algorithm, which is purely focused on the number of transmitted frames, tends to transmit more small frames and lose
some more large frames. In contrast to the frame-based TF algo-
rithm, we observe that for the bit-based algorithms, there is rel-
atively little difference between the frame and the information
loss probability, with the information loss probabilities tending
to be generally slightly higher than the frame loss probabili-
ties. This is because with the bit-based prefetch algorithms, the
larger frames have generally the same chance to be scheduled as
smaller frames, however, larger frames require more transmis-
sion time and therefore tend to be dropped more often when the
playout deadline is close.
Overall, we observe that counting the actual number of trans-
mitted (or lost) bits (as done in the RTB, WRTB, RLB, and
WRLB policies) gives smaller loss probabilities than relying on
the long run average (as done in the NTB, WNTB, and NLB
policies). This is because the actual number of transmitted (or
lost) bits more closely reflects the current priority level of a stream. Importantly, we observe that taking the number of currently prefetched frames (as done in the WNTB, WRTB, and WRLB policies) into consideration results in significant reductions in loss probability compared to the corresponding policies without the weighing. The number of prefetched frames gives an immediate indication of how close or far, in terms of frame periods, a client is to starvation and thus helps significantly in
selecting the client that is most in need of a frame. Combining
the counting of the actual number of transmitted (or lost) bits
with the weighing by the number of current prefetched frames
(as done in the WRTB and WRLB policies) gives the smallest
loss probabilities. We also observe that there is generally little
difference between considering the transmitted bits or the lost
bits.
B. Comparison Between Discrete-Time and Continuous-Time
Prefetch Algorithms
In this section we compare three representative discrete-time
prefetch algorithms, namely deadline credit (DC) [21], join-the-
shortest-queue (JSQ) [28], and bin packing (BP) [27], with the
continuous-time prefetch algorithms proposed in this paper. We
employ start-up simulations for this comparison since the DC
scheme is formulated for the start-up scenario in [21]. In the
start-up simulations, each of the streams starts at time zero
with an empty prefetch buffer and uniformly randomly selects a
full-length video trace and independently uniformly randomly
a starting phase into the selected trace. The full-length trace is
then streamed once and the frame and information loss prob-
abilities for each stream are recorded. We run many independent replications of this start-up simulation until the 90% confidence intervals of the loss probabilities are less than 10% of
the corresponding means. In each replication, all streams start
with an empty prefetch buffer at time zero and select a random
full-length trace and starting phase.
For the DC scheme, we use a slot length of 1/2000th of the
frame period, i.e., slots. To convert the buffer occupa-
tion in bytes to the maximum deadline credit, we use the actual
sizes of the frames in the buffer. In the BP scheme, we use the
window size of 128 frames, which is a reasonable window size
for a prefetch buffer of [27].
TABLE V
AVERAGE LOSS PROBABILITIES FOR DISCRETE- AND CONTINUOUS-TIME PREFETCH ALGORITHMS FROM START-UP SIMULATIONS FOR DIFFERENT PREFETCH BUFFERS B = 32, 64, 128, AND 256 KByte; J = 64 STREAMS OVER AN R = 64 LINK. (a) AVERAGE FRAME LOSS PROBABILITY; (b) AVERAGE INFORMATION LOSS PROBABILITY
1) Impact of Buffer Size and Traffic Variability: In Table V
we report the average loss probabilities for the discrete-time
and continuous-time prefetch algorithms for different prefetch
buffer sizes for both the low and high variability streams. In
Fig. 7 we plot the frame and information loss probabilities for
the individual clients for a prefetch buffer of .
We first note that for this start-up scenario, the comparison
tendencies among the different continuous-time prefetch algo-
rithms are generally unchanged from the steady-state scenario
considered in Section VII-A.
Turning to the comparison of continuous- and discrete-time
prefetch algorithms, we observe that for both the low and
high variability streams, the discrete-time prefetch algorithms
give significantly larger frame and information loss probabilities than the continuous-time algorithms. Typically, the
continuous-time prefetch algorithms reduce the starvation
probabilities by a factor of four or more compared to the
discrete-time algorithms. This improvement is primarily due to
utilizing the full bandwidth by scheduling video frames across
frame periods with the continuous-time prefetch algorithms.
The frame loss probabilities achieved by BP for small prefetch
buffers are an exception to this general trend in that BP achieves
frame loss probabilities as low as the continuous-time algo-
rithms, but BP has information loss probabilities much larger
than the continuous-time algorithms. This behavior is caused by
BP's strategy to look for small frames by temporarily skipping large frames that do not fit into the remaining transmission capacity in a frame period and thus prefetching more future small frames. In contrast, the JSQ algorithm does not skip frames,
but rather scans the next frame to be transmitted for all ongoing streams to find frames that fit into the remaining transmission capacity in a frame period. As observed from Table V, this JSQ
strategy results in larger frame loss probabilities, but smaller
information loss probabilities compared to BP.
Regarding fairness, we observe from Fig. 7 for both low and
high variability streams that the discrete-time prefetching al-
gorithms generally provide the individual clients with approx-
imately equal frame loss probabilities, but that the information
loss probabilities can vary significantly between clients. Indeed,
the discrete-time prefetch algorithms have primarily been de-
signed to be fair in terms of the frame loss probability, while the
information loss probability has been largely ignored in the de-
velopment of the discrete-time algorithms. In contrast, the con-
tinuous-time prefetch algorithms can provide fairness in terms
of the frame or information loss probability corresponding to the
adopted priority function. More specifically, the frame-based TF prefetch policy meets the very tight fairness criterion of Lemma 1, and consequently gives essentially identical frame
loss probabilities, as observed in Fig. 7. The individual infor-
mation loss probabilities achieved with the TF algorithm vary
only slightly. Also, the bit-based prefetch algorithms give good
fairness both in terms of frame and information loss probability.
Overall, the continuous-time prefetch algorithms avoid the pro-
nounced differences between the frame loss probability and the
information loss probability behaviors observed for BP by not
selectively prefetching frames with specific properties (such as
Fig. 7. Loss probabilities for individual clients for J = 64 streams over a link supporting R = 64 streams with B = 64 KByte buffer in each client. (a) Frame loss probability for low variability streams; (b) frame loss probability for high variability streams; (c) information loss probability for low variability streams; (d) information loss probability for high variability streams.
TABLE VI
AVERAGE LOSS PROBABILITIES FOR J = 64 LOW VARIABILITY STREAMS OVER LINK WITH DIFFERENT NORMALIZED CAPACITY R; START-UP SIMULATION WITH FIXED B = 64 KByte CLIENT BUFFERS
small number of bits). Instead, the continuous-time prefetch al-
gorithms transmit the frames of a given stream in order of their
playout deadlines.
2) Impact of Utilization Level and System Size: In this section we analyze the impact of the utilization level, i.e., the ratio of the number of supported streams to the normalized link capacity , and the system size, i.e., the number of multiplexed streams for a fixed utilization level, on frame and information loss. We vary the utilization level by fixing  and considering a range of normalized link capacities .
Considering first the overload scenarios with  in Table VI, we observe that the continuous-time prefetching algorithms achieve information loss probabilities closer to the theoretical minimum of  than the discrete-time algorithms. Consistent with our earlier observations, the WRTB and
WRLB algorithms achieve information loss probabilities closest
to the theoretical minimum. We also observe that even slight
TABLE VII
AVERAGE LOSS PROBABILITIES FOR STREAMING J = R LOW VARIABILITY STREAMS TO B = 64 KByte CLIENTS
TABLE VIII
AVERAGE LOSS PROBABILITIES FOR 32 LOW VARIABILITY STREAMS AND 32 HIGH VARIABILITY STREAMS OVER AN R = 64 LINK
overload conditions with result in fairly large in-
creases in the loss probabilities from the stability
limit scenario. The loss probabilities of the continuous-time al-
gorithms increase by factors of around three to ve, whereas the
discrete time algorithms, which have already larger loss prob-
abilities at the stability limit, experience relatively
smaller increases of the information loss probabilities by fac-
tors less than two. Further increasing the utilization by reducing
the link capacity to results roughly in a doubling of
the loss probabilities compared to the case. For the
extreme overload case of , we observe information
loss probabilities close to the theoretical minimum of 1/2 and
correspondingly large frame loss probabilities.
Turning to the reduction in the utilization level by increasing
from 64 to 64.5, we observe that all algorithms, except DC
and BP, achieve reductions in the loss probabilities by roughly a
factor of two. The BP algorithm achieves a reduction by a factor
of close to two for the information loss probability; but the frame
loss probability, which is already very small for , is only
very slightly further reduced. The DC algorithm can extract only
small reductions of the loss probabilities for the lower utiliza-
tion. Overall, we conclude that the proposed continuous-time
prefetch algorithms achieve information loss probabilities close
to the theoretical minimum for overload conditions, and provide
significant reductions in the loss probability for even slight reductions of the utilization below the stability limit.
From Table VII we observe that the system size has a rela-
tively minor impact on the performance of the prefetching al-
gorithms. Larger systems that multiplex more streams for the
same utilization level have somewhat increased oppor-
tunities for statistical multiplexing and therefore achieve very
slightly smaller loss probabilities.
3) Impact of Heterogeneous Streams: In this section we examine the impact of heterogeneous streams on the performance of the prefetching algorithms. We first consider concurrently
streaming 32 low variability streams and 32 high variability
streams over an link to clients with a
prefetch buffer using start-up simulations. We report the mean
frame and information loss probabilities experienced by the
group of 32 clients receiving low variability streams and
the group of 32 clients receiving high variability streams in
Table VIII.
We observe from the table that generally the prefetch algo-
rithms provide relatively good fairness to the two groups of
video clients with both groups experiencing roughly equivalent
frame and information loss probabilities. In a few instances, the
high variability clients experience slightly higher loss probabilities, reflecting that the higher variability traffic is more challenging to schedule. The exception to this relatively good fairness performance for both loss probability measures is the TF
algorithm, which gives both groups of clients equivalent frame
loss probabilities, but significantly larger information loss probabilities to the high variability clients. This behavior is due to the frame-based nature of the TF algorithm, which enforces the same frame loss probability for all clients, but does not consider the sizes of the video frames. The high variability streams contain relatively more large-sized frames, which are more difficult
Fig. 8. Loss probabilities for individual clients with clients 1 through 32 receiving low variability streams and clients 33 through 64 receiving high variability streams over a link supporting R = 64 streams with B = 64 KByte buffer in each client. (a) Frame loss probability. (b) Information loss probability.
TABLE IX
AVERAGE LOSS PROBABILITIES FOR 32 STREAMS WITH 64 kbps AVERAGE BIT RATE AND 16 STREAMS WITH 128 kbps AVERAGE BIT RATE OVER AN R = 64 LINK
to schedule, resulting in larger information loss for the high vari-
ability stream group.
For further insight into the loss probabilities experienced by
the individual clients we plot in Fig. 8 the frame and infor-
mation loss probabilities for the individual clients. We observe
that the algorithms that provide good fairness among the groups
also provide good fairness among the individual clients within
a given group. Especially the continuous-time prefetch algo-
rithms counting the actual numbers of bits and incorporating
weighing according to the number of prefetched frames (done
in WRTB and WRLB) achieve uniform information loss prob-
abilities across the individual clients.
In a second heterogeneous streaming scenario we consider 32 clients that receive video with an average bit rate of 64 kbps and 16 clients that receive video with an average bit rate of 128 kbps. Note that the total average bit rate is equivalent to 64 clients concurrently receiving 64 kbps streams. We set
as in the preceding start-up simulations. Table IX shows the
average frame and information loss probabilities for the two
groups of clients. We observe that the frame-based algorithms
provide similar frame loss probabilities to the two groups, with
the DC and TF algorithms providing particularly strict fairness.
On the other hand, the bit-based algorithms provide similar in-
formation loss probabilities to the two groups. The algorithms
incorporating the long run average rates show slight differences,
while the algorithms considering the actual number of trans-
mitted bits demonstrate very good fairness performance.
Lastly, comparing the loss probabilities for the heterogeneous scenarios in Tables VIII and IX with the corresponding loss probabilities for the homogeneous scenario, e.g., the low variability streams with  scenario in Table V, we
observe that the DC algorithm gives overall somewhat higher
loss probabilities for the heterogeneous scenarios. The other al-
gorithms give overall roughly equivalent loss probability levels
for the homogeneous and heterogeneous scenarios.
VIII. CONCLUSION
We have developed and evaluated continuous-time algo-
rithms for the collaborative prefetching of continuous media
with discrete playout deadlines, such as encoded video. The
continuous-time algorithms allow for the transmission of
video frames across frame period boundaries. In contrast,
previously studied collaborative prefetching schemes typically
scheduled video frames for transmission during a given frame
period and did not permit transmissions across the frame
period boundaries. This led to inefficiencies in the form of unused transmission capacity at the end of a frame period, when no frame could fit into the remaining capacity. The continuous-time prefetching algorithms developed in this paper overcome these inefficiencies and typically reduce the starvation probabilities by a factor of four or more. We have formally analyzed the fairness and efficiency characteristics of the proposed continuous-time prefetching algorithms.
There are many exciting avenues for future work on con-
tinuous-time collaborative prefetching. One direction is to
consider the streaming from several distributed servers over a
common bottleneck to clients. In the present study the video
was streamed from a single server and all the scheduling deci-
sions were made at that single server. In a multi-server scenario,
which could arise when streaming in parallel from several peers
in a peer-to-peer network, the different servers would need
to coordinate their transmissions so as to achieve overall low
starvation probabilities and good fairness. Another direction
is to explore the continuous-time prefetching over wireless
links. In the present study, the bottleneck link was reliable, i.e.,
a video frame scheduled on the link was guaranteed to arrive
at the client. In contrast, wireless links are unreliable, i.e., a
transmitted frame may be dropped on the wireless link, and
possibly require retransmission.
REFERENCES
[1] F. H. Fitzek and M. Reisslein, "MPEG-4 and H.263 video traces for network performance evaluation," IEEE Network, vol. 15, no. 6, pp. 40–54, Nov./Dec. 2001.
[2] C. Bewick, R. Pereira, and M. Merabti, "Network constrained smoothing: Enhanced multiplexing of MPEG-4 video," in Proceedings of IEEE International Symposium on Computers and Communications, Taormina, Italy, July 2002, pp. 114–119.
[3] H.-C. Chao, C. L. Hung, and T. G. Tsuei, "ECVBA traffic-smoothing scheme for VBR media streams," International Journal of Network Management, vol. 12, pp. 179–185, 2002.
[4] W.-C. Feng and J. Rexford, "Performance evaluation of smoothing algorithms for transmitting prerecorded variable-bit-rate video," IEEE Trans. Multimedia, vol. 1, no. 3, pp. 302–313, Sept. 1999.
[5] T. Gan, K.-K. Ma, and L. Zhang, "Dual-plan bandwidth smoothing for layer-encoded video," IEEE Trans. Multimedia, vol. 7, no. 2, pp. 379–392, Apr. 2005.
[6] M. Grossglauser, S. Keshav, and D. Tse, "RCBR: A simple and efficient service for multiple time-scale traffic," IEEE/ACM Trans. Networking, vol. 5, no. 6, pp. 741–755, Dec. 1997.
[7] Z. Gu and K. G. Shin, "Algorithms for effective variable bit rate traffic smoothing," in Proceedings of IEEE International Performance, Computing, and Communications Conference, Phoenix, AZ, Apr. 2003, pp. 387–394.
[8] C.-D. Iskander and R. T. Mathiopoulos, "Online smoothing of VBR H.263 video for the CDMA2000 and IS-95B uplinks," IEEE Trans. Multimedia, vol. 6, no. 4, pp. 647–658, Aug. 2004.
[9] M. Krunz, W. Zhao, and I. Matta, "Scheduling and bandwidth allocation for distribution of archived video in VoD systems," Journal of Telecommunication Systems, Special Issue on Multimedia, vol. 9, no. 3/4, pp. 335–355, Sept. 1998.
[10] M. Krunz, "Bandwidth allocation strategies for transporting variable-bit-rate video traffic," IEEE Communications Magazine, vol. 37, no. 1, pp. 40–46, Jan. 1999.
[11] H. Lai, J. Y. Lee, and L.-K. Chen, "A monotonic-decreasing rate scheduler for variable-bit-rate video streaming," IEEE Trans. Circuits and Systems for Video Technology, vol. 15, no. 2, pp. 221–231, Feb. 2005.
[12] S. S. Lam, S. Chow, and D. K. Y. Yau, "A lossless smoothing algorithm for compressed video," IEEE/ACM Trans. Networking, vol. 4, no. 5, pp. 697–708, Oct. 1996.
[13] R. Sabat and C. Williamson, "Cluster-based smoothing for MPEG-based video-on-demand systems," in Proceedings of IEEE International Conference on Performance, Computing, and Communications, Phoenix, AZ, Apr. 2001, pp. 339–346.
[14] J. Salehi, Z.-L. Zhang, J. Kurose, and D. Towsley, "Supporting stored video: Reducing rate variability and end-to-end resource requirements through optimal smoothing," IEEE/ACM Trans. Networking, vol. 6, no. 4, pp. 397–410, Aug. 1998.
[15] A. Solleti and K. J. Christensen, "Efficient transmission of stored video for improved management of network bandwidth," International Journal of Network Management, vol. 10, pp. 277–288, 2000.
[16] B. Vandalore, W.-C. Feng, R. Jain, and S. Fahmy, "A survey of application layer techniques for adaptive streaming of multimedia," Real-Time Imaging Journal, vol. 7, no. 3, pp. 221–235, 2001.
[17] D. Ye, J. C. Barker, Z. Xiong, and W. Zhu, "Wavelet-based VBR video traffic smoothing," IEEE Trans. Multimedia, vol. 6, no. 4, pp. 611–623, Aug. 2004.
[18] D. Ye, Z. Xiong, H.-R. Shao, Q. Wu, and W. Zhu, "Wavelet-based smoothing and multiplexing of VBR video traffic," in Proceedings of IEEE Globecom, San Antonio, TX, Nov. 2001, pp. 2060–2064.
[19] Z.-L. Zhang, J. Kurose, J. Salehi, and D. Towsley, "Smoothing, statistical multiplexing and call admission control for stored video," IEEE Journal on Selected Areas in Communications, vol. 13, no. 6, pp. 1148–1166, Aug. 1997.
[20] M. B. Adams and L. D. Williamson, "Optimum Bandwidth Utilization in a Shared Cable System Data Channel," United States Patent Number 6,124,878, filed December 1996, granted September 2000.
[21] Z. Antoniou and I. Stavrakakis, "An efficient deadline-credit-based transport scheme for prerecorded semisoft continuous media applications," IEEE/ACM Trans. Networking, vol. 10, no. 5, pp. 630–643, Oct. 2002.
[22] S. Bakiras and V. O. Li, "Maximizing the number of users in an interactive video-on-demand system," IEEE Trans. Broadcasting, vol. 48, no. 4, pp. 281–292, Dec. 2002.
[23] J. C. H. Yuen, E. Chan, and K.-Y. Lam, "Real time video frames allocation in mobile networks using cooperative pre-fetching," Multimedia Tools and Applications, vol. 32, no. 3, pp. 329–352, Mar. 2007.
[24] Y.-W. Leung and T. K. C. Chan, "Design of an interactive video-on-demand system," IEEE Trans. Multimedia, vol. 5, no. 1, pp. 130–140, Mar. 2003.
[25] F. Li and I. Nikolaidis, "Trace-adaptive fragmentation for periodic broadcast of VBR video," in Proceedings of 9th International Workshop on Network and Operating Systems Support for Digital Audio and Video (NOSSDAV), Basking Ridge, NJ, June 1999, pp. 253–264.
[26] C.-S. Lin, M.-Y. Wu, and W. Shu, "Transmitting variable-bit-rate videos on clustered VOD systems," in Proceedings of IEEE International Conference on Multimedia and Expo (ICME), New York, NY, July 2000, pp. 1461–1464.
[27] S. Oh, Y. Huh, B. Kulapala, G. Konjevod, A. W. Richa, and M. Reisslein, "A modular algorithm-theoretic framework for the fair and efficient collaborative prefetching of continuous media," IEEE Trans. Broadcasting, vol. 51, no. 2, pp. 200–215, June 2005.
[28] M. Reisslein and K. W. Ross, "High-performance prefetching protocols for VBR prerecorded video," IEEE Network, vol. 12, no. 6, pp. 46–55, Nov./Dec. 1998.
[29] H. Zhu and G. Cao, "A power-aware and QoS-aware service model on wireless networks," in Proceedings of IEEE Infocom, Hong Kong, Mar. 2004, pp. 1393–1403.
[30] M. Krunz and S. K. Tripathy, "Exploiting the temporal structure of MPEG video for the reduction of bandwidth requirements," in Proceedings of IEEE Infocom, Kobe, Japan, Apr. 1997, pp. 67–74.
[31] S. Baruah, G. Koren, B. Mishra, A. Raghunathan, L. Rosier, and D. Shasha, "On-line scheduling in the presence of overload," in Proceedings of the 32nd Annual Symposium on Foundations of Computer Science, 1991, pp. 100–110.
[32] T.-W. Lam and K.-K. To, "Performance guarantee for online deadline scheduling in the presence of overload," in Proceedings of the ACM-SIAM Symposium on Discrete Algorithms (SODA), 2001, pp. 755–764.
[33] L. Sha, J. P. Lehoczky, and R. Rajkumar, "Solutions for some practical problems in prioritized preemptive scheduling," in Proceedings of the Real-Time Systems Symposium, 1986, pp. 181–191.
[34] M. L. Dertouzos, "Control robotics: The procedural control of physical processes," in Proceedings of the IFIP Congress, 1974, pp. 807–813.
Soohyun Oh is a lecturer in the Department of
Computer Engineering at Hansung University,
Seoul, Korea. She received M.S. and Ph.D. degrees
in Computer Science from Arizona State University,
in 2001 and 2005, respectively. From July 2006
through February 2007 she was a researcher in the
Mobile Computing Laboratory at Sungkyunkwan
University. Her research interests are continuous
media streaming, QoS, and packet routing.
Beshan Kulapala received the B.S. degree in elec-
trical engineering from the University of Kentucky,
Lexington, in 2001, and received the M.S. degree in
electrical engineering from Arizona State University,
Tempe, in 2003. Since 2003 he has been a Ph.D. stu-
dent in the Department of Electrical Engineering at
Arizona State University. His research interests are in
the area of video transmission over wired and wire-
less networks. He is a student member of the IEEE.
Andréa W. Richa is an Associate Professor at the
Department of Computer Science and Engineering
at Arizona State University, Tempe, since August
2004. She joined this department as an Assistant
Professor in August 1998. Prof. Richa received
her M.S. and Ph.D. degrees from the School of
Computer Science at Carnegie Mellon University,
in 1995 and 1998, respectively. She also earned an
M.S. degree in Computer Systems from the Graduate
School in Engineering (COPPE), and a B.S. degree
in Computer Science, both at the Federal University
of Rio de Janeiro, Brazil, in 1992 and 1990, respectively. Prof. Richa's main
area of research is network algorithms. Some of the topics Dr. Richa has
worked on include packet scheduling, distributed load balancing, packet
routing, mobile network clustering and routing protocols, and distributed data
tracking. Prof. Richa's data tracking (or name lookup) algorithm has been
widely recognized as the first benchmark algorithm for the development of
distributed databases in peer-to-peer networking, having been referenced
by over 130 academic journal or conference publications to date, and being
implemented as part of two of the current leading projects in peer-to-peer
networking. Dr. Richa was the recipient of an NSF CAREER Award in 1999.
For a selected list of her publications, CV, and current research projects, please
visit http://www.public.asu.edu/aricha.
Martin Reisslein is an Associate Professor in the De-
partment of Electrical Engineering at Arizona State
University (ASU), Tempe. He received the Dipl.-Ing.
(FH) degree from the Fachhochschule Dieburg, Ger-
many, in 1994, and the M.S.E. degree from the University
of Pennsylvania, Philadelphia, in 1996, both
in electrical engineering. He received his Ph.D. in
systems engineering from the University of Pennsyl-
vania in 1998. During the academic year 19941995
he visited the University of Pennsylvania as a Ful-
bright scholar. From July 1998 through October 2000
he was a scientist with the German National Research Center for Information
Technology (GMD FOKUS), Berlin and lecturer at the Technical University
Berlin. From October 2000 through August 2005 he was an Assistant Professor
at ASU. He served as editor-in-chief of the IEEE Communications Surveys and
Tutorials from January 2003 through February 2007 and has served on the
Technical Program Committees of IEEE Infocom, IEEE Globecom, and the IEEE
International Symposium on Computer and Communications. He has organized
sessions at the IEEE Computer Communications Workshop (CCW). He main-
tains an extensive library of video traces for network performance evaluation,
including frame size traces of MPEG-4 and H.264 encoded video, at
http://trace.eas.asu.edu. His research interests are in the areas of Internet Quality of
Service, video traffic characterization, wireless networking, optical networking,
and engineering education. For a selected list of his publications, CV, and cur-
rent research projects, please visit http://www.fulton.asu.edu/~mre.