On-Line Adaptive Compression in Delay
Sensitive Wireless Sensor Networks
Xi Deng and Yuanyuan Yang
Department of Electrical and Computer Engineering
New York State Center of Excellence in Wireless and Information Technology
Stony Brook University
Stony Brook, NY 11794, USA
ABSTRACT
Compression, as a popular technique to reduce data
size by exploiting data redundancy, can be used in delay
sensitive wireless sensor networks (WSNs) to reduce end-
to-end packet delay as it can reduce packet transmission
time and contention on the wireless channel. However,
the limited computing resources at sensor nodes make the
processing time of compression a nontrivial factor in the
total delay a packet experiences and must be carefully
examined when adopting compression. In this paper, we
first study the effect of compression on data gathering
in WSNs under a practical compression algorithm. We
observe that compression does not always reduce
the packet delay in a WSN as commonly perceived;
rather, its effect is jointly determined by the network
configuration and the hardware configuration. Based on this
observation, we design an adaptive algorithm to make
on-line decisions such that compression is only performed
when it can benefit the overall performance. We implement
the algorithm in a completely distributed manner that
utilizes only local information of individual sensor nodes.
Our extensive experimental results show that the algorithm
demonstrates good adaptiveness to network dynamics and
maximizes compression benefit.
Keywords: Wireless sensor networks, data compression,
data gathering, packet delay, delay-sensitive, adaptive algo-
rithm.
I. INTRODUCTION
Delay sensitive wireless sensor networks (WSNs) require
real-time delivery of sensing data to the data sink.
Such networks are widely adopted in various real-time
applications including traffic monitoring, hazard detection
and battlefield surveillance, where decisions should be
made promptly once the emergent events occur. Compared
to general WSNs where energy efficiency is the primary
design concern, delay sensitive WSNs demand more on
minimizing the communication delay during data delivery.
Recent study in this area has been mainly focused on the
algorithm design of efficient routing strategies and data
Research supported by NSF grant number ECCS-0801438 and ARO
grant number W911NF-09-1-0154.
aggregation to reduce such delay and provide real-time de-
livery guarantees [12], [13], [2]. Compression was initially
adopted as an effective approach to saving energy in WSNs.
In fact, it can also be used to reduce the communication
delay in delay sensitive WSNs, which is the main focus of
this paper.
In WSNs, compression reduces the data amount by exploiting
the redundancy residing in sensing data. The reduction
can be measured by the compression ratio, defined as the
original data size divided by the compressed data size.
A higher compression ratio indicates a larger reduction
in the data amount and results in shorter communication
delay. Thus, much work in the literature has endeavored
to achieve better compression ratios for sensing data.
However, from the implementation perspective, most
compression algorithms are complex and time-consuming
procedures when run on resource-constrained sensor nodes.
As the processing time of compression cannot simply be
neglected on such nodes, the effect of compression on the
total delay during data delivery becomes a tradeoff between
the reduced communication delay and the increased
processing time. As a result, compression may increase
rather than decrease the total delay when the processing
time is relatively long. In this paper, we will
first analyze this effect in the typical data gathering scheme
in WSNs where each sensor collects data continuously and
delivers all the packets to a data sink. Then we will design
an on-line adaptive algorithm that performs compression
only when compression can actually reduce the total delay,
so that the network achieves the shortest total delay
under all conditions.
To analyze the effect of compression, first we have to
obtain the processing time of compression, which depends
on several factors, including the compression algorithm,
processor architecture, CPU frequency and the compression
data. In this paper, we adopt a lossless compression algo-
rithm LZW [17] that is suitable for sensor nodes. We imple-
ment the algorithm on a TI MSP430F5418 microcontroller
[1], which is used in the current generation of sensor nodes.
Our experiments on typical sensing data demonstrate that
the compression time is comparable to the transmission
time of packets. To support the study in large scale WSNs,
we utilize a software performance estimation approach to
provide runtime measurements of the algorithm execution
time in the NS-2 simulator. Our simulation results reveal
that compression may lead to several times longer overall
delay under light traffic loads, while it can significantly
reduce the delay under heavy traffic loads and increase the
maximum throughput.
978-1-4244-7489-9/10/$26.00 ©2010 IEEE
As the effect of compression varies heavily with different
network traffic and hardware configurations, we design an
on-line adaptive algorithm that dynamically makes com-
pression decisions to accommodate the changing state of
WSNs. In the algorithm, we adopt a queueing model to
estimate the queueing behavior of sensors with the assis-
tance of only local information of each sensor node. By
using the queueing model, the algorithm predicts the com-
pression effect on the average packet delay and performs
compression only when it can reduce the packet delay. We
evaluate our algorithm with extensive simulations in NS-2,
which show that the adaptive algorithm can make decisions
properly and yield near-optimal performance under various
network configurations.
The rest of the paper is organized as follows. Section
II introduces the LZW compression algorithm and the
approach to measuring its execution time on sensor nodes.
Section III characterizes the compression effect on packet
delays under various network configurations. In Section IV,
we describe the on-line adaptive algorithm in detail, includ-
ing the analysis of the queueing model and the algorithm
implementation in sensor nodes. Section V examines the
performance of compression under the proposed adaptive
algorithm. Finally, Section VI concludes the paper.
II. COMPRESSION PROCESSING IN SENSOR NODES
Due to the limited energy budget in sensor networks,
compression is considered an effective method to reduce
energy consumption in communications and has been
extensively studied. The initial study was focused on
the exploitation of the spatial-temporal correlation in the
sensing data. One approach is the distributed source coding,
which was first introduced in [18] and later studied in [14],
[7]. In this approach, signals received in sensor nodes are
independently quantized to a certain number of bits and
sent to the single receiver, which can estimate the original
signals with high confidence utilizing the correlation in
signals received from different sensors. Another approach
is to combine compression with the routing strategy so that
data are aggregated and compressed along their way to the
sink [8], [10], [3]. This type of routing strategy is mainly
designed to maximize the compression ratio; however, it
may not be optimal for minimizing the packet delay.
Therefore, these compression methods may not be suitable
for real-time applications.
The above compression methods are lossy in the sense
that the compressed data cannot be fully recovered by de-
compression. In contrast, lossless compression algorithms
can reconstruct the original data from the compressed data,
which makes them more practical and applicable to general sensing
TABLE 1
LZW COMPRESSION ALGORITHM
STRING = get first character
while there are still input characters
    C = get next character
    look up STRING+C in the dictionary
    if STRING+C is in the dictionary
        STRING = STRING+C
    else
        output the code for STRING
        add STRING+C to the dictionary
        STRING = C
    end if
end while
output the code for STRING
data in sensor networks [5], [4]. In this paper, we consider
the LZW algorithm [17], which is a lossless algorithm and
is widely used in energy constrained devices. Next, we
briefly introduce the LZW algorithm and the approach to
obtaining its execution time at the software level instead of
running the algorithm on hardware.
A. LZW Algorithm
Lempel-Ziv-Welch (LZW) compression is a dictionary
based algorithm that replaces strings of characters with
single codes in the dictionary. The first 256 codes in the
dictionary by default correspond to the standard character
set. The algorithm sequentially reads in characters and
finds the longest string s that can be recognized by the
dictionary. Then it encodes s using the corresponding
codeword in the dictionary and adds the string s+c to the
dictionary, where c is the character following string s. This
process continues until all characters are encoded. A more
detailed description of the LZW algorithm can be found in
[17]. Compared to other compression algorithms, LZW is
relatively simple but yields a good compression ratio for
sensor data as shown in [4].
We will only focus on the compression process in this
paper, as we assume the decompression process is performed
at the sink node, which is more powerful and can thus
perform the decompression in a relatively short time. To
adapt the LZW compression to sensor nodes, we set the
dictionary size to 512, which has been shown to yield good
compression ratios in real-world deployments in [4].
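As a concrete illustration, the encoding loop of Table 1 can be sketched in C as follows. This is our own minimal sketch with a 512-entry dictionary and a linear-search lookup, not the implementation evaluated on the MSP430 (which, as the `hash_code` call in Table 2 suggests, uses a hash table for the dictionary).

```c
#define DICT_SIZE 512   /* dictionary size used in this paper */

/* Entries 256..DICT_SIZE-1 are (prefix code, appended byte) pairs;
 * codes 0..255 implicitly denote the single-byte strings. */
static int dict_prefix[DICT_SIZE];
static unsigned char dict_ch[DICT_SIZE];
static int dict_next;

/* Return the code for string(prefix)+c, or -1 if it is not in the
 * dictionary. Linear search keeps the sketch short. */
static int dict_find(int prefix, unsigned char c)
{
    if (prefix == -1)
        return c;                        /* single byte: its own code */
    for (int i = 256; i < dict_next; i++)
        if (dict_prefix[i] == prefix && dict_ch[i] == c)
            return i;
    return -1;
}

/* Encode in[0..n) into output codes; returns the number of codes. */
int lzw_encode(const unsigned char *in, int n, int *out)
{
    int nout = 0, string = -1;           /* STRING = empty */
    dict_next = 256;
    for (int i = 0; i < n; i++) {
        int code = dict_find(string, in[i]);
        if (code != -1) {
            string = code;               /* STRING = STRING + C */
        } else {
            out[nout++] = string;        /* output the code for STRING */
            if (dict_next < DICT_SIZE) { /* add STRING + C */
                dict_prefix[dict_next] = string;
                dict_ch[dict_next] = in[i];
                dict_next++;
            }
            string = in[i];              /* STRING = C */
        }
    }
    if (string != -1)
        out[nout++] = string;            /* output the code for STRING */
    return nout;
}
```

For example, the 8-byte input "ABABABAB" encodes to the 5 codes 65, 66, 256, 258, 66; with 9-bit codes for a 512-entry dictionary, that is 45 output bits for 64 input bits.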
To achieve a good compression ratio (defined earlier as
the original data size divided by the compressed data
size), the input should be long enough to provide sufficient
redundancy. Thus, the LZW algorithm is more suitable
for WSNs that collect heavier-load data, such as images
and audio clips. Even in compression algorithms
specifically designed for these types of data, the processing
can be complex and time-consuming. Hence, our evaluation
of the LZW algorithm can provide some guidance for
adopting these algorithms. In addition,
in large scale WSNs, distant nodes require multiple hops
of transmission to reach the sink, and the nodes closer
to the sink can suffer from unaffordably heavy traffic. In
this case, aggregation is often used to reduce the traffic,
and such aggregated packets, which contain several
lighter-load packets, are also suitable for compression.
B. Measurement of Compression Delay in Sensor Nodes
We examine the compression process on a TI
MSP430F5418 microcontroller, which is used in the current
generation of sensor nodes. It is a 16-Bit Ultra-Low-Power
MCU with 128KB Flash and 16KB RAM. The CPU has a
peak working frequency of 18MHz, a very high frequency
among the current generation of sensor nodes.
To facilitate the evaluation of the compression effect in
large scale WSNs, we adopt a software estimation approach
to simulating the compression processing time. Since
instructions are sequentially executed in the CPU of this
microcontroller, the processing time can be calculated as
the total number of instruction cycles divided by the CPU
frequency. Thus obtaining the precise cycle counts becomes
the main task, which we describe below.
The source code of the LZW algorithm written in C
language is first compiled to assembly code using the
instruction set of the MSP430X CPU series. A mapping is
performed between the source code and the corresponding
assembly code, with an example shown in Table 2. As
each instruction in assembly code has a fixed number of
execution cycles, the number of cycles of each C statement
can be counted by summing up the numbers of cycles of
the corresponding assembly instructions. The numbers for
all C statements are then recorded in the source code by
code instrumentation so that the execution cycles can be
obtained at run time. For efficiency, the instrumentation
code only appears at the end of each basic block, in
which statements are sequentially executed
without conditional branches. This way, the total count
of cycles can be obtained at the completion of the LZW
algorithm, then the processing time is obtained by dividing
the total execution cycles by the working frequency.
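The instrumentation idea can be sketched as follows; the cycle counts and the instrumented function are hypothetical stand-ins, since the real counts come from the compiled MSP430X assembly.

```c
static unsigned long total_cycles;

/* Inserted by code instrumentation at the end of each basic block;
 * n is the pre-computed cycle count of that block, summed from the
 * corresponding MSP430X assembly instructions. */
#define BB_CYCLES(n) ((void)(total_cycles += (n)))

/* A hypothetical instrumented function with two basic blocks. */
int abs_val(int x)
{
    if (x < 0) {
        BB_CYCLES(7);              /* made-up count for this block */
        return -x;
    }
    BB_CYCLES(5);                  /* made-up count for this block */
    return x;
}

/* Processing time = total instruction cycles / CPU frequency. */
double processing_time_s(double freq_hz)
{
    return (double)total_cycles / freq_hz;
}
```

At the end of a run, `processing_time_s(18e6)` converts the accumulated cycle count to seconds at the 18 MHz peak frequency.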
III. EXPERIMENTAL STUDY OF COMPRESSION EFFECT
ON PACKET DELAY
In this section, we study the compression effect on packet
delay for data gathering in WSNs. The network scenario
considered is all-to-one data gathering where all sensors
continuously generate packets and deliver them to a single
sink. The performance metric used is the end-to-end packet
delay, which is the interval from the time a packet is
generated at the source to the time the packet is delivered to
the sink. To evaluate the compression effect, we compare
the end-to-end packet delay under compression with that
without compression. In the rest of the paper, we refer to
these two schemes as the compression scheme and the
no-compression scheme, respectively.
A. Experimental Setup
The experiments are conducted in the NS-2 simulator.
To simplify the evaluation, we examine the performance
TABLE 2
A MAPPING EXAMPLE FOR A FUNCTION IN THE LZW ALGORITHM. C
STATEMENTS IN SOURCE CODE ARE HIGHLIGHTED. THE STATEMENT
BLOCK AFTER EACH C STATEMENT IS THE CORRESPONDING
ASSEMBLY CODE OF THAT C STATEMENT.
void put(long key, int element){
put:
006190 153B pushm.w #4,R11
006192 4C0A mov.w R12,R10
006194 4D0B mov.w R13,R11
006196 4E08 mov.w R14,R8
int b = hash_code(key);
006198 13B0 5FDC calla #hash_code
if(table[b].key == NOT_USED){
00619C 5C0C rla.w R12
00619E 4C0F mov.w R12,R15
0061A0 5C0C rla.w R12
0061A2 5F0C add.w R15,R12
0061A4 93BC 1C00 cmp.w #0xFFFF,0x1C00(R12)
0061A8 2009 jne 0x61BC
0061AA 93BC 1C02 cmp.w #0xFFFF,0x1C02(R12)
0061AE 2006 jne 0x61BC
table[b].key = key;
0061B0 4A8C 1C00 mov.w R10,0x1C00(R12)
0061B4 4B8C 1C02 mov.w R11,0x1C02(R12)
table[b].element = element;
0061B8 488C 1C04 mov.w R8,0x1C04(R12)
return;
0061BC 1738 popm.w #4,R11
0061BE 0110 reta
on a 2D grid wireless network. Two networks, a 5×5
grid and a 7×7 grid with the sink at the center of the
grid, are considered. In the simulation, the transmission
range is set to 16 m and the distance between neighboring
nodes is 10 m and 15 m, respectively, to create different
network topologies. The packet generation on each sensor
node follows an i.i.d. Poisson process, and we assume an
identical packet length for all the packets generated in a
single experiment. Two different packet lengths (256 B and
512 B) are used to create different compression ratios and
processing delays.
At the network layer, we adopt a multi-path routing
strategy [15]. Specifically, each sensor is assigned a level
number, which indicates the minimum number of hops
required to deliver a packet from this sensor to the sink.
Such information can be obtained at the initial setup in the
sensor deployment. A sensor with level number i is called
a level-i node. A level-i node only forwards packets to
its level-(i−1) neighbors. Such a routing strategy is easy
to implement, though it may not necessarily yield the best
real-time performance. However, since our compression
strategy does not consider inter-packet compression, the
choice of the routing strategy will not substantially affect
the performance evaluation, which targets the compression
algorithm.
We adopt the commonly used 802.11 protocol as the
MAC layer protocol, and the wireless bandwidth is set to
1 Mb/s. The data set is automatically generated by the tool
described in [6], which provides a good approximation of
real sensing data in the evaluation of several representative
network applications. We use such synthetic data
so that our simulation can be performed sufficiently long
to capture the steady state behavior without exhausting the
simulation data.
Compression is performed on each packet when it is
generated at the source node. Since each sensor is equipped
with a sequential processor, multiple packets are served in
first-come-first-served order and a sufficiently large
buffer is assumed so that no packets are dropped at the
compression stage. The compression process is simulated
according to the estimation approach described in Section
II. The CPU frequency is 18 MHz. With these settings, we
obtain an average compression ratio of 1.25 and 1.6
when the packet length is 256 B and 512 B, respectively.
The average processing delay is 0.016 s for 256 B packets
and 0.045 s for 512 B packets. Note that we use the peak
CPU frequency here, so the simulated processing delay
is a lower bound of the actual delay. As will be seen later,
even this lower bound represents a very significant portion
of the total packet delay and cannot simply be ignored.
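Plugging these measured numbers into a simple first-order model shows why compression can hurt under light load: per hop, compressing a 512 B packet saves only (512 − 512/1.6)·8/10⁶ ≈ 1.5 ms of transmission time, far less than the 0.045 s spent compressing. The sketch below is our own simplification, ignoring contention and queueing, and makes the comparison explicit.

```c
/* Lightly loaded path: delay is roughly per-hop transmission time. */
double delay_no_comp(double size_bytes, double bw_bps, int hops)
{
    return hops * (size_bytes * 8.0 / bw_bps);
}

/* With compression: one compression at the source, then smaller
 * packets on every hop. */
double delay_with_comp(double size_bytes, double ratio, double t_comp_s,
                       double bw_bps, int hops)
{
    return t_comp_s + hops * ((size_bytes / ratio) * 8.0 / bw_bps);
}
```

With the 512 B settings (ratio 1.6, compression time 0.045 s, 1 Mb/s), compression loses on a 1-hop path (about 47.6 ms versus 4.1 ms) and would break even only beyond roughly 29 hops in this idealized model, consistent with the negative effect observed at low rates.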
B. Experimental Results
B.1 End-to-End Packet Delay
Based on the routing strategy, a packet generated at a
level-i node requires i-hop transmission to reach the sink,
resulting in different packet delays for nodes at different
levels. In this subsection, we examine the average packet
delay for all the nodes at the same level. Fig. 1 shows the
average delays of different levels in the 5×5 network with
the neighboring distance set to 15 m and the packet length
set to 512 B.
[Figure omitted: curves of end-to-end delay (s) versus packet
generation rate (/s) for levels 1-4, with and without compression.]
Fig. 1. Average packet delays for packets generated at different levels.
The primary observation drawn from the figure is that
compression has a two-sided effect on the real-time per-
formance depending on the packet generation rate. When
the rate is low, compression clearly increases the average
delay at each level. For example, when the rate is 2, the
average delay of level 1 increases about 2.7 times, from
13.6 ms to 51.5 ms, when compression is adopted. Such an
increase is also observed for the other levels, the least of which
is 75% for level 4. Under such a light traffic load, the delay is
almost the packet transmission time due to few contentions
for the wireless channel. Since the packet transmission
time reduced by compression is much less than the increase
caused by the compression processing time, the overall
delay increases, indicating a negative effect of compression.
We also notice that such an increase is smaller for nodes
at higher levels, which can be explained by the fact that
nodes at higher levels require more hops of transmissions
to reach the sink while each transmission is shortened due
to the compression. Hence, the delay increase caused by
compression processing becomes a smaller portion in the
total packet delay.
On the other hand, when the packet generation rate gets
higher, the average delays in both cases increase and the
increase in the no-compression case grows much faster than
that in the compression case. When the rate is higher than
3.5, the compression effect becomes positive and yields
significant reductions on the average delays. This can be
explained by the queueing theory. If we consider each
node in the network as a queue, the packet generation rate
and the packet transmission time become the arrival rate
and the service time of the queueing system. When the
traffic is heavy, the transmission time grows rapidly due
to channel contentions. Therefore, the utilization of the
queueing server, the product of the arrival rate and service
time, also grows rapidly, which eventually causes great
increase in the average waiting time and the average packet
delay. Compression shortens the packet length and the
transmission time, thus effectively reducing the utilization
and the packet delay, which explains the much slower
growth of the packet delay with compression.
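The argument above can be made concrete with the standard M/M/1 approximation (our illustration here; the paper's full queueing model follows in Section IV): the mean time in system is s/(1 − ρ) with utilization ρ = λ·s, which blows up as ρ approaches 1 and which compression tames by shrinking the service time s.

```c
/* Utilization of a node modeled as a queue: arrival rate (pkts/s)
 * times mean service (transmission) time (s). */
double utilization(double lambda, double s)
{
    return lambda * s;
}

/* M/M/1 mean time in system; returns -1 when the queue is unstable. */
double mm1_time_in_system(double lambda, double s)
{
    double rho = lambda * s;
    return (rho < 1.0) ? s / (1.0 - rho) : -1.0;
}
```

For instance, at λ = 9 pkts/s and s = 0.1 s, ρ = 0.9 and the mean delay is 1.0 s; cutting the service time by a ratio of 1.6 to 0.0625 s drops ρ to about 0.56 and the delay to about 0.14 s.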
Another observation we can draw is that the delays of
different levels are quite similar. It implies that the main
effect of compression is on the transmissions from level
1 nodes to the sink. This is due to the fact that level 1
nodes undergo the heaviest traffic and hence the longest
transmission delay. In this case, compression can achieve
much more benefit for the transmissions of level 1 nodes
than those nodes at higher levels. Thus, based on this
observation, it is reasonable to use the average delay among
all the nodes in the network to approximate the delays of
nodes at different levels. We also conducted experiments
under other network configurations and the results show a
similar trend.
B.2 Maximum Packet Generation Rate
In this subsection, we examine the maximum packet gen-
eration rate allowed at each node. In Fig. 1, we observe that
the packet delay grows rapidly when the packet generation
rate is relatively high. For example, the average delay
without compression reaches 0.7 s when the generation rate
is 4.25, while the delay with compression is 0.6 s when the
generation rate is 5.25. This corresponds to the situation
when the utilization of the queueing server approaches 1.
To guarantee the success of transmissions, the utilization
should be kept below 1. Thus, the generation rate when
the utilization approaches 1 is the maximum generation rate
allowed in the network. Fig. 2 shows the maximum
generation rate under different network configurations. Clearly,
compression increases the maximum generation rate for all
configurations. In particular, although the maximum
generation rate in the no-compression case varies dramatically
under different configurations, the relative increase caused
by compression remains similar, about 20% to 25%. A
small difference is observed for different packet lengths;
for example, the increase is 5% lower for the packet length
of 256 B than for the packet length of 512 B in the same
network configuration. Since the average compression ratio
is smaller when the packet length is 256 B, this result
indicates that a higher compression ratio leads to a higher
maximum generation rate.
[Figure omitted: maximum packet generation rate (/s) for 512 B and
256 B packets, with and without compression, in the 5×5 and 7×7
grids with d = 15 m and d = 10 m.]
Fig. 2. Maximum generation rates under various configurations, where d
represents the neighboring distance.
B.3 Threshold Rate
Since compression may have either a positive or a negative
effect on the packet delay, it is interesting to find the
generation rate at which the packet delay remains unchanged
between the no-compression and compression cases.
We call this rate the threshold rate. Fig. 3 shows the
relationship between the threshold rates and the maximum
generation rates in the compression case under different
network configurations. When the packet generation rate
is in the range between the threshold rate and the maximum
generation rate, compression can improve the end-to-end
delay. We can see that the length of this range does not vary
much under different configurations, though the threshold
rate itself exhibits great variations.
[Figure omitted: threshold rate and maximum generation rate (/s),
marking the negative and positive compression regions, for 512 B and
256 B packets in the 5×5 and 7×7 grids with d = 15 m and d = 10 m.]
Fig. 3. Different compression effects under various configurations, where
d represents the neighboring distance.
B.4 Summary
The above experimental results demonstrate that the
delay caused by compression processing is clearly a non-
negligible factor in end-to-end packet delay for current
generation sensor nodes. Such delay can cause severe
performance degradation under light traffic load. On the
other hand, when the traffic load is heavier, compression
can effectively reduce packet delay and increase maximum
throughput. Thus, compression is preferred only when the
packet generation rate is higher than the threshold rate.
However, the threshold rate varies with network configu-
rations and traffic and thus cannot be obtained in advance.
Therefore, it is necessary to design an on-line adaptive
algorithm to determine when to perform compression on
incoming packets at each node.
IV. ON-LINE ADAPTIVE COMPRESSION ALGORITHM
In this section, we give an on-line adaptive compression
algorithm that can be easily implemented in sensor nodes to
assist the original LZW compression algorithm. The goal
is to accurately predict the difference in average end-to-end
delay with and without compression by analyzing the local
information at a sensor node, and to make the right decision
on whether to perform packet compression at the node. The
adaptive algorithm is distributively implemented on each
sensor node as Adaptive Compression Service (ACS) in an
individual layer created in the network stack to minimize
the modification of existing network layers. Next we first
introduce the architecture of ACS and then describe the
algorithm in detail.
A. Architecture of ACS
The architecture of ACS is described in Fig. 4. Located
between the MAC layer and its upper layer, ACS consists
of four functional units: a controller, an LZW compressor,
an information collector and a packet buffer. The controller
manages the traffic flow and makes compression decisions
on each incoming packet in this layer. The LZW compres-
sor is the functional unit that performs the actual packet
compression using the LZW algorithm. The information
collector is responsible for collecting local statistics about
the current network and hardware conditions.
The packet buffer is used to temporarily store the packets to
be compressed.
With ACS, the traffic between the MAC layer and the
upper layer now passes through the controller in ACS. All
outgoing packets coming down from the upper layer are re-
ceived by the controller, which maintains two states. In the
No-Compression state, all packets are directed to the MAC
layer without further processing; in the Compression state,
only compressed packets, which are received from other
nodes, will be directly sent down to the MAC layer, and
other packets are sent to the packet buffer for compression.
On the other hand, for incoming packets from the MAC
layer, only the arrival time is recorded by the collector,
and the packets themselves are sent to the network layer
without delay.
[Figure omitted: block diagram of ACS showing the controller, LZW
compressor, packet buffer, and information collector between the upper
layer and the MAC layer; P denotes packets and D denotes statistics
data.]
Fig. 4. Architecture of ACS, which resides in a newly created layer
between the MAC layer and its upper layer.
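The controller's two-state forwarding rule can be sketched as follows (the names and types here are ours, not the paper's code):

```c
/* Node state maintained by the ACS controller. */
typedef enum { NO_COMPRESSION, COMPRESSION } acs_state_t;

typedef struct {
    int already_compressed;   /* set if compressed by an upstream node */
} pkt_t;

/* Returns 1 if the outgoing packet should go to the packet buffer
 * for LZW compression, 0 if it goes straight down to the MAC layer. */
int route_to_compressor(acs_state_t state, const pkt_t *p)
{
    if (state == NO_COMPRESSION)
        return 0;                     /* pass everything through */
    return p->already_compressed ? 0 : 1;
}
```

Only uncompressed packets are buffered in the Compression state; packets already compressed by other nodes bypass the compressor in either state.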
Since compression is managed by the node state, the
function of the adaptive algorithm is to determine the node
state according to the network and hardware conditions.
In our adaptive algorithm, we utilize a queueing model to
estimate current conditions based on only local information
of sensor nodes. In the next subsection, we introduce the
queueing model.
B. Queueing Model for a WSN
The queueing model for a WSN includes both the network
model and the MAC model, which together define the
network topology, the traffic model and the MAC layer protocol.
B.1 Network Model: Topology and Traffic
We consider a wireless sensor network where N sensor
nodes are randomly distributed in a finite two-dimensional
region. For convenience of calculation, we consider the
region of a circular shape with radius R. As will be
shown in the experiments, changing to other shapes, e.g., a
square, will not substantially affect the performance of the
proposed algorithm. Every node has an equal transmission
range r. A data sink is located at the center of the circular
region and all sensor nodes send the collected data to the
sink via the aforementioned multi-path routing strategy.
If the node density is sufficiently high, we can assume
that each node can always find a neighbor whose distance
to the sink is shorter than its own by r. Thus, nodes between
the two circles with radii (i−1)·r and i·r can deliver packets
to the sink with i transmissions. According to the routing
strategy, these nodes are considered level-i nodes. Without
loss of generality, we assume r = 1 and there are a total of
R levels of nodes. Denote the number of nodes at level i as
n_i. Then the average number
[Figure omitted: open queueing network with R stages; each queue has
external arrival rate λ_g, and queues in stage i have service rate μ_i
and feed into stage i−1 toward the sink.]
Fig. 5. Queueing model for a wireless sensor network, where λ_g is the
external arrival rate and μ_i is the service rate of a node at level i.
of nodes at level i can be calculated by

E[n_i] = (π i^2 − π (i−1)^2) / (π R^2) · N = (2i−1) N / R^2
Such a network can then be represented by an open
queueing network, as shown in Fig. 5. The queueing
network is divided into R stages such that each queue in
stage i corresponds to a sensor node at level i. Every queue
has an external arrival rate λ_g, which corresponds to the
packet generation rate.
Denote λ^i_j as the arrival rate of node j at level i. When
i < R, by queueing theory, we have

λ^i_j = λ_g + Σ_{k=1}^{n_{i+1}} λ^{i+1}_k · p_{kj}

where p_{kj} is the transition probability from node k at level
i+1 to node j at level i.
As all λ^i_j's have the same expected value, we denote it
as λ_i. Summing λ^i_j over j and taking expectations on both
sides of the equation lead to

E[n_i] λ_i = E[n_i] λ_g + λ_{i+1} E[ Σ_{j=1}^{n_i} Σ_{k=1}^{n_{i+1}} p_{kj} ]

Since Σ_{j=1}^{n_i} Σ_{k=1}^{n_{i+1}} p_{kj} = n_{i+1}, the above
equation can be written as

λ_i = λ_g + (E[n_{i+1}] / E[n_i]) λ_{i+1} = λ_g + ((2i+1)/(2i−1)) λ_{i+1}   (1)
Thus, each node can estimate the arrival rates of nodes at
other levels based on its own arrival rate. To evaluate
the performance of the queueing system, we also need
another important parameter, the average packet service
time, which includes the possible packet compression time
and the MAC layer service time. While the compression
time can be obtained directly from the LZW algorithm,
the calculation of the MAC layer service time requires an
understanding of the MAC model.
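The level-by-level recursion of Eq. (1) is straightforward to evaluate in code; the sketch below (our own) unrolls it from the outermost level R, where nodes carry only their own generated traffic, so λ_R = λ_g.

```c
/* Estimate the per-node arrival rate at level i from Eq. (1):
 * lambda_i = lambda_g + ((2i+1)/(2i-1)) * lambda_{i+1},
 * with lambda_R = lambda_g at the outermost level. */
double level_arrival_rate(int i, int R, double lambda_g)
{
    double lam = lambda_g;              /* level R */
    for (int k = R - 1; k >= i; k--)
        lam = lambda_g + ((2.0 * k + 1.0) / (2.0 * k - 1.0)) * lam;
    return lam;
}
```

For R = 3 and λ_g = 1, this gives λ_3 = 1, λ_2 = 8/3 and λ_1 = 9; since E[n_1] = N/R² = N/9, the level-1 nodes together carry N·λ_g packets per second, i.e., all generated traffic, as expected.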
B.2 MAC Model
The MAC layer packet service time is measured from
the time when the packet enters the MAC layer to the time
when the packet is successfully transmitted or discarded
due to transmission failure. To analyze the packet service
time, we first briefly describe the Distributed Coordination
Function (DCF) of IEEE 802.11 [16] used as our MAC
layer protocol.
DCF employs a backoff mechanism to avoid potential
contentions for the wireless channel. To transmit a packet,
a node must conduct a backoff procedure by starting the
backoff timer with a count-down time interval, which is
randomly selected from [0, CW), where CW is the contention
window size. The timer is decremented by 1 in each
time slot when the channel is idle and is suspended upon
the sensing of an ongoing transmission. The suspension
will continue until the channel becomes idle again. When
the timer reaches zero, the node completes the backoff
procedure and starts transmission. The whole procedure is
completed if the transmission is successfully acknowledged
by the receiver. Otherwise, the transmission is consid-
ered failed, which invokes a retransmission by restarting
the backoff timer. In each retransmission, the contention
window size CW will be doubled until it reaches the upper
bound defined in DCF. Finally, the packet will be discarded
if the number of retransmissions reaches the predefined
limit.
In our MAC layer, we also adopt the RTS/CTS mecha-
nism to reduce transmission collisions. Thus, it requires at
least 4 transmissions to successfully transmit a data packet:
the transmissions of the RTS, CTS, data and ACK packets. Let Ttran denote the minimum packet transmission time, which can be calculated as the sum of all packet lengths divided by the network bandwidth:

Ttran = (L_RTS + L_CTS + L_data + L_ACK) / Bw,

where we assume each type of packet has a constant length. Let Tsus denote the average duration of the timer suspension in the backoff stage and Tcol denote the average time spent in transmission collisions. The suspension duration is actually the time spent waiting for another node to complete a packet transmission. Under the assumption of constant packet length, we can approximately take Tsus = Ttran.
On the other hand, with the RTS/CTS mechanism, a collision mainly occurs during the transmission of the RTS and continues until the CTS timeout. Hence Tcol ≈ (L_RTS + L_CTS) / Bw. The overall MAC layer packet service time can then be calculated as

Tmac = nsus·Tsus + ncol·Tcol + Ttran = (nsus + 1)·Ttran + ncol·Tcol,

where nsus represents the number of suspensions and ncol the number of transmission collisions. We exclude the backoff time and some interframe spaces from the above equation due to their relatively small values.
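The service-time estimate above can be sketched directly; the frame lengths in the test below are illustrative values in bits, not measurements from the paper:

```python
def mac_service_time(n_sus, n_col, bw, l_rts, l_cts, l_data, l_ack):
    """Tmac = (n_sus + 1) * Ttran + n_col * Tcol, with
    Ttran = (L_RTS + L_CTS + L_data + L_ACK) / Bw and
    Tcol  = (L_RTS + L_CTS) / Bw (lengths in bits, bw in bit/s)."""
    t_tran = (l_rts + l_cts + l_data + l_ack) / bw
    t_col = (l_rts + l_cts) / bw
    return (n_sus + 1) * t_tran + n_col * t_col
```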
B.3 Queueing Analysis for a Sensor Node
The queueing model of a sensor node is different in
different node states. In the No-Compression state, each
node is considered as a single queue. Its arrival process is a
combination of the local packet generation process and the
departure processes of its neighbors that send packets to the
node. As the simulation results in [9] showed, the departure
process of nodes adopting IEEE 802.11 MAC protocol can
be approximated as a Poisson process. Thus, we assume
the arrival process of each node is a Poisson process and
each node is an M/G/1 queue. In the Compression state, the
queueing model of each node can be modified as a system
of two queues as shown in Fig. 6 with the compression
queue and the transmission queue corresponding to ACS
and the MAC layer, respectively. We assume both queues
are M/G/1 queues since the arrival processes of both queues
are actually two splits of the arrival process of the sensor
node, and thus can be considered as Poisson processes. In
addition, since all outgoing traffic from the compression
queue goes to the transmission queue, the arrival rate of the
transmission queue equals the total arrival rate of the node.
Fig. 6. Queueing model of a sensor node with compression.
According to the well-known Pollaczek-Khinchin formula for the M/G/1 queue, the average number of packets N in an M/G/1 queue can be calculated as

N = ρ + (ρ²/(1−ρ)) · (1 + c_B²)/2,

where ρ is the utilization of the queue and c_B is the coefficient of variation of the service time. By Little's Law, given the arrival rate λ, the average packet waiting time, which is the packet delay in the node, can be derived as

T = N/λ = (2ρ − ρ²(1 − c_B²)) / (2λ(1 − ρ)).   (2)
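Equation (2) reduces to the familiar M/M/1 and M/D/1 delays as special cases, which gives a quick sanity check. A minimal sketch:

```python
def mg1_delay(lam, mean_service, cv_service):
    """Average packet delay in an M/G/1 queue, Eq. (2):
    T = (2*rho - rho**2*(1 - cv**2)) / (2*lam*(1 - rho)),
    where rho = lam * E[S] is the utilization."""
    rho = lam * mean_service
    assert 0 < rho < 1, "queue must be stable"
    return (2 * rho - rho**2 * (1 - cv_service**2)) / (2 * lam * (1 - rho))
```

With cv = 1 (exponential service) this recovers the M/M/1 sojourn time 1/(μ − λ); with cv = 0 (deterministic service) it recovers the M/D/1 result.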
C. Adaptive Compression Algorithm
We are now in the position to describe the adaptive com-
pression algorithm, which can be divided into two stages:
information collection and node state determination.
C.1 Information Collection
In ACS, the information collector is responsible for collecting three types of statistics:
1. The average compression ratio r_c and the average compression processing time T_p. The average used here is the exponential moving average (EMA) [19], which does not require storing previous observations and works as follows. During time period t, given the current observation y_t and the previous average S_{t−1}, the latest average S_t can be calculated as S_t = α·y_t + (1 − α)·S_{t−1}, where α is a constant in (0, 1). Compared to the arithmetic mean, the EMA is more sensitive to the latest observations and thus more suitable in WSNs, where the sensing data can change rapidly. Once a packet is compressed, the compression ratio and processing time are measured by the compressor, and the two averages r_c and T_p are updated.
2. Packet arrival rates, which include the external arrival rate λ_e and the packet generation rate λ_g. The calculations follow a time-slotted fashion. In each time slot, the total number of packets that arrive is counted, and the arrival rates are calculated by dividing the counts by the slot length. Noting that not all external packets are compressed in the adaptive algorithm, we also record the ratio p_c of compressed packets among the external packets.
3. MAC layer service time. Since the MAC layer service is modeled as an arbitrary process in the queueing model, its mean T_mac and the coefficient of variation c_mac are calculated and recorded for the subsequent analysis. The calculation is done along with the calculation of arrival rates in the same time slot to reduce the implementation complexity.
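The EMA update in item 1 is a one-line recurrence; a minimal sketch (the value α = 0.2 is illustrative, the paper only requires α in (0, 1)):

```python
def ema_update(prev_avg, observation, alpha=0.2):
    """One EMA step: S_t = alpha*y_t + (1 - alpha)*S_{t-1}."""
    if prev_avg is None:        # first observation seeds the average
        return observation
    return alpha * observation + (1 - alpha) * prev_avg
```

The collector would apply this update to r_c and T_p each time a packet finishes compression.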
C.2 State Determination
In ACS, the controller determines the appropriate node
state according to the collected statistics. Since most of the information is collected in a time-slotted manner,
the decision on the node state is made at the end of each
time slot. Depending on the current node state, the decision
process is slightly different. Thus we first consider the
case when the node is currently in the No-Compression
state. Then the task of the controller is to decide whether
performing compression on this node can reduce packet
delays. From the experimental results in Section III, we
know that compression introduces an extra delay due to
compression processing and reduces the packet delay from
the current node to the sink. We now discuss these two
delays separately.
The incoming packets of the compression queue are the uncompressed portion of the packets arriving at the node. With the information provided by the collector, the arrival rate can be calculated as λ_c = λ_e(1 − p_c) + λ_g, while the service rate is 1/T_p and the utilization equals λ_c·T_p. In addition, we assume the compression processing time has little variation within a time slot, so its coefficient of variation is approximately 0. By Equation (2), the increased compression time for a packet can be derived as

T_com = (2T_p − λ_c·T_p²) / (2(1 − λ_c·T_p)).
As the ultimate goal of compression is to reduce the average delay, which is equivalent to reducing the sum of the delays of all packets, we define the normalized delay as the total delay for all nodes in a unit of time. Thus, for a time interval t, the total increased delay due to compression is λ_c·t·T_com, and the normalized delay increase is λ_c·T_com.
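The added compression delay is just Eq. (2) applied to an M/D/1 queue (cv = 0). A sketch returning both the per-packet delay and its normalized form:

```python
def compression_delay(lam_c, t_p):
    """Per-packet delay added by the compression queue, Eq. (2) with
    cv = 0: Tcom = (2*Tp - lam_c*Tp**2) / (2*(1 - lam_c*Tp)).
    Returns (Tcom, normalized increase lam_c * Tcom)."""
    rho = lam_c * t_p
    assert rho < 1, "compression queue must be stable"
    t_com = (2 * t_p - lam_c * t_p**2) / (2 * (1 - rho))
    return t_com, lam_c * t_com
```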
Now let us look at the delay reduction in packet delivery after compression. While it is difficult to calculate this reduction accurately, we can easily obtain a lower bound. Given the compression ratio r_c and the original packet length L, the packet length is shortened by L − L/r_c after compression, and its transmission time is reduced by at least L(r_c − 1)/(r_c·Bw). For a level-i node, there are i transmissions of this packet after compression. Thus, the total reduction is at least ΔT_min = i·L(r_c − 1)/(r_c·Bw). We compare this lower bound with the increased compression time T_com. If T_com ≤ ΔT_min, the node is switched to the Compression state.
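This first, cheap test can be sketched as follows (a hypothetical helper; packet length in bits and bandwidth in bit/s are our conventions):

```python
def fast_compression_check(level, packet_len_bits, r_c, bw, t_com):
    """Switch to Compression immediately if Tcom <= dTmin, where
    dTmin = i * L * (r_c - 1) / (r_c * Bw) is the lower bound on
    the transmission-time saving over the i hops to the sink."""
    d_t_min = level * packet_len_bits * (r_c - 1) / (r_c * bw)
    return t_com <= d_t_min
```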
When T_com > ΔT_min, we need to calculate the normalized delay reduction. In fact, if only one node performs compression, there will be no extra reduction beyond the reduced transmission time, because the surrounding traffic will not change. However, according to our network model, nodes at the same level share very similar traffic conditions; thus, although made independently, their state decisions are likely to coincide. Therefore, when we consider the delay reduction due to compression on a node, it is reasonable to assume that other nodes at the same level will also perform compression. In this sense, compression actually affects the transmissions of all packets in the node, rather than only the uncompressed packets. Furthermore, such an effect arises not only at the local node but also at the downstream nodes on the routing path when they receive the compressed packets. Thus, we need to examine the delay reduction level by level.
The delay reduction on each node can be calculated as the
difference of the average packet delay in the transmission
queue before and after compression. By the queueing
model and Equation (2), the calculation of packet delay
requires three parameters: the arrival rate, and the mean and coefficient of variation of the service time.
We first examine the delay reduction at the local node. For simplicity of illustration, assume the node is at level i. The packet delay before compression can be easily obtained, as the three required parameters are already computed in the collector as λ_g + λ_e, T_mac, and c_mac, respectively. When compression is performed, the arrival rate does not change. Next, we derive the average service time after compression from T_mac. Based on the analysis in Section IV-B.2, if the transmission delays of control packets are ignored due to their relatively short lengths, the service time is approximately proportional to the packet transmission time and hence to the average packet length. Assume the original packet length is L; the compressed packet length is then L/r_c, where r_c is the compression ratio obtained in the collector. Thus the average packet length is L − p_c(L − L/r_c) before compression and L/r_c after compression. The average service time after compression can then be calculated as

T′_mac = T_mac · (L/r_c) / (L − p_c(L − L/r_c)) = T_mac / (r_c + p_c − r_c·p_c).
In addition, we assume the service time after compression follows the same distribution, so the coefficient of variation remains unchanged. Then the packet delay after compression can be obtained, and so can the delay reduction, denoted as ΔT_mac(i).
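The service-time scaling is a one-line computation; a sketch:

```python
def service_time_after_compression(t_mac, r_c, p_c):
    """Mean MAC service time once all packets are compressed:
    T'_mac = T_mac / (r_c + p_c - r_c * p_c)."""
    return t_mac / (r_c + p_c - r_c * p_c)
```

As sanity checks, p_c = 0 (nothing yet compressed) gives T_mac/r_c, and p_c = 1 (everything already compressed) leaves T_mac unchanged.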
Now we estimate the delay reduction at a level-k node (k < i) on the routing path. By Equation (1), we can derive the arrival rate from λ_g and λ_e. We use the average packet length again to calculate the service time from T_mac. Since the local node is in the No-Compression state, the current packet generation rate should be lower than the threshold rate. As shown in Fig. 1, the threshold rate of lower level nodes is higher than that of higher level nodes; therefore, it is reasonable to assume that nodes at level k with the same packet generation rate are also in the No-Compression state. The average packet length and the service time can then be calculated in a way similar to that of the local node. We denote the reduction at level k as ΔT_mac(k).
The normalized delay reduction can be calculated as

ΔT_mac = Σ_{j=1}^{i} λ^j · ΔT_mac(j).
The decision on the state of the node is then made by comparing ΔT_mac with λ_c·T_com. The state determination procedure is given in pseudocode in Table 3.
TABLE 3
STATE DETERMINATION PROCEDURE, PERFORMED AT THE END OF EACH TIME SLOT FOR A NODE IN THE NO-COMPRESSION STATE

For each node at level i:
if state = No-Compression then
    read r_c, T_p, λ_g, λ_e, p_c, T_mac and c_mac
    compute T_com and ΔT_min
    if T_com ≤ ΔT_min then
        set state to Compression
    else
        set i to the node's level number
        set ΔT_mac to zero
        while i > 0
            calculate λ^i
            compute reduction ΔT_mac(i)
            add λ^i·ΔT_mac(i) to ΔT_mac
            decrease i by one
        end while
        if λ_c·T_com ≤ ΔT_mac then
            set state to Compression
        end if
    end if
end if
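Putting the pieces together, the Table 3 procedure might be rendered as follows. This is a simplified, hypothetical sketch: it reuses the locally measured T_mac at every level, inlines Equations (1) and (2), and omits the Compression-state branch described below.

```python
def decide_state(level, lam_g, lam_e, p_c, r_c, t_p,
                 t_mac, cv_mac, packet_len_bits, bw):
    """One end-of-slot decision for a node currently in No-Compression."""
    def mg1_delay(lam, mean_s, cv):
        # Eq. (2): average packet delay in an M/G/1 queue
        rho = lam * mean_s
        return (2*rho - rho**2 * (1 - cv**2)) / (2 * lam * (1 - rho))

    lam_c = lam_e * (1 - p_c) + lam_g                 # uncompressed arrivals
    t_com = (2*t_p - lam_c * t_p**2) / (2 * (1 - lam_c * t_p))

    d_t_min = level * packet_len_bits * (r_c - 1) / (r_c * bw)
    if t_com <= d_t_min:                              # cheap lower-bound test
        return "Compression"

    scale = r_c + p_c - r_c * p_c                     # T'_mac = T_mac / scale
    reduction = 0.0
    lam_k = lam_g + lam_e                             # total rate at level k
    for k in range(level, 0, -1):
        before = mg1_delay(lam_k, t_mac, cv_mac)
        after = mg1_delay(lam_k, t_mac / scale, cv_mac)
        reduction += lam_k * (before - after)         # normalized reduction
        if k > 1:                                     # Eq. (1): level k-1 rate
            lam_k = lam_g + (2*k - 1) / (2*k - 3) * lam_k
    return "Compression" if lam_c * t_com <= reduction else "No-Compression"
```

Intuitively, the first branch fires when compression is obviously cheap; otherwise the loop accumulates the per-level normalized delay reductions and weighs them against the normalized compression cost λ_c·T_com.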
When the node is currently in the Compression state, all
the calculations above will also be performed. The only
difference is that we compare the reduced processing time
with the increased transmission time due to compression
cancellation.
V. P ERFORMANCE EVALUATIONS
With the proposed on-line adaptive compression algo-
rithm, the packets can be dynamically determined to be
compressed or not at each node, adapting to the current net-
work and hardware environment. In this section, we present
the performance evaluation results for the proposed adap-
tive compression algorithm. We compare the performance
of our adaptive scheme with other two schemes: the no-
compression scheme in which no packets are compressed
at all, and the compression scheme where all packets are
compressed in data gathering.
The experiments are conducted in a 7×7 grid network with the sink at the center node. The transmission range is 1.5 times the neighboring distance. The original packet length is set to 512 B. Other parameters are similar to the configuration of the previous experiments in Section III. In the simulation, the packet generation rate starts at 1.5 and is increased by 0.125 every 300 seconds. The increase continues until the maximum rate is reached, at which point the simulation ends. The time slot length used in the information collector is 20 s. Fig. 7 shows the average end-to-end delays for the three schemes. The delays for different levels and the average delays over all packets are displayed
to provide a comprehensive illustration. By observing the
average delays among all packets, we can see that the delay of the adaptive scheme is always close to the lower of the No-compression and Compression schemes, indicating good overall adaptiveness of the algorithm to the network traffic. In particular, when the generation rate is lower than the threshold rate, the adaptive scheme chooses not to compress packets and thus yields nearly the same results as the no-compression scheme. However, when the generation rate is slightly higher than the threshold, i.e., between 2 and 2.3, the adaptive scheme incurs a slightly longer delay than the compression scheme. The
largest increase in delay is 17% when the generation rate
is 2. The reason is that as the adaptive algorithm makes
decisions completely based on the local information that
may deviate from the average, nodes sometimes may make
incorrect decisions. On the other hand, the fact that a
small deviation can change the compression decision also
indicates the benefit of compression is marginal in this case.
Thus, a good overall performance of the adaptive algorithm
is always achieved.
By looking at the delays for different node levels, we can
also notice that for nodes at level 1, the adaptive scheme outperforms the other two schemes when the generation rate exceeds the threshold rate. For nodes at levels 2 and 3 at the same generation rate, however, the average delays under the adaptive scheme are slightly longer than those under the compression scheme. This can be explained
by the observation that the threshold rates for higher level
nodes are lower than the thresholds for lower level nodes.
When the generation rate is between two different threshold
rates, it is possible that the higher level nodes perform
compression but the lower level nodes do not. Therefore,
without suffering from the compression processing delay,
lower level nodes can still enjoy the benefit of the reduced
average packet length as they receive packets from higher
level nodes, resulting in shorter delays than those under the compression scheme.

Fig. 7. The average packet delays for different levels under the three schemes (panels: Level 1, Level 2, Level 3, and Average; x-axis: packet generation rate (/s); y-axis: end-to-end delay (s); curves: No-compression, Adaptive, Compression).

This analysis can be verified in Fig. 8,
which graphs the proportions of time when compression is
performed in each node when the packet generation rate is
at 2. Clearly, the figure shows that higher level nodes are much more likely to perform compression than lower level nodes.
Fig. 8. The portion of the time used in compression in each node when
the packet generation rate is 2. The sink is located at (4,4). The portion is
represented by the area size of the circle, ranging from 0 to 1.
VI. CONCLUSIONS
In this paper, we have studied the effect of the com-
pression on end-to-end packet delay in data gathering in
WSNs. To incorporate the hardware processing time of
the compression algorithm in our experiments, we utilized
a software estimation approach to measuring the execution
time of a lossless compression algorithm LZW on micro-
controller TI MSP430F5418. Through extensive simula-
tions on NS-2, we found that compression has a two-sided
effect on packet delay in data gathering. While compression
increases the maximum achievable throughput, it tends
to increase the packet delay under light traffic loads and
reduce the packet delay under heavy traffic loads. We also
evaluated the impact of different network settings on the
effect of compression, providing a guideline for choosing
appropriate compression parameters.
We then proposed an on-line adaptive compression al-
gorithm to help make compression decisions based on the
current network and hardware conditions. The adaptive
algorithm runs in a completely distributed manner. Based
on a queueing model, the algorithm utilizes the local in-
formation to accurately estimate the overall network con-
dition and switches the node state between Compression
and No-Compression according to the potential benefit of
compression. Our extensive experimental results show
that the proposed adaptive algorithm can fully exploit the
benefit of compression while avoiding the potential hazard
of compression. Finally, the proposed adaptive algorithm is not restricted to the LZW compression algorithm; in fact, it can be applied to any practical compression algorithm by simply replacing the compressor in ACS.
REFERENCES
[1] TI MSP430F5418, http://focus.ti.com/docs/prod/folders/print/
msp430f5418.html.
[2] S. Zhu, W. Wang and C.V. Ravishankar, “PERT: a new power-
efficient real-time packet delivery scheme for sensor networks,”
International Journal of Sensor Networks, vol. 3, 2008.
[3] S. Pattem, B. Krishnamachari and R. Govindan,“The impact of
spatial correlation on routing with compression in wireless sensor
networks,” ACM Trans. Sensor Networks, vol. 4, no. 4, pp. 1-33,
2008.
[4] C.M. Sadler and M. Martonosi, “Data compression algorithms for
energy-constrained devices in delay tolerant networks,” Proc. of
SenSys, 2006.
[5] K.C. Barr and K. Asanović, “Energy-aware lossless data compression,” ACM Trans. Computer Systems, vol. 24, no. 3, pp. 250-291, 2006.
[6] A. Jindal and K. Psounis, “Modeling spatially correlated data in
sensor networks,” ACM Trans. Sensor Networks, vol. 2, no. 4, pp.
466-499, 2006.
[7] J. Xiao, A. Ribeiro, Z. Luo and G.B. Giannakis, “Distributed
compression-estimation using wireless sensor networks,” IEEE Sig-
nal Processing Magazine, vol. 23, pp. 27-41, 2006.
[8] A. Scaglione and S. Servetto, “On the interdependence of routing and
data compression in multi-hop sensor networks,” Wireless Networks,
vol. 11, pp. 149-160, 2005.
[9] H. Zhai, Y. Kwon and Y. Fang, “Performance analysis of IEEE
802.11 MAC protocols in wireless LANs,” Wireless Communica-
tions and Mobile Computing, 2004.
[10] S.J. Baek, G. de Veciana and X. Su, “Minimizing energy consump-
tion in large-scale sensor networks through distributed data compres-
sion and hierarchical aggregation,” IEEE Journal on Selected Areas
in Communications, vol. 22, no. 6, pp. 1130-1140, 2004.
[11] D. Marco, E.J. Duarte-Melo, M. Liu and D.L. Neuhoff, “On the
many-to-one transport capacity of a dense wireless sensor network
and compressibility of its data,” Lecture Notes in Computer Science,
2003.
[12] T. He, J. A. Stankovic, C. Lu and T. Abdelzaher, “SPEED: a stateless
protocol for real-time communication in sensor networks,” Proc. of
IEEE ICDCS, 2003.
[13] T. He, B.M. Blum, J.A. Stankovic and T. Abdelzaher, “AIDA:
adaptive application independent data aggregation in wireless sensor
networks,” ACM Trans. Embedded Computing Systems, 2003.
[14] S.S. Pradhan, J. Kusuma and K. Ramchandran, “Distributed com-
pression in a dense microsensor network,” IEEE Signal Processing
Magazine, vol. 19, no. 2, pp. 51-60, 2002.
[15] X. Hong, M. Gerla, H. Wang and L. Clare, “Load balanced, energy-
aware communications for mars sensor networks,” Proc. of the
Aerospace Conference, vol. 3, 2002.
[16] IEEE Standard for Wireless LAN Medium Access Control (MAC)
and Physical Layer (PHY) specifications, ISO/IEC 8802-11:
1999(E), 1999.
[17] T.A. Welch, “A technique for high-performance data compression,” IEEE Computer, vol. 17, no. 6, pp. 8-19, 1984.
[18] D. Slepian and J.K. Wolf, “Noiseless coding of correlated information sources,” IEEE Trans. Inform. Theory, vol. 19, pp. 471-480, 1973.
[19] NIST/SEMATECH e-Handbook of Statistical Methods, Chapter 6,
http://www.itl.nist.gov/div898/handbook/.
... Traffic models in WSNs depend largely on network applications and behavior of sensed events [17]- [19]. In this work, each node generates data messages for the sink node. ...
... In addition to the messages generated locally by each node, any node can cooperatively relay packets originated by other nodes. Further, we assume that the distribution for the number of message arrivals generated by each node during the time interval between and ( + ) with the average message arrival rate of per node is given by the following expression [17]: ...
... We assume also that the inter-arrival times for the generated messages follow an exponential distribution with a probability density function ( ) ( ) = − ≥ 0. The nodes which are located in close vicinity to the sink node will have a high duty cycle compared to other nodes further away. Proper assumptions of realistic traffic models in performance evaluation of protocols in WSNs are important for accurate modeling and analysis, which ensures that protocols for WSNs are designed as effectively as possible [17]. ...
... A summary of the main related works in M2M communications is given in Table 1. For wireless sensor network (WSN) and other wireless networks, there has been some notable work to reduce the network load based on data [34,35]. Although adaptive data compression has been used in WSN [34], bandwidth management [35], and location update [36], it is not applied to M2M communications, involving uplink transmissions from a myriad of devices. ...
... For wireless sensor network (WSN) and other wireless networks, there has been some notable work to reduce the network load based on data [34,35]. Although adaptive data compression has been used in WSN [34], bandwidth management [35], and location update [36], it is not applied to M2M communications, involving uplink transmissions from a myriad of devices. This motivates us to explore adaptive [34][35][36] data compression techniques for designing efficient, novel communication and resource management strategy in M2M gateways. ...
... Although adaptive data compression has been used in WSN [34], bandwidth management [35], and location update [36], it is not applied to M2M communications, involving uplink transmissions from a myriad of devices. This motivates us to explore adaptive [34][35][36] data compression techniques for designing efficient, novel communication and resource management strategy in M2M gateways. Clustering and compression are widely used concepts in the wireless network; they have not been used to take benefit of delay-tolerant properties of M2M communication. ...
Article
Machine-to-machine (M2M) communications and applications are expected to play a significant role in the next-generation wireless networks. M2M communication features a myriad of devices (machines), generating a massive number of intermittent, small-sized data transmissions. Traditionally, gateways are used to provide wireless connectivity to such a large number of devices. However, with the emerging concept of Internet of Things (IoT) and 5G wireless technology, the number of devices is expected to increase exponentially. This exponential increase in the number of devices raises a significant question in the ability of traditional gateways to provide connectivity and schedule uplink transmissions. In this paper, we propose a novel Grouping and Adaptive Compression (GAC) strategy in the M2M gateways to reduce the number of uplink requests. The gateways first group the machines into different clusters and subsequently explore adaptive compression to reduce the uplink traffic to the base stations (BS). Simulation results on typical M2M communications demonstrate that our proposed GAC algorithm reduces the number of uplink requests and corresponding radio resource requirement by around 60% and 50%, respectively. Moreover, it also increases the existing BS capacity by almost 10 times in terms of M2M devices, thus addressing the key challenge of the exponential increase in M2M deployment.
... 0 ≤ fij ≤ cmax ∀i ∈ V, ∀j ∈ Oi (11) and (2) It is easy to verify that the optimization objective is convex. This is because constraints (10) and (11) are linear and (2) is convex, which show the reformulated problem PP2 is convex. To obtain distributed solutions in the context of WSNs, the main obstacle to solve problem PP2 distributively is that optimization variables R i and f ij are coexisted in constraint (10). ...
... Reducing the packet size will also reduce the packet transmission time and conflict on the wireless channel. To reduce the end-to-end packet delay a new [7] compression technique is used to reduce data size by exploiting data redundancy. Multi-parent wake-up scheduling was presented in [8] as a technique for providing bi-directional end-to-end latency guarantees while optimizing the node battery lifetime. ...
Conference Paper
Energy efficiency is one of the most important design metrics for wireless sensor networks. As sensor data always have redundancies, compression is introduces for energy savings. However, different emphases on algorithm design influence the operation effect of compression under various applications and network environments. In order to improve the energy utilization efficiency for the whole network, an adaptive data compression is proposed in this paper, which realizes a real-time adjustment of compression strategy. By prediction and feature extraction of several relevant parameters, the algorithm provides optimal execution strategies for each sensor node in the network. The simulation results show that, the proposed compression scheme enables all nodes to complete data communication with near optimal energy consumptions, and the maximum deviation against the ideal condition is no more than 5%. Moreover, the algorithm can effectively act on different data precision, transmit power and retransmission rate to meet the dynamic requirements of the network with only a few costs introduced.
Article
We propose a power-efficient transmission scheme for wireless sensor networks. In this scheme, the sensor nodes compress their corresponding source-relay channel state information (CSI) and transmit this compressed CSI sequence along with a selected subset of their received bits from the source, which we refer to as the good bits. We assume slowly varying fading channels between source and relays and analytically derive the compression rate of the CSI sequence. We then study the relay-fusion centre link and find an optimal threshold for our proposed scheme, based on the target bit error rate and the packet delivery ratio of the network. Combining the threshold optimization and the reliable bit transmission schemes, we study the total number of transmitted bits for our proposed system. We show that for slowly varying fading channels, our proposed scheme considerably reduces the number of transmitted bits, and consequently the transmission power of the relay nodes compared with a conventional scheme where all bits are transmitted to the fusion center. We also compare the performance of our proposed scheme with a special decode and forward (SDF) scheme previously introduced in the literature. In order to have a fair comparison, we modify the SDF scheme, such that the modified scheme includes the originally proposed SDF scheme as a special case. We provide detailed comparisons and discussions on the achieved bit error rate, energy efficiency, and feasibility of the proposed and the modified SDF schemes.
Conference Paper
Self-Organization is based on adaptivity. Adaptivity should start with the very basic fundamental communication tasks such as encoding the information to be transmitted or stored. Obviously, the less signal transmitted the less energy in transmission used. In this paper we present a novel on-line and entropy adaptive compression scheme for streaming unbounded length inputs. The scheme extends the window dictionary Lempel-Ziv compression, is adaptive and is tailored to on-line compress inputs with non stationary entropy. Specifically, the window dictionary size is changed in an adaptive manner to fit the current best compression rate for the input. On-line Entropy Adaptive Compression scheme (EAC), that is introduced and analyzed in this paper, examines all possible sliding window sizes over the next input portion to choose the optimal window size for this portion, a size that implies the best compression ratio. The size found is then used in the actual compression of this portion. We suggest an adaptive encoding scheme, which optimizes the parameters block by block, and base the compression performance on the optimality proof of Lempel Ziv algorithm when applied to blocks. The EAC scheme was tested over files of different types (docx, ppt, jpeg, xls) and over synthesized files that were generated as segments of homogeneous Markov Chains. Our experiments demonstrate that the EAC scheme typically provides a higher compression ratio than LZ77 does, when examined in the scope of on-line per-block compression of transmitted (or compressed) files.
Article
For wireless sensor networks of real-time data transmission phenomenon, a new routing method is proposed based on the uneven cluster network model. Through setting deadline for collecting data, estimating link delay and considering the factors influencing the validity of receiver's receiving data such as the deadline of information, link delay, etc, a new routing method fit for multi-service delay requirements is proposed. Simulation results show that the routing method can ensure the validity of the information.
Article
Full-text available
In densely deployed wireless sensor networks (WSN), sensor observations are highly correlated in the space domain. Furthermore, the nature of the physical phenomenon constitutes the temporal correlation between each consecutive observation of a sensor node. These spatial and temporal correlations along with the collaborative nature of the WSN bring significant potential advantages for the development of efficient communication proto-cols well-suited for the WSN paradigm. In this paper, a theoretical framework is developed to model the spatial and temporal correlations in sensor networks. The objective of this framework is to enable the development of efficient communication protocols which exploit these advantageous intrinsic features of the WSN paradigm. Based on this framework, possi-ble approaches are explored to exploit spatial and temporal correlation for efficient medium access and reliable event transport in WSN, respectively.
Conference Paper
Full-text available
Emerging applications of wireless sensor networks (WSNs) require real-time quality-of-service (QoS) guarantees to be provided by the network. Due to the nondeterministic impacts of the wireless channel and queuing mechanisms, probabilistic analysis of QoS is essential. One important metric of QoS in WSNs is the probability distribution of the end-to-end delay. Compared to other widely used delay performance metrics such as the mean delay, delay variance, and worst-case delay, the delay distribution can be used to obtain the probability to meet a specific deadline for QoS-based communication in WSNs. To investigate the end-to-end delay distribution, in this paper, a comprehensive cross-layer analysis framework, which employs a stochastic queueing model in realistic channel environments, is developed. This framework is generic and can be parameterized for a wide variety of MAC protocols and routing protocols. Case studies with the CSMA/CA MAC protocol and an anycast protocol are conducted to illustrate how the developed framework can analytically predict the distribution of the end-to-end delay. Extensive test-bed experiments and simulations are performed to validate the accuracy of the framework for both deterministic and random deployments. Moreover, the effects of various network parameters on the distribution of end-to-end delay are investigated through the developed framework. To the best of our knowledge, this is the first work that provides a generic, probabilistic cross-layer analysis of end-to-end delay in WSNs.
Article
Wireless transmission of a single bit can require over 1000 times more energy than a single 32-bit computation. It can therefore be beneficial to perform additional computation to reduce the number of bits transmitted. If the energy required to compress data is less than the energy required to send it, there is a net energy savings and an increase in battery life for portable computers. This article presents a study of the energy savings possible by losslessly compressing data prior to transmission. A variety of algorithms were measured on a StrongARM SA-110 processor. This work demonstrates that, with several typical compression algorithms, there is actually a net energy increase when compression is applied before transmission. Reasons for this increase are explained and suggestions are made to avoid it. One such energy-aware suggestion is asymmetric compression, the use of one compression algorithm on the transmit side and a different algorithm for the receive path. By choosing the lowest-energy compressor and decompressor on the test platform, overall energy to send and receive data can be reduced by 11% compared with a well-chosen symmetric pair, or up to 57% over the default symmetric zlib scheme.
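The break-even condition described above can be sketched as a simple energy budget: compression pays off only when the transmission energy saved by the removed bits exceeds the energy spent compressing. The function and all numbers below are illustrative assumptions, not measurements from the study.

```python
def net_energy_saving(raw_bits, ratio, e_tx_per_bit, e_comp_per_bit,
                      e_decomp_per_bit=0.0):
    """Energy saved by compressing before transmission (positive = worth it).

    raw_bits: uncompressed payload size in bits
    ratio: compressed/raw size (e.g. 0.5 means 2:1 compression)
    e_tx_per_bit, e_comp_per_bit, e_decomp_per_bit: per-bit energies
    (same unit, e.g. nJ/bit). All values here are hypothetical.
    """
    sent_bits = raw_bits * ratio
    e_without = raw_bits * e_tx_per_bit
    e_with = (sent_bits * e_tx_per_bit
              + raw_bits * (e_comp_per_bit + e_decomp_per_bit))
    return e_without - e_with

# With cheap compression the saving is positive; with an expensive
# algorithm the same budget turns negative, matching the article's finding
# that compression can increase net energy.
saving = net_energy_saving(raw_bits=8000, ratio=0.5,
                           e_tx_per_bit=1000.0, e_comp_per_bit=400.0)
```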
Article
Wireless ad hoc networks have played an increasingly important role in a wide range of applications. A key challenge in such networks is to achieve maximum lifetime for battery-powered mobile devices with dynamic energy-efficient algorithms. Recent studies in battery technology have revealed that the behavior of battery discharging is more complex than we used to know. Battery-powered devices might waste a huge amount of energy if their battery discharging is not carefully scheduled and budgeted. In this paper, we introduce a novel energy model for batteries and study the effect of battery behavior on routing in wireless ad hoc networks. We first propose an online computable discrete-time mathematical model to capture battery discharging behavior. The model has low computational complexity and data storage requirement. It is therefore suitable for online battery capacity computation in routing. Our evaluations indicate that the model can accurately capture the behavior of battery discharging. Based on this battery model, we then propose a battery-aware routing (BAR) scheme for wireless ad hoc networks. BAR is a generic scheme that implements battery awareness in routing protocols and is independent of any specific routing protocol. By dynamically choosing the devices with well-recovered batteries as routers and leaving the “fatigue” batteries for recovery, the BAR scheme can effectively recover the device's battery capacity to achieve higher energy efficiency. Our simulation results demonstrate that, by adopting the BAR scheme, network lifetime and data throughput can be increased by up to 28% and 24%, respectively. The results also show that BAR achieves good performance in various networks composed of different devices, batteries, and node densities. Finally, we also propose an enhanced prioritized BAR (PBAR) scheme for time-sensitive applications in wireless ad hoc networks. 
Our simulation results illustrate that PBAR achieves good performance in terms of end-to-end delay and data throughput.
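A much-simplified, hypothetical discrete-time battery model in the spirit of the one described above can illustrate the idea of battery-aware routing: capacity drains under load and recovers slightly while idle, so the router choice should favor well-recovered nodes. The recovery rule and all constants are illustrative, not the paper's model.

```python
class Battery:
    """Toy discrete-time battery: drains with load, recovers while idle."""

    def __init__(self, capacity):
        self.nominal = capacity
        self.level = capacity

    def step(self, load):
        if load > 0:
            # active slot: charge drawn equals the offered load
            self.level = max(0.0, self.level - load)
        else:
            # idle slot: recover 1% of the depleted charge (assumed rate)
            self.level = min(self.nominal,
                             self.level + 0.01 * (self.nominal - self.level))

def pick_router(batteries):
    # battery-aware routing: prefer the node with the best-recovered battery
    return max(range(len(batteries)), key=lambda i: batteries[i].level)
```

Leaving a drained node idle for a few slots lets its level creep back up, which is the recovery effect the BAR scheme exploits.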
Article
The physical phenomena monitored by sensor networks, e.g., forest temperature or water contamination, usually yield sensed data that are strongly correlated in space. With this in mind, researchers have designed a large number of sensor network protocols and algorithms that attempt to exploit such correlations. To carefully study the performance of these algorithms, there is an increasing need to synthetically generate large traces of spatially correlated data representing a wide range of conditions. Further, a mathematical model for generating synthetic traces would provide guidelines for designing more efficient algorithms. These reasons motivate us to obtain a simple and accurate model of spatially correlated sensor network data. The model can capture correlation in data irrespective of the node density, the number of source nodes or the topology. We describe a mathematical procedure to extract the model parameters from real traces and generate synthetic traces using these parameters. Then, we validate our model by statistically comparing synthetic data and experimental data, as well as by comparing the performance of various algorithms whose performance depends on the degree of spatial correlation. Finally, we create a tool that can be easily used by researchers to synthetically generate traces of any size and degree of correlation.
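One common way to generate spatially correlated synthetic readings, sketched below, is to draw a jointly Gaussian field whose covariance decays exponentially with inter-node distance. The exponential covariance and the range parameter `theta` are standard illustrative choices, not necessarily the exact model fitted in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
positions = rng.uniform(0, 100, size=(n, 2))   # n nodes in a 100x100 field

# Pairwise distances and an exponential correlation model exp(-d/theta).
d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=2)
theta = 20.0                                   # correlation range (assumed)
cov = np.exp(-d / theta)

# Sample a correlated Gaussian field via Cholesky (small jitter for
# numerical stability), then shift/scale to plausible temperatures.
L = np.linalg.cholesky(cov + 1e-9 * np.eye(n))
readings = 25.0 + 2.0 * (L @ rng.standard_normal(n))
```

Nearby nodes end up with similar readings while distant ones decorrelate, which is exactly the property the protocols above exploit.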
Conference Paper
In wireless sensor networks, the energy consumption of participating nodes has crucial impact on the resulting network lifetime. Data compression is a viable approach towards preserving energy by reducing packet sizes and thus minimizing the activity periods of the radio transceiver. In this paper, we propose a compression framework utilizing a stream-oriented compression scheme for sensor networks. It is specifically tailored to the capabilities of employed nodes and network traffic characteristics, which we determine in a characterization of WSN traffic patterns. To mitigate the inapplicability of traditional compression approaches, we present the squeeze KOM compression layer. By shifting data compression into a dedicated layer, only minor modifications to applications are required, while efficient data transfer between nodes is provided. As a proof-of-concept, we implement a stream-based compression algorithm on sensor nodes and perform an experimental analysis to determine the potential gains under realistic traffic conditions. Results indicate that our presented lossless stream-oriented payload compression leads to considerable savings.
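The idea of a stream-oriented compression layer can be sketched with Python's `zlib` as a stand-in: a single compression stream is kept open across packets, so the dictionary built from earlier readings keeps paying off on later ones. This illustrates the concept only; it is not the squeeze KOM implementation.

```python
import zlib

comp = zlib.compressobj(level=6)
decomp = zlib.decompressobj()

def send(payload: bytes) -> bytes:
    # Z_SYNC_FLUSH emits a decodable chunk without closing the stream,
    # so per-packet delivery and cross-packet history can coexist.
    return comp.compress(payload) + comp.flush(zlib.Z_SYNC_FLUSH)

def receive(chunk: bytes) -> bytes:
    return decomp.decompress(chunk)

pkt = b"temp=21.4;humidity=55;" * 4   # repetitive sensor payload
wire = send(pkt)
assert receive(wire) == pkt            # lossless round trip
```

Because the stream state persists between packets, repetitive sensor traffic compresses progressively better than it would with one-shot, per-packet compression.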
Article
In this paper we analyze the average end-to-end delay and maximum achievable per-node throughput in random access multihop wireless ad hoc networks with stationary nodes. We present an analytical model that takes into account the number of nodes, the random packet arrival process, the extent of locality of traffic, and the backoff and collision-avoidance mechanisms of random access MAC. We model random access multihop wireless networks as open G/G/1 queuing networks and use the diffusion approximation in order to evaluate closed form expressions for the average end-to-end delay. The mean service time of nodes is evaluated and used to obtain the maximum achievable per-node throughput. The analytical results obtained here from the queuing network analysis are discussed with regard to similarities and differences from the well established information-theoretic results on throughput and delay scaling laws in ad hoc networks. We also investigate the extent of deviation of delay and throughput in a real world network from the analytical results presented in this paper. We conduct extensive simulations in order to verify the analytical results and also compare them against NS-2 simulations.
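The flavor of such a G/G/1 delay analysis can be shown with Kingman's classic approximation for the mean waiting time at a single node; the diffusion approximation used in the paper is a more refined relative of this formula, and the traffic numbers below are assumptions.

```python
def gg1_wait(lam, mu, ca2, cs2):
    """Mean queueing delay at one G/G/1 node (Kingman approximation).

    lam: arrival rate, mu: service rate,
    ca2/cs2: squared coefficients of variation of interarrival/service times.
    Exact for M/M/1 (ca2 = cs2 = 1).
    """
    rho = lam / mu
    assert rho < 1.0, "node must be stable"
    return (rho / (1.0 - rho)) * ((ca2 + cs2) / 2.0) * (1.0 / mu)

# End-to-end delay over a path: sum per-hop waiting plus service times.
# Three identical hops with assumed rates lam=4, mu=10 and Poisson-like traffic.
hops = [(4.0, 10.0, 1.0, 1.0)] * 3
e2e = sum(gg1_wait(*h) + 1.0 / h[1] for h in hops)
```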
Article
The IEEE 802.11 standard for wireless local area networks is based on the CSMA/CA protocol for supporting asynchronous data transfers. CSMA/CA uses an acknowledgment mechanism for verifying successful transmissions and, optionally, a handshaking mechanism for decreasing collision overhead. In both cases, an exponential backoff mechanism is used. This work investigates the theoretical performance of both mechanisms in terms of throughput and delay under traffic conditions that correspond to the maximum load that the network can support in stable conditions. We present extensive numerical results in order to highlight the effect of the backoff mechanism parameters on network performance for both mechanisms.
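The backoff mechanism studied in such analyses can be sketched numerically: the expected number of slots a station spends backing off before success, given a per-attempt collision probability p. The CWmin/CWmax values follow the common 802.11 DSSS parameters (31/1023); treat the whole function as an illustrative assumption, not the paper's model.

```python
def mean_backoff_slots(p, cw_min=31, cw_max=1023, max_retries=7):
    """Expected total backoff (in slots) before a successful transmission.

    p: probability that any single attempt collides (assumed constant).
    """
    total, reach = 0.0, 1.0          # reach = P(attempt i is needed)
    cw = cw_min
    for _ in range(max_retries):
        total += reach * cw / 2.0    # uniform draw in [0, cw] averages cw/2
        reach *= p                   # next stage happens only after a collision
        cw = min(2 * cw + 1, cw_max) # window doubles up to CWmax
    return total
```

Evaluating this for increasing p shows how quickly the exponential backoff inflates delay as the network approaches its maximum stable load.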