Comparison of Lossless Data Compression
Techniques in Low-Cost Low-Power (LCLP) IoT
Systems
Aravind Hanumanthaiah, Athira Gopinath, Chandni Arun, Balaji Hariharan, Ravisankar Murugan
Amrita Center for Wireless Networks & Applications (AmritaWNA)
Amrita School of Engineering, Amritapuri, Amrita Vishwa Vidyapeetham, India
aravindh@am.amrita.edu, athirag@am.amrita.edu, Chandniarun@am.amrita.edu, balajih@am.amrita.edu, ravisankar@am.amrita.edu
Abstract—With the recent advances and proliferation of Internet of Things (IoT) devices, there is a huge demand placed on their infrastructure requirements. The amount of data generated by these small low-cost, low-power (LCLP) IoT devices is phenomenal, and at the same time, because the devices are low-powered, they cannot be used to perform complex computations and other algorithm implementations. There are also limitations in communication data rates at different stages in a Wireless Sensor Network (WSN), which mainly uses wireless technologies such as Bluetooth, Zigbee, and LoRa to achieve low-power communication. These technologies come with limited bandwidth and are not very reliable at high data rates. Hence, the challenge of handling high amounts of data over low-bandwidth communication technologies is one of the main hurdles in efficient LCLP IoT system deployments. To address this problem, we propose a combination of data compression techniques that reduces data size without affecting the quality of the data. This paper describes the implementation of a combination of Delta and RLE compression techniques on specific sensor data, particularly those used in our deployment of the world's first Wireless Sensor Network-based system for early warning and monitoring of rainfall-induced landslides in Southern India. The test results show a good compression ratio of 52.67% for a 12-bit ADC, without compromising the quality of the data. The scheme has been implemented on a Programmable System-on-a-Chip (PSoC) and the results are presented.
Index Terms—Data compression, Lossless methods, Lossy
methods, Compression ratio, Compression rate, Space Saving,
RLE [Run Length Encoding]
I. INTRODUCTION
The ubiquitous presence of IoT has enabled researchers to
collect data with ease, especially for data mining. Also, in
certain fields, such as seismic-related research, researchers prefer
to collect more data at a high sampling rate in order to detect
rare events. As a result, a lot of redundant data may be
transmitted unnecessarily. Each stage in an embedded system
also has throughput limitations while sending and receiving
data. For example, a 3G GPRS module has a throughput of
384 kbit/s [1].
This challenge is common in real-time video and audio
data streaming [22]. Hence the data transmitted by an LCLP
IoT system must be limited to a value that can be handled by
the supported communication technologies.
Fig. 1. Basic block diagram of the compression and decompression stages in an LCLP IoT system
There are several ways to tackle this challenge. One of them is
to send only event-based data after the data has been processed.
But before such an implementation, the events need to be
classified; until the events are classified, it is inevitable
that the unprocessed data must be sent at a high data rate.
Fig. 1 shows the stages of compression and decompression in a
low-cost, low-power IoT system. Data compression techniques are
widely used to handle high data transmission rates. Data
compression, also known as compaction, is a method that reduces
the amount of data to be stored or transmitted by using an
encoding algorithm.
Data compression can be achieved either by lossless or
lossy compression techniques. In the lossless compression
method, the integrity of the data is preserved, so that simply
decoding the compressed data yields the original data. Lossless
methods are used when no data loss can be tolerated.
In the lossy compression technique, some values of the data
may be lost and a close approximation is provided. For
seismic-related research, accuracy and lossless data are very
important. There are many lossless compression techniques,
such as LZ77, that give a very good compression ratio but
require more compression time. In this paper, we describe the
implementation of the RLE [Run Length Encoding] and Delta
lossless compression techniques, which need a smaller
compression time. This means they are less computationally
intensive and are appropriate for LCLP IoT devices. The
compression time is directly proportional to the power
consumption, which is an important factor when the system is
deployed in the field for remote monitoring [21].
The compression and decompression algorithms are
implemented on the PSoC. The 3-axis accelerometer sensor
data is read and compressed, and the compression ratio is
calculated for the analysis.
The rest of the paper is organized as follows: Section II
explains the related works, Section III elaborates the methods
and the implementation of the lossless compression techniques,
Section IV explains the various measurement parameters,
Section V gives the analysis and discussion, and Section VI
presents the conclusion and future work.
II. RELATED WORKS
Dipti Mathpal et al. [2] describe compression techniques and
their types and present a comparative study of various lossless
compression techniques. Ruchi Gupta et al. [3] compare lossy
and lossless data compression methodologies using measurement
parameters such as compression ratio, compression factor,
compression gain, saving percentage, and compression time.
Amandeep Singh et al. [4] present a hybrid data compression
technique that takes less compression time than existing
techniques. The hybrid approach is a combination of dynamic
bit reduction and Huffman coding, which provides a better
compression ratio than conventional compression techniques.
Based on a comparison of lossless and lossy data compression,
Rupinder Singh et al. [5] propose a new bit reduction algorithm
to compress text data using a number theory system and a file
differential technique, which reduces the time complexity.
Balasubramanian et al. [6] compare different lossless
techniques such as Huffman coding, arithmetic coding, lossless
predictive coding, lossless JPEG, and run-length coding on
images and conclude that lossless JPEG is the best lossless
image compression technique in terms of compression ratio and
time.
Vijayalakshmi et al. [7] present an architecture for the
compression of Tamil documents. In terms of compression ratio
and peak signal-to-noise ratio, the Huffman compression
technique is better suited for image compression than other
conventional lossless compression methods [8].
In [9], pre-processing and a delta encoding algorithm are used
to curtail the amount of data to be transmitted and to enhance
system performance for railway transportation applications.
Delta encoding and delta compression techniques have also been
used to reduce response size and response delay for a
significant subset of HTTP content types [10].
V. G. Savani et al. [11] describe the implementation of a data
compression algorithm on an FPGA using the Xilinx Embedded
Development Kit. The paper also states the major benefits of
this kind of implementation, which include the ease of hardware
updates and faster compression time.
Yoshifuji et al. [12] explain the implementation and the
performance of sparse matrix-vector operations on the PEZY-SC
processor.
Fig. 2. Basic block diagram for transmission
From the literature survey, it is found that Delta and RLE have
relatively simple algorithms and therefore low computation
time, and may be suitable for high-sampling-rate applications.
Since the advantages of FPGAs for data compression were noted,
the PSoC 5LP was chosen, as it provides FPGA-like programmable
logic along with a built-in ARM controller.
III. METHODS AND ITS IMPLEMENTATION
Fig. 2 shows the basic hardware block diagram of the
implementation, with data generation, compression, and
transmission at different stages.
The data from the MEMS accelerometer is fed into the ADC of
the PSoC; the data is then compressed using the compression
techniques and collected at the serial output. The received
data is then decompressed and compared with the original data.
A. Accelerometer
The ADXL335 is a MEMS-based accelerometer that measures dynamic
and static acceleration. It is generally used in applications
where inclination or vibration needs to be measured. The output
signals generated by the accelerometer are voltages proportional
to the acceleration [13].
B. PSoC
The PSoC [14] is a Programmable System on Chip by Cypress
Semiconductor. It resembles an FPGA and, in addition, has
configurable analog and digital peripheral blocks with a
built-in microcontroller on a chip. This unique feature enables
developers to update the hardware design programmatically and
avoids board redesign. The PSoC 5LP has been used in this
prototype; it has a 32-bit ARM Cortex-M3 with an operating
frequency of up to 80 MHz. The compression algorithm has been
implemented on the PSoC's microcontroller. The ADC of the
PSoC 5LP is programmable, and the SAR ADC can be configured for
8, 10, or 12 bits of resolution. Filters can also be built from
the configurable analog or digital blocks in a future design.
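To make the acquisition step concrete, the following minimal C sketch illustrates how the three accelerometer channels might be read on the PSoC 5LP. It assumes a SAR ADC sequencer component placed in the PSoC Creator schematic with the instance name ADC; the generated API names used below (ADC_Start, ADC_StartConvert, ADC_IsEndConversion, ADC_GetResult16) follow the usual PSoC Creator pattern, but the exact names depend on the instance name and component version, so this is a sketch rather than the authors' firmware.

```c
#include <project.h>   /* PSoC Creator generated header (assumed build setup) */
#include <stdint.h>

/* Hypothetical acquisition loop, assuming a SAR ADC sequencer component
 * with instance name "ADC" and the X, Y and Z inputs on channels 0-2. */
#define NUM_AXES  3u
#define N_SAMPLES 64u

static uint16_t raw[NUM_AXES][N_SAMPLES];

int main(void)
{
    CyGlobalIntEnable;      /* enable global interrupts (standard PSoC Creator macro) */
    ADC_Start();            /* power up and initialize the SAR ADC                    */
    ADC_StartConvert();     /* begin free-running conversions                         */

    for (uint16_t i = 0; i < N_SAMPLES; i++) {
        /* wait until the current scan of all channels has completed */
        ADC_IsEndConversion(ADC_WAIT_FOR_RESULT);
        for (uint8_t ch = 0; ch < NUM_AXES; ch++) {
            raw[ch][i] = (uint16_t)ADC_GetResult16(ch);
        }
    }

    /* The buffers in 'raw' would then be passed to the delta and RLE
     * routines described in the following subsections before transmission. */
    for (;;) { }
}
```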
C. Delta Encoding
Delta encoding is a lossless compression technique; it needs
relatively little compression time because of its simple
approach. As shown in the flowchart in Fig. 3, the differences
between all consecutive samples in a data set are calculated,
and each subsequent sample is replaced by its difference from
the preceding sample.
In the resulting delta-compressed data set, the first sample is
the first sample of the original data set, and the remaining
samples are the differences between consecutive samples.
Delta encoding is performed on 8-bit, 10-bit, and 12-bit ADC
data, and good compression is observed, as shown in Table I [15].
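As an illustration of this step, the following C sketch (our own, not code from the paper) delta-encodes a buffer of ADC samples and shows the inverse operation used on the receiving side. The function names and the signed 16-bit delta type are assumptions, chosen so that differences between 8-, 10-, or 12-bit samples can be represented without loss.

```c
#include <stdint.h>
#include <stddef.h>

/* Delta-encode a buffer of ADC samples (hypothetical helper).
 * The first output element is the first raw sample; every following
 * element is the difference from the preceding sample. */
static void delta_encode(const uint16_t *in, int16_t *out, size_t n)
{
    if (n == 0) {
        return;
    }
    out[0] = (int16_t)in[0];                    /* keep the first sample as-is   */
    for (size_t i = 1; i < n; i++) {
        out[i] = (int16_t)(in[i] - in[i - 1]);  /* store consecutive differences */
    }
}

/* Inverse operation used on the receiving side to recover the original
 * samples, confirming that the scheme is lossless. */
static void delta_decode(const int16_t *in, uint16_t *out, size_t n)
{
    if (n == 0) {
        return;
    }
    out[0] = (uint16_t)in[0];
    for (size_t i = 1; i < n; i++) {
        out[i] = (uint16_t)(out[i - 1] + in[i]);
    }
}
```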
D. Run-Length Encoding
Run-length encoding is also a lossless compression technique;
it requires little compression time because of its simple
computation, and it works best on data sets that contain
recurring values [16].
As shown in the flowchart in Fig. 3, consecutive occurrences of
a sample are replaced with the count of occurrences and a single
copy of the sample itself. This method gives a very good
compression ratio because the duplicated samples are not
included in the compressed file.
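The sketch below illustrates run-length encoding over a byte stream in the same spirit. Since the paper does not specify its packet format, the (count, value) pair layout and the 255-byte run limit are purely illustrative assumptions.

```c
#include <stdint.h>
#include <stddef.h>

/* Run-length encode a byte stream into (count, value) pairs.
 * Returns the number of bytes written to 'out'; the caller must provide
 * an output buffer of at least 2*n bytes for the worst case. */
static size_t rle_encode(const uint8_t *in, size_t n, uint8_t *out)
{
    size_t w = 0;
    size_t i = 0;
    while (i < n) {
        uint8_t value = in[i];
        uint8_t count = 1;
        /* extend the run while the same value repeats (capped at 255) */
        while (i + count < n && in[i + count] == value && count < 255) {
            count++;
        }
        out[w++] = count;
        out[w++] = value;
        i += count;
    }
    return w;
}

/* Inverse of rle_encode: expand (count, value) pairs back into the
 * original byte stream and return its length. */
static size_t rle_decode(const uint8_t *in, size_t n, uint8_t *out)
{
    size_t w = 0;
    for (size_t i = 0; i + 1 < n; i += 2) {
        uint8_t count = in[i];
        uint8_t value = in[i + 1];
        for (uint8_t k = 0; k < count; k++) {
            out[w++] = value;
        }
    }
    return w;
}
```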
E. Delta and Run-Length Encoding
Delta compression works best when there is a small or constant
variation between adjacent samples [15]. This technique
increases the probability of recurring values appearing in the
compressed file.
As explained earlier, RLE works best on recurring samples.
Thus, the combination of these two compression techniques gives
a better compression ratio.
The combination of Delta and RLE compression techniques was
implemented, and better compression parameters were observed,
as shown in Table I.
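A minimal round-trip example, assuming the hypothetical delta and RLE helpers from the previous two sketches are compiled in the same file, shows how the two stages can be chained and verified to be lossless. The sample values, buffer sizes, and byte-wise view of the delta array are illustrative choices, not taken from the paper.

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    uint16_t samples[8] = { 2048, 2048, 2049, 2049, 2049, 2050, 2050, 2050 };
    int16_t  deltas[8];
    uint8_t  rle_out[2 * sizeof(deltas)];   /* worst-case RLE output size */
    int16_t  deltas_back[8];
    uint16_t samples_back[8];

    /* compress: delta first, then RLE over the delta bytes */
    delta_encode(samples, deltas, 8);
    size_t packed = rle_encode((const uint8_t *)deltas, sizeof(deltas), rle_out);

    /* decompress: reverse both stages */
    rle_decode(rle_out, packed, (uint8_t *)deltas_back);
    delta_decode(deltas_back, samples_back, 8);

    printf("original bytes: %u, compressed bytes: %u, lossless: %s\n",
           (unsigned)sizeof(samples), (unsigned)packed,
           memcmp(samples, samples_back, sizeof(samples)) == 0 ? "yes" : "no");
    return 0;
}
```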
IV. MEASUREMENT PARAMETERS
A. Compression Ratio
The Compression ratio represents the relative decrease in
the size of the data by a given compression algorithm. A
Compression ratio of any compression algorithm is obtained
by taking the ratio of compressed file size to the original file
size. Let ‘CR’ be the compression ratio. According to [17],
the compression ratio can be calculated by,
$\mathrm{CR} = \dfrac{\text{Compressed File Size}}{\text{Original File Size}} \quad (1)$
Fig. 3. Flowchart of Delta and Run-length Compression
B. Compression Factor
A Compression Factor is determined as the ratio of the original
file size to the compressed file size. The compression factor is
the inverse of the compression ratio and is denoted as CF in
this paper.
$\mathrm{CF} = \dfrac{\text{Original File Size}}{\text{Compressed File Size}} \quad (2)$
C. Space Saving
Space saving determines the reduction in size with respect to
the uncompressed size, and it can be calculated using the
following equation. Let 'SS' be the space saving.
$\mathrm{SS} = 1 - \dfrac{\text{Compressed File Size}}{\text{Original File Size}} \quad (3)$
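For clarity, the following small C example evaluates equations (1)-(3) for an arbitrary, illustrative pair of file sizes; the values are not taken from Table I.

```c
#include <stdio.h>

/* Compute the measurement parameters from equations (1)-(3) for a
 * given pair of file sizes (illustrative values only). */
int main(void)
{
    double original_size   = 1000.0;  /* bytes, illustrative value */
    double compressed_size = 550.0;   /* bytes, illustrative value */

    double cr = compressed_size / original_size;   /* Compression Ratio  (1) */
    double cf = original_size / compressed_size;   /* Compression Factor (2) */
    double ss = 1.0 - cr;                          /* Space Saving       (3) */

    printf("CR = %.2f%%, CF = %.3f, SS = %.2f%%\n",
           100.0 * cr, cf, 100.0 * ss);
    return 0;
}
```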
TABLE I
Comparative analysis of Delta and a combination of Delta and Run-Length Encoding compression techniques for different ADC resolutions

                               8-bit ADC   10-bit ADC   12-bit ADC
Original Data (Bytes)             1110        1214         1515
Delta Encoding
  Compressed Data (Bytes)          691         738          860
  CF                             1.606       1.644        1.856
  SS (%)                          37.5        32.1         46.1
  CR (%)                         62.25       60.79        53.86
Delta and Run-Length Encoding
  Compressed Data (Bytes)          529         697          798
  CF                             2.098       1.741        1.898
  SS (%)                         52.35       42.59        47.33
  CR (%)                         47.65       57.41        52.67
V. ANALYSIS AND DISCUSSIONS
Table I shows the test results for the implemented Delta
compression and the combination of Delta and RLE compression
techniques. Both compression techniques are tested for different
ADC resolutions (8 bits, 10 bits, and 12 bits).
The MEMS accelerometer sensor [18] generates data proportional
to mechanical vibration and shocks along the three axes (x, y,
and z). The test was performed by placing the sensor on a
vibration platform, and the generated data from the sensor were
read from the ADC of the PSoC. The output data size of the ADC
is denoted as the original data. In Table I, the 8-bit ADC has
the smallest and the 12-bit ADC the largest original data size,
because higher resolutions use more binary digits to represent
the same data.
The original data was first compressed using the Delta encoding
compression technique. Based on the compressed and original file
sizes, the compression factor (CF), compression ratio (CR), and
space saving (SS) were calculated. Table I shows the calculated
parameters for the different ADC resolutions.
The procedure was repeated for the combination of Delta and
run-length encoding, which gives better compression parameters,
as shown in Table I.
For Delta encoding alone, the best compression ratio of 53.86%
was obtained for the 12-bit ADC, but there was not much
improvement when the combination of Delta and RLE was applied,
as the CR decreased only to 52.67%. This is because Delta
encoding did not significantly increase the number of recurring
values.
For the 8-bit ADC, on the other hand, Delta encoding compressed
the data with a CR of 62.25% and also increased the number of
recurring values significantly, so the combination of Delta and
RLE improved the CR to 47.65%.
These compression techniques take relatively few computation
cycles, as they follow a simple approach and are implemented on
the PSoC's microcontroller rather than on a dedicated FPGA,
which keeps the system low-cost and low-power.
VI. CONCLUSION
The proposed work delivers a Low-Cost Low-Power (LCLP) IoT
system with compression techniques. A lightweight compression
technique that takes little compression time has been
implemented on the PSoC, which enables data streaming at a
higher effective data rate within the available bandwidth. The
test results show that the combination of the Delta and RLE
techniques achieves a better compression ratio than the Delta
compression technique alone. In future work, the digital blocks
of the PSoC will be used to design anti-aliasing filters, and a
wireless communication module will be integrated to make the
module a complete IoT device.
ACKNOWLEDGEMENT
We express our deep gratitude to our Chancellor and world-
renowned humanitarian leader Sri. Dr. Mata Amritanandamayi
Devi (Amma) for her inspiration and support towards working
on interdisciplinary research that has direct societal benefit.
REFERENCES
[1] Anwar, Toni, and Lim Wern Li. ”Performance Analysis of 3G Com-
munication Network.” Journal of ICT Research and Applications 2.2
(2008): 130-157.
[2] Dipti Mathpal and Mittal Darji, "A Research Paper on Lossless
Data Compression Techniques," IJIRST - International Journal for
Innovative Research in Science & Technology, Volume 4, Issue 1,
June 2017.
[3] Gupta, Ruchi, Mukesh Kumar, and Rohit Bathla. ”Data Compression-
Lossless and Lossy Techniques.” International Journal of Application or
Innovation in Engineering & Management 5.7 (2016): 120-125.
[4] Sidhu, Amandeep Singh, and Meenakshi Garg. ”Research Paper on Text
Data Compression Algorithm using Hybrid Approach.” International
Journal of Computer Science and Mobile Computing 3.12 (2014): 01-10.
[5] Brar, R. S., and B. Singh. ”A survey on different compression techniques
and bit reduction algorithm for compression of text data.” International
Journal of Advanced Research in Computer Science and Software
Engineering (IJARCSSE) Volume 3 (2013).
[6] T. Avudaiappan, T. Ilam Parithi, R. Balasubramanian, and
K. Sujatha, "Performance Analysis on Lossless Image Compression
Techniques for General Images," International Journal of Pure and
Applied Mathematics, Volume 117, No. 10, 2017, pp. 1-5.
[7] B. Vijayalakshmi and N. Sasirekha, "Lossless Text Compression
for Unicode Tamil Documents," ICTACT Journal on Soft Computing,
Volume 8, Issue 2, January 2018.
[8] Vikash Kumar, Sanjay Sharma, “Lossless Image Compression through
Huffman Coding Technique and Its Application in Image Processing
using MATLAB,” International Journal of Soft Computing and Engi-
neering (IJSCE) ISSN: 2231-2307, Volume-7 Issue-1, March 2017.
[9] Lin, Yangxin, et al. ”Class-based delta-encoding for high-speed train
data stream.” 2015 IEEE 34th International Performance Computing and
Communications Conference (IPCCC). IEEE, 2015.
[10] Mogul, Jeffrey C., et al., ”Potential benefits of delta encoding and data
compression for HTTP.” ACM SIGCOMM Computer Communication
Review. Vol. 27. No. 4. ACM, 1997.
[11] Savani, V. G., Bhatasana, P. M., & Mecwan, A. I. (2012). Imple-
mentation of Data Compression Algorithms on FPGA using Soft-core
Processor. International Journal of Advancements in Technology, 3(4),
270-276.
[12] Yoshifuji, Naoki, et al., ”Implementation and evaluation of data-
compression algorithms for irregular-grid iterative methods on the
PEZY-SC processor.” Proceedings of the Sixth Workshop on Irregular
Applications: Architectures and Algorithms. IEEE Press, 2016.
[13] Tran, Duc Tan, et al, ”Development of a rainfall-triggered landslide
system using wireless accelerometer network.” International Journal of
Advancements in Computing Technology 7.5 (2015): 14.
[14] Dakave, Mr Chetan C., and M. S. Gaikwad. ”Angular Variation Method-
ology For Landslide Measurement Using Programmable System on
Chip.” (2015).
[15] Klein, Shmuel T., and Dana Shapira. ”Compressed delta encoding for
LZSS encoded files.” In 2007 Data Compression Conference (DCC’07),
pp. 113-122. IEEE, 2007.
[16] Arif, Mohammad, and R. S. Anand. ”Run length encoding for speech
data compression.” 2012 IEEE International Conference on Computa-
tional Intelligence and Computing Research. IEEE, 2012.
[17] https://en.wikipedia.org/wiki/Data_compression_ratio
[18] Ramesh, Maneesha Vinodini, Divya Pullarkatt, T. H. Geethu, and P.
Venkat Rangan. ”Wireless Sensor Networks for Early Warning of Land-
slides: Experiences from a Decade Long Deployment.” In Workshop on
World Landslide Forum, pp. 41-50. Springer, Cham, 2017.
[19] https://www.cypress.com/file/139501/download
[20] https://www.analog.com/media/en/technical-documentation/data-
sheets/ADXL335.pdf
[21] Murugesh, Remya, Aravind Hanumanthaiah, Ullas Ramanadhan, and
Nirmala Vasudevan. ”Designing a Wireless Solar Power Monitor for
Wireless Sensor Network Applications.” In 2018 IEEE 8th International
Advance Computing Conference (IACC), pp. 79-84. IEEE, 2018.
[22] Krishnapriya, S., Balaji Hariharan, and Sangeeth Kumar. ”Resolution
scaled quality adaptation for ensuring video availability in real-time
systems.” 2012 IEEE 14th International Conference on High Perfor-
mance Computing and Communication & 2012 IEEE 9th International
Conference on Embedded Software and Systems. IEEE, 2012.