Figure - available from: Multimedia Tools and Applications
Flow chart of sensing data compression algorithm based on Huffman coding

Source publication
Article
In the context of wireless sensor network applications, this thesis studies a sensing-data compression algorithm for sensor nodes and a compressed-storage processing method for the massive sensor data such networks generate. Considering the spatio-temporal correlation of the sensor data collected at a single node, an improved adaptive Huffman coding algori...
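
The thesis's exact adaptive scheme is not reproduced in this excerpt; as a rough illustration of the general idea of exploiting temporal correlation, the hedged Python sketch below delta-encodes a stream of quantized readings and Huffman-codes the deltas (all names and data are hypothetical).

```python
# Minimal sketch: exploit temporal correlation by delta-encoding a stream of
# quantized sensor readings, then Huffman-coding the deltas (hypothetical data).
import heapq
from collections import Counter
from itertools import count

def huffman_code(freqs):
    """Build a prefix code {symbol: bitstring} from a symbol->frequency map."""
    tick = count()                      # tie-breaker so heapq never compares dicts
    heap = [(f, next(tick), {s: ""}) for s, f in freqs.items()]
    heapq.heapify(heap)
    if len(heap) == 1:                  # degenerate case: a single distinct symbol
        return {s: "0" for s in heap[0][2]}
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tick), merged))
    return heap[0][2]

readings = [21, 21, 22, 22, 22, 23, 23, 22, 22, 21]   # quantized temperatures
deltas = [readings[0]] + [b - a for a, b in zip(readings, readings[1:])]
code = huffman_code(Counter(deltas))
bits = "".join(code[d] for d in deltas)
print(f"{len(bits)} bits after delta+Huffman vs {len(readings) * 8} raw bits")
```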

Similar publications

Article
The problem of encoding information in order to eliminate its statistical redundancy is considered. The most widespread techniques for lossless encoding are arithmetic and Huffman coding. One disadvantage of these methods is their lack of efficiency when encoding characters from extra-large alphabets. In this paper, the efficiency of image data co...
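
As a hedged numeric aside (not taken from the paper), the gap between entropy and the average length of a symbol-by-symbol prefix code such as Huffman is easy to demonstrate for a skewed source, since such a code cannot spend less than one bit per symbol; large alphabets add the further cost of storing or transmitting the code table.

```python
# Small numeric illustration (not from the paper): a symbol-by-symbol prefix
# code such as Huffman spends at least 1 bit per symbol, so for a highly
# skewed source its average length can far exceed the entropy.
import math

p = 0.99                                   # probability of the dominant symbol
probs = [p, 1 - p]
entropy = -sum(q * math.log2(q) for q in probs)
huffman_avg = 1.0                          # a two-symbol Huffman code uses exactly 1 bit/symbol
print(f"entropy = {entropy:.3f} bits/symbol, Huffman average = {huffman_avg:.1f}")
print(f"redundancy = {huffman_avg - entropy:.3f} bits/symbol")
```
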
Conference Paper
Image compression using the Huffman coding technique is among the simplest compression techniques. Image compression is an important task because it is easy to implement and requires less memory. The purpose of this paper is to analyse the Huffman coding technique, which is essentially used to remove redundant bits from data, by analysing different characteris...
Article
Data stored in physical storage or transferred over a communication channel includes substantial redundancy. Compression techniques cut down the data redundancy to reduce space and communication time. Nevertheless, compression techniques lack proper security measures, e.g., secret key control, leaving the data susceptible to attack. Data encryption...
Preprint
Asymmetric Numeral Systems (ANS) is a class of entropy encoders by Duda that has had an immense impact on data compression, supplanting arithmetic and Huffman coding. The optimality of ANS was studied by Duda et al., but the precise asymptotic behaviour of its redundancy (relative to the entropy) was not completely understood. In this pape...
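
To make the ANS construction concrete, here is a minimal non-streaming rANS encoder/decoder sketch in Python (an illustration of the general mechanism, not the analysis in the paper); production coders renormalize the state into a byte stream instead of keeping one big integer.

```python
# A minimal, non-streaming rANS (range ANS) sketch using Python big integers.
from bisect import bisect_right

def build_tables(freqs):
    """freqs: dict symbol -> integer frequency. Returns (syms, f, cum, M)."""
    syms = list(freqs)
    f = [freqs[s] for s in syms]
    cum = [0]
    for x in f:
        cum.append(cum[-1] + x)
    return syms, f, cum, cum[-1]

def rans_encode(msg, freqs):
    syms, f, cum, M = build_tables(freqs)
    idx = {s: i for i, s in enumerate(syms)}
    x = 1                                # initial state
    for s in reversed(msg):              # encode in reverse so decoding runs forward
        i = idx[s]
        x = (x // f[i]) * M + cum[i] + (x % f[i])
    return x

def rans_decode(x, n, freqs):
    syms, f, cum, M = build_tables(freqs)
    out = []
    for _ in range(n):
        slot = x % M
        i = bisect_right(cum, slot) - 1  # symbol whose cumulative range covers slot
        out.append(syms[i])
        x = f[i] * (x // M) + slot - cum[i]
    return out

freqs = {"a": 5, "b": 2, "c": 1}
msg = list("aababacaa")
state = rans_encode(msg, freqs)
print(state.bit_length(), "bits of state")
assert rans_decode(state, len(msg), freqs) == msg
```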

Citations

... Therefore, efficient data compression techniques are indispensable when dealing with large volumes of data transmission [1][2][3][4]. With the continuous improvement of data compression technology, its application scope is also expanding, with widespread use in areas such as communication and image compression [5][6][7][8][9][10][11]. The adaptive region algorithm is an improved method based on Huffman coding [12]. ...
Article
The adaptive region algorithm is an improved compression algorithm based on Huffman coding. Because of the large number of rules for dividing regions in the algorithm, it suffers from high computing costs, slow speed, and low compression efficiency. To address these problems, this paper investigates the adaptive region algorithm on a ternary optical computer (TOC), exploiting its characteristics of many data bits, high parallelism, and three-valued coding. Based on the TOC's three-valued coding, this paper designs a three-valued character coding scheme that effectively shortens the coding length of characters by changing the original coding rules and further improves the compression efficiency of the adaptive region algorithm. Furthermore, in conjunction with the TOC's support for parallel computation, this paper presents an efficient computational scheme that improves computational efficiency during region partitioning. Through case studies, the compression efficiency and computational efficiency of the adaptive region algorithm implemented on a TOC and on an electronic computer were analysed. The compression efficiency of the TOC-based algorithm is 50.4%, while that of the electronic-computer-based algorithm is only 36%. In the comparison of computational efficiency, the computational time complexity on the TOC is $O(n)$, whereas that on the electronic computer (EC) is $O(n^2)$. Finally, experimental validation shows that the TOC-based adaptive region compression algorithm performs well in terms of computational performance and compression efficiency, giving full play to the TOC's three-valued coding characteristics as well as its ability to perform parallel computation.
... Moreover, a speciality of the Hadoop framework is its monitoring function for the verification process. It is a programming framework that helps carry out parallel functions across several systems (computers) in a distributed paradigm [38]. Moreover, as one of the MapReduce schemes, Hadoop has played a significant role in digital distributed environments [20]. ...
Article
Hadoop is one of the largest software frameworks for distributing data in order to compute and handle big data. Big data is a collection of composite and enormous datasets that contain massive amounts of data, such as real-time data, social media, data-management capabilities, money laundering, and so on, and it is measured in terabytes and petabytes. The main issue with the Hadoop application is unauthorized access. Several existing techniques have been introduced to secure the data, but they suffer from data errors and malicious attacks and take a long time to compute. The authors therefore propose a novel ChaApache framework to secure the Hadoop application from unauthorized access while also saving data-processing time and reducing the error rate. The main aim of the developed model is securing data from an unauthorized person or unauthorized access. The ChaApache framework is implemented in Python; the Hadoop application contains 512 bits of data, and the data are encrypted by four 32-bit blocks. Furthermore, the proposed model is compared with other existing models in terms of computation time, resource usage, data sharing rate, encryption speed, and so on.
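
The abstract does not detail the ChaApache internals; purely as a hedged illustration of the underlying pattern, encrypting a data block with ChaCha20-Poly1305 before handing it to a distributed store might look like the sketch below (the `cryptography` package, the HDFS path, and the block size are assumptions, not the paper's implementation).

```python
# Hedged illustration (not the ChaApache implementation): encrypt a data block
# with ChaCha20-Poly1305 before handing it to a distributed store such as HDFS.
# Requires the third-party "cryptography" package.
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

key = ChaCha20Poly1305.generate_key()        # 256-bit key, kept by the data owner
aead = ChaCha20Poly1305(key)

block = os.urandom(64)                       # a 512-bit (64-byte) data block, as in the abstract
nonce = os.urandom(12)                       # unique per block
ciphertext = aead.encrypt(nonce, block, b"hdfs://cluster/path/part-00000")

# The nonce and ciphertext would be written to the cluster; only key holders
# can recover the plaintext, so unauthorized Hadoop users see random bytes.
assert aead.decrypt(nonce, ciphertext, b"hdfs://cluster/path/part-00000") == block
```
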
... In another study [46], a new scheme called somDA (SOM-based Data Aggregation) uses a SOM (Self-Organizing Map) to reduce excess data and eliminate outliers. The research in [11] uses the EDAGD (Entropy-driven Data Aggregation with Gradient Distribution) method with three algorithmic strategies: 1) Multihop Tree-based Data Aggregation Algorithm (MTDA), 2) Entropy-driven aggregation-based Tree routing Algorithm (ETA), and 3) Gradient Deployment Algorithm (GDA), aimed at energy-saving (energy-efficient) WSN. ...
Article
It has been stated that the implementation of Wireless Sensor Networks (WSN) has major problems that can affect its performance. One of these problems is the limited energy source (battery power). Therefore, in an attempt to use energy as efficiently as possible, several mechanisms have been proposed. Energy efficiency in WSN is a very interesting issue to discuss and remains a challenge for researchers. This paper focuses on how research on energy efficiency in WSNs has developed over the past 10 years. One of the proposed mechanisms is data reduction. This paper discusses data reduction divided into four parts: 1) aggregation, 2) adaptive sampling, 3) compression, and 4) network coding. Data reduction is intended to reduce the amount of data sent to the sink. Data reduction approaches can affect the accuracy of the information collected. Data reduction is used to improve latency, QoS (Quality of Service), and scalability, and to reduce waiting times. This paper focuses in more depth on adaptive sampling techniques and network coding. It is concluded that using data reduction mechanisms in target detection applications is efficient compared to not using them. To save energy, data reduction (especially with adaptive sampling algorithms) can save up to 79.33% of energy.
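
As a hedged sketch of the adaptive-sampling idea mentioned above (a generic rule, not one of the surveyed algorithms), a node can lengthen its sampling interval while recent readings are stable and shorten it when they change quickly; all thresholds and data below are illustrative.

```python
# Hedged sketch of one generic adaptive-sampling rule: sample less often while
# readings are stable, more often when they change quickly, to save radio/ADC energy.
from statistics import pstdev

def next_interval(recent, base=1.0, min_iv=1.0, max_iv=60.0, threshold=0.2):
    """Return seconds until the next sample, based on recent reading spread."""
    if len(recent) < 2:
        return min_iv
    spread = pstdev(recent)
    if spread > threshold:           # signal changing fast -> sample quickly
        return min_iv
    # stable signal -> back off, up to max_iv
    return min(max_iv, base * (1.0 + threshold / (spread + 1e-9)))

window = [21.0, 21.1, 21.0, 21.1, 21.0]      # hypothetical temperature readings
print(f"stable signal, wait {next_interval(window):.1f} s before next sample")
window = [21.0, 22.5, 24.0, 25.1, 26.0]
print(f"changing signal, wait {next_interval(window):.1f} s before next sample")
```
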
... But for code compression, lossless compression must be used, because any loss of instruction information after decompression will lead to wrong results. Lossless entropy coding [17] generates variable-length codes based on information entropy and is represented by Huffman coding [18], [19] and arithmetic coding [20]. ...
Article
Stream processors have been widely used in multimedia processing because of the high performance gained through parallelism. To achieve higher parallelism, the stream processor employs a wide VLIW (very long instruction word) structure, and multiple parallelizable instructions are organized into one VLIW. Because the width of the VLIW is fixed, a large number of empty operations (no-operations, NOPs) are filled into the VLIW, which results in a serious code size expansion problem. To address this issue, horizontal and vertical code compression methods are applied to the VLIW of the stream processor. First, the VLIW is divided into several subfields according to the logical characteristics of the VLIW instruction; then a horizontal code compression scheme based on Huffman coding is applied to each subfield, achieving approximately 78% code size reduction on average. However, the long time required to decode the compressed VLIW before instruction execution may cause a system performance penalty. To reduce decompression time, a vertical compression scheme is proposed. Vertical compression can reduce the code size by nearly 70% by deleting the NOPs of the VLIW in the vertical direction. Furthermore, the VLIW after vertical compression can be executed directly, without a decompression operation, by using a banked instruction memory. In summary, vertical compression can compress the stream processor VLIW code size significantly without any negative influence on performance.
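
The paper's banked-memory hardware is not reproduced here; the hedged sketch below only illustrates the vertical-compression idea in software terms, dropping NOP slots and keeping a per-slot occupancy mask so the wide word can be reconstructed (slot width, mask layout, and opcodes are assumptions).

```python
# Schematic illustration of vertical VLIW compression: drop NOP slots and keep
# a per-slot occupancy mask so the original wide instruction word can be
# reconstructed (or dispatched directly from banked memories, one bank per slot).
NOP = 0x00000000

def compress_vliw(word_slots):
    """word_slots: list of per-slot 32-bit operations for one VLIW."""
    mask = 0
    packed = []
    for i, op in enumerate(word_slots):
        if op != NOP:
            mask |= 1 << i          # bit i set -> slot i holds a real operation
            packed.append(op)
    return mask, packed

def decompress_vliw(mask, packed, width):
    ops = iter(packed)
    return [next(ops) if mask & (1 << i) else NOP for i in range(width)]

vliw = [0xA1000001, NOP, NOP, 0xB2000002, NOP, 0xC3000003, NOP, NOP]
mask, packed = compress_vliw(vliw)
print(f"8 slots -> {len(packed)} ops + 1 mask byte (mask={mask:#04x})")
assert decompress_vliw(mask, packed, len(vliw)) == vliw
```
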
Preprint
The paper introduces a combined Run-Length Encoding (RLE) and Huffman encoding approach for image compression. The approach can be applied to a particular type of image that satisfies specific characteristics in terms of its color component data. The devised approach implements lossless compression of the preprocessed image data. First, RLE is applied to the prepared data; then the Huffman coding process is performed. The results obtained during testing of the implemented software show that the proposed technique is feasible and provides a decent level of compression. It can also be implemented in hardware, since the complexity of the operations used is relatively low.
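
As a hedged sketch of the first stage described above, the snippet below run-length encodes one row of a color component; the Huffman stage would then entropy-code the resulting (value, run-length) pairs (the data and function names are illustrative, not from the paper).

```python
# Minimal sketch of the RLE stage (the Huffman stage would follow on the pairs).
def rle_encode(values):
    """Collapse consecutive repeats into (value, count) pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    return [v for v, n in runs for _ in range(n)]

row = [255, 255, 255, 255, 0, 0, 128, 128, 128]   # one row of a color component
runs = rle_encode(row)
print(runs)                                       # [(255, 4), (0, 2), (128, 3)]
assert rle_decode(runs) == row
```
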
Article
In modern life, invisible data is continuously being generated; if collected and processed, it can reveal risks and changes and enable us to mitigate their effects. Thus, there is an essential need to sense everything around us in order to make better use of it. This leads us to an era known as the "sensing era", in which wireless sensor networks (WSN) play a vital role in monitoring natural and artificial environments. However, the collection and transmission of huge amounts of redundant data by sensor nodes leads to faster consumption of their limited battery power, which is sometimes difficult to replace or recharge, reducing the overall lifetime of the network. Therefore, an effective way to increase lifetime by saving energy is to reduce the amount of transmitted data by eliminating redundancy along the path to the sink. In this paper, we propose a Zoom-In Zoom-Out (ZIZO) mechanism aimed at minimizing data transmission in WSN. ZIZO works on two WSN levels: at the sensor level, we propose a compression method called index-bit-encoding (IBE) to aggregate similar readings before sending them to the second network level, e.g. the cluster head (CH). The CH then searches for correlation among node data in order to optimize the sampling rate of the sensors in the cluster through a process called sampling rate adjustment (SRA). We evaluate the performance of our mechanism with both simulations and experiments, compare the obtained results to other existing techniques, and show reduced energy consumption by up to 90% in some cases.
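
The paper does not spell out IBE in this excerpt, so the sketch below only illustrates the general idea of aggregating similar readings before transmission, replacing near-identical values with one representative plus an index bitmap; it is an assumption-laden stand-in, not the authors' encoding.

```python
# Hedged sketch of aggregating similar readings before transmission: readings
# within a tolerance of a representative are replaced by an index bitmap.
def aggregate(readings, tol=0.2):
    """Return (representatives, bitmaps): one bitmap of reading indices per group."""
    reps, bitmaps = [], []
    for i, r in enumerate(readings):
        for g, rep in enumerate(reps):
            if abs(r - rep) <= tol:
                bitmaps[g] |= 1 << i     # mark index i as "similar to rep"
                break
        else:
            reps.append(r)
            bitmaps.append(1 << i)
    return reps, bitmaps

readings = [21.0, 21.1, 24.9, 21.05, 25.0, 21.1]
reps, bitmaps = aggregate(readings)
for rep, bm in zip(reps, bitmaps):
    print(f"send value {rep} once for reading indices {bm:#08b}")
```
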
Conference Paper
Smart-Device-to-Smart-Device (D2D) communication over Wi-Fi Direct (WFD) technologies in 5G networks still faces many dilemmas, especially regarding services available to users such as social sharing of video streaming, audio, and caching solutions with a massive number of mobile users; rate-distortion (RD) characteristics for QoS; finding the best way to calculate the distance between multi-hop candidate devices; and whether a device is static or moving with respect to traffic-flow speed characteristics, such as high-speed railways and subways, given the mobility of communications demanded in future 5G. In this article, we present a survey of current methodologies and techniques related to neighbor discovery in the network area, interference management, and low-energy-consumption communication, and their impact on multimedia delivery.
Article
Wireless sensor networks (WSN) comprise several sensor nodes scattered wirelessly to accomplish a particular task. Each sensor node is powered by a battery. The various functions of the node, namely sensing, computing, storage, and transmission/reception of data, consume power from a battery of limited capacity. As these batteries do not last long, an efficient algorithm is required to extend node lifetime. A data compression algorithm is one method adopted to minimize the amount of data being sent or received, thereby reducing the power consumed during communication. This further increases the lifetime of the node and of the network. In this paper a simple lossless compression algorithm is proposed and compared with the existing adaptive Huffman coding algorithm that has been widely used in wireless sensor network applications. The comparative analysis is based on different compression parameters, namely compression ratio, compression factor, saving percentage, RMSE, and encoding and decoding time. The data set for comparison is acquired using a temperature sensor interfaced with an NI 3202 programmable sensing node. The comparative analysis is performed and the results are simulated using MATLAB software. The NI WSN nodes are used to execute the algorithm on instantaneous data. The analysis of the number of packets transmitted during wireless communication, both before and after compression, is performed using the Wireshark network analyzer tool. The simulation results show that the proposed lossless compression algorithm performs better than the existing one. The hardware implementation proves that the amount of data traffic is reduced after compression, which helps to reduce transmission power and thereby extends the lifetime of the node in a wireless sensor network.
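
For reference, the comparison metrics named above can be computed as in the hedged sketch below, using common textbook definitions (the paper's exact conventions, e.g. whether compression ratio means compressed/original or its inverse, may differ); the sizes and samples are hypothetical.

```python
# Hedged sketch of the comparison metrics under common textbook definitions.
import math

def metrics(original_bits, compressed_bits, original, reconstructed):
    cr = compressed_bits / original_bits                    # compression ratio
    cf = original_bits / compressed_bits                    # compression factor
    saving = 100.0 * (original_bits - compressed_bits) / original_bits
    rmse = math.sqrt(sum((a - b) ** 2 for a, b in zip(original, reconstructed))
                     / len(original))                       # 0 for lossless codecs
    return cr, cf, saving, rmse

orig = [21.0, 21.5, 22.0, 22.5]          # hypothetical temperature samples
cr, cf, saving, rmse = metrics(128, 56, orig, orig)
print(f"CR={cr:.2f}, CF={cf:.2f}, saving={saving:.1f}%, RMSE={rmse:.3f}")
```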