Figure 1: Huffman coding tree

Source publication
Article
Full-text available
The paper deals with the formal description of data transformation (the compression and decompression process). We start by briefly reviewing basic concepts of data compression and introducing the model-based approach that underlies most modern techniques. Then we present arithmetic coding and Huffman coding for data compression, and finally see the pe...
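
Since Figure 1 shows a Huffman coding tree, a minimal sketch of how such a tree is built and turned into a prefix code may be helpful. This is a generic illustration, not the paper's implementation; the symbol frequencies are made-up examples.

```python
import heapq

def huffman_codes(freqs):
    """Build a Huffman tree from {symbol: frequency} and return {symbol: bitstring}."""
    # Heap entries are (frequency, tie_breaker, tree); a tree is a symbol or a (left, right) pair.
    heap = [(f, i, s) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)   # merge the two least frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (t1, t2)))
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):       # internal node: branch 0/1
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:                             # leaf: assign the accumulated code
            codes[tree] = prefix or "0"
    walk(heap[0][2], "")
    return codes

print(huffman_codes({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}))
```

Frequent symbols end up near the root and receive short codes, which is exactly the property the tree in Figure 1 depicts.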

Similar publications

Article
Full-text available
The volume of trajectory data has grown tremendously in recent years. How to effectively and efficiently maintain and compute over such trajectory data has become a challenging task. In this paper, we propose a trajectory spatial and temporal compression framework, namely CLEAN. The key to spatial compression is to mine meaningful trajectory frequ...
Article
Full-text available
Text search engines are a fundamental tool nowadays. Their efficiency relies on a popular and simple data structure: inverted indexes. They store an inverted list per term of the vocabulary. The inverted list of a given term stores, among other things, the document identifiers (docIDs) of the documents that contain the term. Currently, inverted ind...
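
The excerpt above is cut off before the proposed method, but the data structure it builds on is easy to illustrate. A common trick, shown in this hedged sketch, is to store the sorted docIDs of an inverted list as gaps, which are small values and therefore compress well:

```python
def to_gaps(doc_ids):
    """Store a sorted inverted list as its first docID plus successive differences."""
    return [doc_ids[0]] + [b - a for a, b in zip(doc_ids, doc_ids[1:])]

def from_gaps(gaps):
    """Reconstruct the original docIDs by prefix-summing the gaps."""
    out = [gaps[0]]
    for g in gaps[1:]:
        out.append(out[-1] + g)
    return out

postings = [3, 17, 18, 42, 120]   # docIDs of documents containing some term
gaps = to_gaps(postings)          # [3, 14, 1, 24, 78] -- small values compress well
assert from_gaps(gaps) == postings
```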
Article
Full-text available
This paper proposes a lossless coder for real-time processing and compression of hyperspectral images. After applying either a predictor or a differential encoder to reduce the bit rate of an image by exploiting the close similarity of pixels between neighboring bands, it uses a compact data structure called k²-raster to further reduce the bit ra...
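
The coder itself is not spelled out in the excerpt, but the differential-encoding step it mentions is simple to sketch. The following is an assumed, minimal version of band-wise differencing for a (bands, rows, cols) cube; real predictors are more elaborate:

```python
import numpy as np

def band_delta_encode(cube):
    """Keep band 0 as-is; store every later band as its difference from
    the previous band. Neighboring bands are highly similar, so the
    residuals cluster near zero and entropy-code well."""
    out = cube.astype(np.int32)
    out[1:] -= cube[:-1].astype(np.int32)
    return out

def band_delta_decode(residuals):
    """Invert the encoder by cumulative summation along the band axis."""
    return np.cumsum(residuals, axis=0)

cube = np.random.randint(0, 1024, size=(4, 8, 8))   # toy hyperspectral cube
assert np.array_equal(band_delta_decode(band_delta_encode(cube)), cube)
```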
Article
Full-text available
This paper presents a novel technique for color image compression in the transform domain. For compression of the images, the vector quantization (VQ) technique is used, and the codebook in VQ is designed using Kohonen's Self-Organizing Feature Maps (SOFM). This work exploits the special features of SOFM for generic codebook generation that enables to constr...
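
Training a SOFM codebook is beyond a short excerpt, but the vector quantization step itself can be sketched. In this hedged example the codebook is random, standing in for a SOFM-trained one; each flattened 4x4 image block is replaced by the index of its nearest codeword:

```python
import numpy as np

def vq_encode(blocks, codebook):
    """Map each block (row vector) to the index of its nearest codeword."""
    # Squared Euclidean distances: shape (n_blocks, n_codewords)
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def vq_decode(indices, codebook):
    """Reconstruct blocks by looking up the indices in the codebook."""
    return codebook[indices]

rng = np.random.default_rng(0)
codebook = rng.random((256, 16))        # stand-in for a SOFM-generated codebook
blocks = rng.random((1000, 16))         # 1000 flattened 4x4 blocks
indices = vq_encode(blocks, codebook)   # one index per block instead of 16 samples
reconstruction = vq_decode(indices, codebook)
```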
Conference Paper
Full-text available
In many applications, it is desirable to provide visually lossless (but, in fact, lossy) image compression. This can be done using modern visual quality metrics and iterative image compression/decompression procedure for setting proper parameters of a coder for a given image. Performance of such a procedure is analyzed for a wide set of images and...
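
The excerpt does not name the coder or the quality metric, so the sketch below treats both as caller-supplied placeholders (`encode_decode` and `visual_quality` are hypothetical names). It only illustrates the iterative compress/measure loop such a procedure implies, assuming quality grows monotonically with the coder parameter q:

```python
def tune_coder(image, target_quality, encode_decode, visual_quality,
               q_lo=1.0, q_hi=100.0, iters=7):
    """Bisection on a coder's quality parameter q: return the smallest q
    (strongest compression) whose decoded image still meets target_quality.
    encode_decode(image, q) -> decoded image; visual_quality(ref, dec) -> score."""
    best_q = q_hi
    for _ in range(iters):
        q = (q_lo + q_hi) / 2
        decoded = encode_decode(image, q)
        if visual_quality(image, decoded) >= target_quality:
            best_q, q_hi = q, q   # quality met: try compressing harder
        else:
            q_lo = q              # too lossy: back off toward higher quality
    return best_q
```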

Citations

... DST is used for signal transformation, and the transform coefficients are thresholded using a bisection algorithm to match the predefined user-specified percentage root mean square difference (UPRD). Huffman coding [3] is used to encode the lookup tables that record the position map of zero and non-zero coefficients (NZCs). The Max-Lloyd quantizer quantizes the NZCs, followed by Arithmetic coding [4][5][6][7][8][9][10][11][12]. ...
Article
Full-text available
In this work, Blood Pressure (BP) signal compression in the salt-sensitive Dahl rat has been done using the Discrete Sine Transform (DST). To keep the user-specified percentage root mean square difference (UPRD) within tolerance, the transform coefficients are first thresholded by a bisection algorithm. Binary lookup tables are used to store the position map of zero and non-zero coefficients (NZC). After quantization, the NZCs are encoded with Arithmetic coding. Blood pressure signals with dissimilar characteristics were analyzed, and the compression ratio was found to be directly proportional to the user-defined PRD (UPRD).
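
Both citing papers rely on the same bisection step: pick the coefficient threshold so that the reconstruction error matches the user-specified PRD. A hedged sketch follows, assuming PRD is the usual percentage root mean square difference; `inverse_transform` stands in for the inverse DST (or inverse FRFT), which is left abstract here:

```python
import numpy as np

def prd(x, x_hat):
    """Percentage root mean square difference between signal and reconstruction."""
    return 100.0 * np.linalg.norm(x - x_hat) / np.linalg.norm(x)

def bisect_threshold(coeffs, inverse_transform, x, uprd, iters=30):
    """Find by bisection the largest threshold t such that zeroing every
    coefficient with |c| < t keeps the PRD within the user-specified bound.
    Larger t zeroes more coefficients: better compression, higher PRD."""
    lo, hi = 0.0, float(np.abs(coeffs).max())
    for _ in range(iters):
        t = (lo + hi) / 2
        kept = np.where(np.abs(coeffs) >= t, coeffs, 0.0)
        if prd(x, inverse_transform(kept)) <= uprd:
            lo = t   # still within tolerance: threshold harder
        else:
            hi = t   # too much distortion: lower the threshold
    return lo
```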
... Huffman coding [4] is used to encode the lookup tables that record the position map of zero and non-zero coefficients (NZCs). The Max-Lloyd quantizer is used to quantize the NZCs, followed by Arithmetic coding [5][6][7]. ...
Article
Full-text available
This paper shows quality-controlled compression using the Fractional Fourier Transform (FRFT) for blood pressure signal compression in the salt-sensitive Dahl rat. First, the transform coefficients are thresholded by a bisection algorithm to keep the user-specified percentage root mean square difference (UPRD) within tolerance. The position map of zero and non-zero coefficients (NZC) is stored in binary lookup tables. Arithmetic coding is used to encode the quantized NZCs. Blood pressure signals with different characteristics were analyzed, and the compression ratio was found to be directly proportional to the user-defined PRD (UPRD).
... Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) are the most popular transform techniques [2,3]. Huffman coding and Arithmetic coding are the most popular entropy coding techniques [3,4]. ...
Article
Medical imaging techniques such as Magnetic Resonance Imaging (MRI), Computed Tomography (CT) and Ultrasound (US) produce a large amount of digital medical images. Hence, compression of digital images becomes essential and is very much desired in medical applications to solve both storage and transmission problems. At the same time, an efficient image compression scheme is required that reduces the size of medical images without sacrificing diagnostic information. This paper proposes a novel threshold-based medical image compression algorithm that reduces the size of a medical image without degrading its diagnostic information. The algorithm uses a novel type of thresholding to maximize the Compression Ratio (CR) without sacrificing diagnostic information. It is designed to achieve high compression efficiency together with high fidelity, specifically a Peak Signal to Noise Ratio (PSNR) greater than or equal to 36 dB. This value of PSNR is chosen because previous researchers have suggested that medical images with a PSNR between 30 dB and 50 dB retain their diagnostic information. The compression algorithm utilizes one-level wavelet decomposition with threshold-based coefficient selection.
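
The 36 dB acceptance criterion above is stated in terms of PSNR, which is quick to make concrete. A minimal sketch for 8-bit images (peak value 255 assumed):

```python
import numpy as np

def psnr(original, compressed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((original.astype(np.float64) - compressed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Under the paper's criterion, a compressed medical image would be accepted when `psnr(original, compressed) >= 36.0`.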
... Singla et al. [1] give a comparative analysis of Huffman and Arithmetic coding, concluding that arithmetic coding is superior to Huffman because it accommodates adaptive models easily and provides a clean separation between the model and the coder. ...
... Huffman coding was developed by David A. Huffman while he was a Ph.D. student at MIT and published in the 1952 paper "A Method for the Construction of Minimum-Redundancy Codes" [1]. Huffman coding is an entropy encoding algorithm used for lossless data compression. ...
Article
Full-text available
Data compression is an effective means of saving storage space and channel bandwidth. There are two main types of compression: lossy and lossless. This paper deals with the lossless compression techniques Huffman, Arithmetic, LZ-78 and Golomb coding, and attempts a comparative analysis in terms of their compression efficiency and speed. The test files used include English text files, log files, a sorted word list and a geometrically distributed data text file. The implementation results of these compression algorithms suggest the most efficient algorithm for a given type of file, taking into consideration both the compression ratio and the speed of operation. In terms of compression ratio, Golomb coding is best suited for very low-frequency text files, and arithmetic coding for moderate- and high-frequency ones. The implementation is done in MATLAB.
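
Since the comparison singles out Golomb coding as the best fit for geometrically distributed data, a brief sketch of the encoder may help; the parameter m = 3 below is an arbitrary example, and a real coder would also fit m to the data:

```python
import math

def golomb_encode(n, m):
    """Golomb code of a non-negative integer n with parameter m:
    quotient n // m in unary (ones, then a terminating zero),
    remainder n % m in truncated binary."""
    q, r = divmod(n, m)
    bits = "1" * q + "0"
    if m == 1:
        return bits                       # remainder is always 0
    b = math.ceil(math.log2(m))
    cutoff = (1 << b) - m                 # remainders below cutoff use b-1 bits
    if r < cutoff:
        bits += format(r, "b").zfill(b - 1)
    else:
        bits += format(r + cutoff, "b").zfill(b)
    return bits

# m = 3: 0 -> '00', 1 -> '010', 2 -> '011', 3 -> '100', 4 -> '1010', 5 -> '1011'
for n in range(6):
    print(n, golomb_encode(n, 3))
```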
... From Table 3 and the results of illustrative example 2, it is shown how efficient and practicable [9] the proposed data compression scheme in Table 1 and the decompression algorithms in Section 4.2 are in handling any kind of plaintext on an insecure channel, and how they increase storage capacity for faster data processing [3]. ...
... The meaningless results in Table 3 are in line with the proposed data compression and decompression algorithms, which helped to conceal the content of sensitive information from terrorists [9,10]. This means that the proposed algorithms prevent a hacker from gaining insight into the content of the information while it is in transit or in storage. ...
Article
Full-text available
This work deals with an encoding algorithm that converts a message into a "compressed" form with fewer characters, understood only by decoding the encoded data, which reconstructs the original message. The proposed factorization techniques, in conjunction with a lossless method, were adopted for compression and decompression of data to exploit the size of memory, thereby decreasing the cost of communications. The proposed algorithms shield the data from the eyes of cryptanalysts during data storage or transmission.
Article
Because the power available at each node of a wireless sensor network (WSN) is limited, a data compression mechanism is required for power saving. In this paper, an efficient data compression method is proposed to reduce the size of the transmitted data under a given error bound. We first apply the observed transmission data to construct a static Huffman codebook that reflects the data correlation of the monitoring environment. Given an error bound, the proposed method determines whether newly sensed data should be sent by comparing it with reference data such as the previously sensed data (temporal correlation), the neighboring sensed data (spatial correlation) and the codebook-encoded data (data correlation). Thus, the total size of the transmitted data can be minimized for power saving. Simulation results show that the proposed method makes WSNs more efficient in energy consumption. Even when the error bound is set to a small value (under 0.009), the proposed method can reduce the transmitted data substantially (over 65%) and cut down the total energy consumption. Compared to DF-TS, the improvement is nearly 70% in total energy consumed.
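
The abstract describes a three-way comparison against reference data before deciding to transmit. The function below is an assumed reconstruction of that decision rule, not the paper's code; the names, the scalar readings, and the codebook are all illustrative:

```python
def should_transmit(new_value, previous_value, neighbor_value, codebook, error_bound):
    """Suppress the transmission if any reference value (temporal, spatial,
    or codebook) already approximates the new reading within the error
    bound; the receiver then reconstructs from that reference instead."""
    references = [
        previous_value,                                    # temporal correlation
        neighbor_value,                                    # spatial correlation
        min(codebook, key=lambda c: abs(c - new_value)),   # data correlation
    ]
    return all(abs(new_value - r) > error_bound for r in references)

codebook = [20.0, 21.5, 23.0, 24.5]   # made-up static codebook values
print(should_transmit(21.504, previous_value=20.0, neighbor_value=22.9,
                      codebook=codebook, error_bound=0.009))
# -> False: the codebook entry 21.5 is within the bound, so nothing is sent
```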