Figure - available from: Journal of Ambient Intelligence and Humanized Computing
Performing Huffman code flowchart

Source publication
Article
Full-text available
The objective of this paper is to propose a method for radiographic image compression with the highest possible Compression Ratio (CR) while keeping all details, especially in the Region Of Interest (ROI), which contains the important information in the image. In the proposed method, firstly, the ROI is separated from the image background using...

Similar publications

Article
Full-text available
The increasing test data volume is considered one of the biggest challenges for circuits under test. This challenge leads to higher computational complexity associated with the testing circuits. To reduce the testing time and address the problems related to high overhead, high fault coverage and increased power dissipation, a method is used in system-on-chip (S...

Citations

... Therefore, efficient data compression techniques are indispensable when dealing with large volumes of data transmission [1][2][3][4]. With the continuous improvement of data compression technology, its application scope is also expanding, with widespread use in areas such as communication and image compression [5][6][7][8][9][10][11]. The adaptive region algorithm is an improved method based on Huffman coding [12]. ...
Article
Full-text available
The adaptive region algorithm is an improved compression algorithm based on Huffman coding. Because of the large number of rules for dividing regions in the algorithm, it suffers from problems such as high computing costs, slow speed, and low compression efficiency. To address these problems, this paper investigates the adaptive region algorithm on a ternary optical computer (TOC), exploiting TOC characteristics such as many data bits, high parallelism, and three-valued coding. Based on the TOC's three-valued coding, this paper designs a three-valued character coding scheme that effectively shortens the coding length of characters by changing the original coding rules, further improving the compression efficiency of the adaptive region algorithm. Furthermore, drawing on the TOC's support for parallel computation, this paper presents an efficient computational scheme that improves computational efficiency during region partitioning. Through case studies, the compression efficiency and computational efficiency of the adaptive region algorithm implemented on a TOC and on an electronic computer were analyzed. The compression efficiency of the TOC-based algorithm is 50.4%, while that of the electronic-computer-based algorithm is only 36%. In the comparison of computational efficiency, the computational time complexity on the TOC is O(n), whereas that on the electronic computer (EC) is O(n²). Finally, experimental validation shows that the TOC-based adaptive region compression algorithm performs well in terms of computational performance and compression efficiency, giving full play to the TOC's three-valued coding characteristics as well as its ability to perform parallel computation.
... In 2016, Vidhyaa et al. [53] proposed a Huffman-compression-based embedding technique in which the Huffman code is employed as a prefix code that compresses the secret bits using the frequency of occurrence of each character, i.e., the probability of each symbol. Three years later, Kasban et al. [54] suggested an algorithm in which the ROI (Region Of Interest) is compressed by the Huffman code with a low compression ratio and minimum loss, achieving a maximum PSNR value of 39.37 dB. In the same year, Yuan et al. [55] reported a data embedding scheme in which embedding is done by the Huffman compression method after the image is filtered by a wavelet transform to remove the redundant information in the image. ...
Article
Full-text available
In the area of data hiding and information security, the greater need is to ensure high embedding capacity of the stego media without hampering the visual quality while ensuring the tightest possible security. It has become one of the biggest challenges to meet all these requirements. Embedding secret messages in dual images is one of the latest techniques that aims to cater to such needs. However, the few data hiding schemes that currently use dual images suffer from several shortcomings, such as security loopholes and poor visual quality with limited hiding capacity. In this paper, a dual-image-based reversible data hiding scheme has been proposed where multiple bits can be embedded within a pair of pixels dynamically by Huffman compression, where different pseudo codes are generated for different chunks of secret bits, which are embedded by Bit-Reversal Permutation (BRP) and X-OR operations. Finally, pixels are interchanged between the dual images based on the odd-even properties of the pixel values. The functional efficiency of the proposed scheme is evaluated by comparing the relevant parameters of different state-of-the-art schemes. It is established that the proposed scheme provides more than 64 dB PSNR value on average after embedding a maximum of 3,27,680 bits of secret data, which ensures good visual quality and high embedding capacity as compared to other state-of-the-art schemes, thereby benefiting innumerable applications across public and private sectors.
... H. Kasban proposed an image compression technique based on hierarchical vector quantization and Huffman encoding, which uses pyramid compression and lossy vector-quantization compression to compress the image background, and Huffman coding to compress the region of interest. This method can preserve the details of the region of interest under the maximum compression ratio [8]. Md. Atiqur Rahman et al. proposed a lossy compression method combining histogram modification and Huffman coding, which reduces the number of bits used in Huffman coding by changing the pixel values. ...
Article
Full-text available
Huffman coding is an important part of image compression technology; it is widely used, and the image compression platform discussed here is based on a GUI. This paper introduces the basic principle of the Huffman algorithm, compares it with arithmetic coding and run-length encoding, and expounds on the application of these three algorithms in JPEG compression. The AC algorithm combined block-based, fine-texture models and adaptive arithmetic coding in the given example. The RLE algorithm used an automatic threshold, direction judgment, and selective value counts to improve its compression efficiency. The JPEG algorithm adopted an adaptive quantization table to reduce the distortion rate of image compression. This paper demonstrates the possibility of a better compression rate and distortion rate for a hybrid compression algorithm by presenting improved examples of the basic image compression algorithms. In the future, the improved basic algorithms can be combined on the basis of the original JPEG algorithm, and different algorithms can be integrated into the GUI for different use environments.
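To make the comparison above concrete, the sketch below shows plain run-length encoding of a byte sequence. It is a minimal, generic illustration of the RLE idea only, not the adaptive-threshold RLE variant or the GUI platform described in the abstract.

```python
def rle_encode(data: bytes) -> list[tuple[int, int]]:
    """Encode a byte sequence as (value, run_length) pairs."""
    runs = []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        runs.append((data[i], j - i))
        i = j
    return runs

def rle_decode(runs: list[tuple[int, int]]) -> bytes:
    """Expand (value, run_length) pairs back into the original bytes."""
    return bytes(value for value, length in runs for _ in range(length))

if __name__ == "__main__":
    sample = bytes([0, 0, 0, 0, 255, 255, 7, 7, 7])
    encoded = rle_encode(sample)
    assert rle_decode(encoded) == sample
    print(encoded)  # [(0, 4), (255, 2), (7, 3)]
```

RLE only pays off on long runs of identical values, which is why JPEG applies it to quantized coefficient sequences that contain many consecutive zeros.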
... The reduction of complexity and the resulting improvements are very attractive for onboard compression. In Kasban et al. (2019), the authors explained the importance of feature coding for the image compressor. Due to the problem of compression error, the encoding process is one of the most complex processes in the field of image processing. ...
Article
Full-text available
Multimedia applications, such as image processing including image and video transfer, heavily rely on data reduction. Traditional hardware approaches to picture reduction use more space, energy, and processing time. The majority of current efforts use Golomb-Rice encoding, which has a larger memory requirement and higher computational difficulty. So, this research concentrated on a hardware-design-oriented probability run length (PRL) coding technique for lossless colour image compression. The block truncation coding (BTC) features of the compression process are used by the suggested PRL method. The proposed image compression hardware consists of various modules: a parameter calculator, a fuzzy table, a bitmap generator, BTC parameter training, prediction and error control, and a PRL-based finite state machine (PRL-FSM). The proposed image compressor uses the parameter calculator block, which estimates the block type based on the image pixel intensities for each sub-block. Thus, each block of the image is compressed using a new block type, generating a variable block size. The proposed method uses the PRL-BTC encoding method, which also calculates the probability of error between the compressed image and the test image. The process is iterated until the performance trade-off between hardware cost and compression ratio (CR) is achieved. Hence, both smooth regions and non-smooth regions of images are perfectly compressed by the probability-based block selection. The simulation results show that the proposed method achieves better area, power, and delay metrics, peak signal-to-noise ratio (PSNR), and CR compared with state-of-the-art approaches.
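The PRL scheme above builds on block truncation coding. The sketch below shows only the classic BTC step for a single block (a bitmap plus two reconstruction levels chosen to preserve the block mean and standard deviation); the 4x4 block size is an assumption, and none of the paper's hardware modules (fuzzy table, PRL-FSM, probability-based block selection) are modeled here.

```python
import numpy as np

def btc_block(block: np.ndarray) -> tuple[np.ndarray, float, float]:
    """Classic Block Truncation Coding of one image block.

    Returns a binary bitmap and two reconstruction levels chosen so that
    the block mean and standard deviation are preserved.
    """
    m = block.mean()
    sigma = block.std()
    bitmap = block > m
    q = int(bitmap.sum())          # number of pixels above the mean
    n = block.size
    if q in (0, n):                # flat block: a single level suffices
        return bitmap, float(m), float(m)
    low = m - sigma * np.sqrt(q / (n - q))
    high = m + sigma * np.sqrt((n - q) / q)
    return bitmap, float(low), float(high)

def btc_reconstruct(bitmap: np.ndarray, low: float, high: float) -> np.ndarray:
    """Rebuild the block from its bitmap and the two levels."""
    return np.where(bitmap, high, low)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    block = rng.integers(0, 256, size=(4, 4)).astype(float)
    bitmap, low, high = btc_block(block)
    print(btc_reconstruct(bitmap, low, high).round(1))
```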
... The basic framework of JPEG-2000 is shown in Fig. 7. The entropy encoding scheme used in JPEG-2000 is Huffman encoding (Kasban and Hashima, 2019), RLE (Qin et al., 2018) or arithmetic encoding (Xiang et al., 2018). ...
... Recently, it has become highly crucial to acquire a reconstructed image with high quality. The major intention behind image compression is to transmit images with a lower count of bits [13,33,37]. In image compression, the identification of redundant bits, the choice of the optimal encoding technique, and the transformation technique are indeed the key factors [20,26,38]. ...
... In 2019, Kasban et al. [13] established a technique for radiographic image compression based on the separation of radiographic images into ROI and image background. The image background was compressed via image pyramid compression, followed by a GLA compression algorithm based on VQ. ...
... Table 1 presents the review of the extant models. Some limitations of the existing methods are as follows: poor error measure values [28], the need for a more detailed experimental analysis [14], the BA-LBG algorithm requires additional parameters [12], the Exp-Golomb code method suffers from high time consumption [37], the rectangular transform approach was prone to information loss [33], the efficiency of the compression process needed to be increased in the hierarchical VQ and Huffman encoding approach [13], more experimental results are needed for the IDE-LBG model [26], and poor image secrecy [38]. ...
Article
Large amounts of storage are required to store the recent massive influx of fresh photographs uploaded to the internet. Many analysts created expert image compression techniques during the preceding decades to increase compression rates and visual quality. In this research work, a unique image compression technique is established for Vector Quantization (VQ) with the K-means Linde–Buzo–Gray (KLBG) model. As a contribution, the codebooks are optimized with the aid of a hybrid optimization algorithm. The projected KLBG model includes three major phases: an encoder for image compression, a channel for transmission of the compressed image, and a decoder for image reconstruction. In the encoder section, the image vector creation, optimal codebook generation, and indexing mechanism are carried out. The input image enters the encoder stage, where it is split into non-overlapping blocks. The proposed GMISM model hybridizes the concepts of the Genetic Algorithm (GA) and Slime Mould Optimization (SMO). Once the optimal codebook is generated successfully, every vector is indexed with an index number from the index table. These index numbers are sent through the channel to the receiver. The decoder portion includes the index table, the optimal codebook, and the reconstructed picture. The received index numbers are decoded using the index table. The optimally produced codebook at the receiver is identical to the codebook at the transmitter. The matching code words are allocated to the received index numbers, and the code words are organized so that the reconstructed picture is the same size as the input image. Eventually, a comparative assessment is performed to evaluate the proposed model. In particular, the computation time of the proposed model is 69.11%, 27.64%, 62.07%, 87.67%, 35.73%, 62.35%, and 14.11% better than the extant CSA, BFU-ROA, PSO, ROA, LA, SMO, and GA algorithms, respectively.
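As a rough illustration of the VQ encoder stage described above, the sketch below builds a codebook from non-overlapping image blocks with plain k-means (Lloyd/LBG-style iterations). The block size, codebook size, iteration count, and random test image are arbitrary assumptions, and the GA/SMO hybrid codebook optimization that is the paper's actual contribution is not included.

```python
import numpy as np

def make_blocks(image: np.ndarray, b: int = 4) -> np.ndarray:
    """Split a grayscale image into non-overlapping b x b blocks (row vectors)."""
    h, w = image.shape
    image = image[: h - h % b, : w - w % b]
    blocks = image.reshape(h // b, b, -1, b).swapaxes(1, 2).reshape(-1, b * b)
    return blocks.astype(float)

def kmeans_codebook(vectors: np.ndarray, k: int = 64, iters: int = 20, seed: int = 0):
    """Plain k-means (LBG-style) codebook training plus the index map."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), size=k, replace=False)]
    idx = np.zeros(len(vectors), dtype=int)
    for _ in range(iters):
        # nearest codeword for every training vector
        dists = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        idx = dists.argmin(1)
        # move each codeword to the centroid of its cell
        for j in range(k):
            members = vectors[idx == j]
            if len(members):
                codebook[j] = members.mean(0)
    return codebook, idx

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    image = rng.integers(0, 256, size=(64, 64))
    blocks = make_blocks(image)
    codebook, indices = kmeans_codebook(blocks)
    print(codebook.shape, indices.shape)   # (64, 16) (256,)
```

Only the index map and the codebook need to be transmitted; the decoder reconstructs each block by looking up its codeword, which is the decoder behavior the abstract describes.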
... Entropy coding is one of the most commonly used compression methods; common variants are Golomb coding [1,2], arithmetic coding [3,4], Asymmetric Numeral Systems (ANS) coding [5,6], and Huffman coding [7,8]. Among these methods, Golomb coding can only encode non-negative integers; it divides the integer to be encoded into two parts, quotient and remainder, according to the parameter N. When N is not a power of 2, the encoding complexity is high. Arithmetic coding is a full-sequence coding that encodes the data source as a binary value between 0 and 1. ANS coding is a compression algorithm that balances coding speed and algorithm complexity, and has two implementations: rANS and tANS. ...
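Since the snippet above explains Golomb coding in terms of a quotient and a remainder with respect to a parameter N, here is a minimal sketch of the simple case where N is a power of two (the Golomb-Rice form): the quotient is written in unary and the remainder in k plain binary bits. The parameter k in the example is an illustrative choice.

```python
def rice_encode(value: int, k: int) -> str:
    """Golomb-Rice codeword for a non-negative integer with parameter N = 2**k.

    The quotient is written in unary (q ones followed by a zero) and the
    remainder in k plain binary bits.
    """
    if value < 0:
        raise ValueError("Golomb-Rice encodes non-negative integers only")
    q, r = divmod(value, 1 << k)
    return "1" * q + "0" + format(r, f"0{k}b")

def rice_decode(bits: str, k: int) -> int:
    """Invert rice_encode for a single codeword."""
    q = bits.index("0")                  # unary part ends at the first zero
    r = int(bits[q + 1 : q + 1 + k], 2)  # k remainder bits
    return q * (1 << k) + r

if __name__ == "__main__":
    for n in (0, 5, 19):
        code = rice_encode(n, k=2)
        assert rice_decode(code, k=2) == n
        print(n, "->", code)   # e.g. 5 -> 1001 (quotient 1, remainder 1)
```

For general N that is not a power of two, the remainder needs a truncated binary code, which is where the extra encoding complexity mentioned above comes from.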
Article
Full-text available
In this thesis, a minimum-redundancy prefix coding with a higher compression ratio and lower time complexity is proposed for lossless compression of HD images. The compression algorithm is based on canonical Huffman coding: it preprocesses the data source to be compressed according to the image data features and then compresses the data in batches, exploiting the locally uneven features in the data, which improves the compression ratio by 1.678 times compared with traditional canonical Huffman coding. During implementation, a counting-sort method with lower time complexity and a code-length table construction method that does not rely on binary trees are used to reduce the complexity of the algorithm and enable real-time data processing by the system. Finally, the proposed compression algorithm is deployed on an FPGA to improve the encoding rate through parallel hardware circuits and pipeline design.
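The abstract mentions constructing the code-length table without binary trees; the sketch below shows the core of canonical Huffman code assignment, which derives every codeword from a symbol-to-length table alone. It assumes the code lengths have already been computed (e.g., by an ordinary Huffman pass) and does not include the preprocessing, batching, or FPGA pipeline described above.

```python
def canonical_codes(code_lengths: dict[str, int]) -> dict[str, str]:
    """Assign canonical Huffman codewords from a symbol -> code length table.

    Symbols are ordered by (length, symbol); each codeword is the previous
    one plus one, left-shifted whenever the length increases. No tree is
    stored: the length table alone is enough to rebuild the codes.
    """
    symbols = sorted(code_lengths, key=lambda s: (code_lengths[s], s))
    codes, code, prev_len = {}, 0, 0
    for s in symbols:
        length = code_lengths[s]
        code <<= length - prev_len        # pad with zeros on a length jump
        codes[s] = format(code, f"0{length}b")
        code += 1
        prev_len = length
    return codes

if __name__ == "__main__":
    # lengths as they might come out of a standard Huffman construction
    lengths = {"a": 1, "b": 2, "c": 3, "d": 3}
    print(canonical_codes(lengths))   # {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
```

Because the decoder only needs the lengths, the code table transmitted with the compressed data stays small, which is what makes canonical Huffman attractive for hardware implementations.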
... The Huffman tree is a data structure that is commonly used for data compression and coding. Later, Huffman trees were also used in other areas such as channel coding (Yin et al., 2021;Liu et al., 2018;Wu et al., 2012), text compression (Dath and Panicker, 2017;Bedruz and Quiros, 2015;Mantoro et al., 2017), image compression (Yuan and Hu, 2019;Kasban and Hashima, 2019;Patel et al., 2016), audio coding (Yi et al., 2019;Yan and Wang, 2011), etc. In recent years, with the development of deep learning, the idea of Huffman trees has been introduced. ...
... The algorithm is based on the frequency of occurrence of each symbol in the file being compressed, i.e., on statistical coding, meaning that the probability of a symbol has a direct bearing on the length of its representation [22]. The more likely the occurrence of a symbol, the shorter its bit representation will be. ...
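The frequency-to-code-length relationship described in this snippet is exactly what Huffman's greedy construction produces. Below is a minimal heap-based sketch: the two least frequent subtrees are merged repeatedly, so rare symbols end up deeper in the tree and receive longer codewords. It is a generic illustration, not the block-based method proposed in the article that follows.

```python
import heapq
from collections import Counter

def huffman_codes(text: str) -> dict[str, str]:
    """Build a Huffman code from symbol frequencies."""
    freq = Counter(text)
    if len(freq) == 1:                       # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # heap entries: (subtree frequency, tie-breaker, {symbol: code-so-far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # two least frequent subtrees
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

if __name__ == "__main__":
    codes = huffman_codes("aaaabbc")
    print(codes)   # the most frequent symbol 'a' receives the shortest codeword
```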
Article
Full-text available
Digital data compression aims to reduce the size of digital files in line with technological development. However, most data is distinguished by its large size, which requires a large storage capacity and a long time in transmission operations via the Internet. Therefore, a new file compression method is needed to reduce the image size, maintain its quality, utilize storage space, and minimize time. This paper aims to improve the compression rates of digital image compression by dividing the image into several blocks. Thus, a new near-lossless method using the Huffman Coding technique is proposed. Digital image compression techniques are classified as lossless and lossy. Huffman Coding is a lossless technique used in the proposed method to maintain image quality during compression. The proposed method consists of several steps: dividing the image into blocks, finding the lowest value in each block and subtracting it from the rest of the values in the same block, then subtracting one from the odd numbers, dividing all the values by two, and finally applying the Huffman Coding technique to the block. The proposed method is applied to a well-known set of gray and color images with different types and dimensions. Standard evaluation measures (i.e., PSNR, MSE, and CR) are used to evaluate the proposed method's performance. When compressing images using the proposed method, the results demonstrated a 0.11% enhancement when two-by-two blocks were used. It also achieved high compression rates (25%).
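A sketch of the block-preprocessing steps listed in the abstract (subtract the block minimum, make odd values even, halve) is given below. The final Huffman coding stage is omitted, and the 2x2 example block and the reconstruction rule are assumptions used only to show why the method is near-lossless: odd values lose one gray level.

```python
import numpy as np

def preprocess_block(block: np.ndarray) -> tuple[np.ndarray, int]:
    """Near-lossless preprocessing of one block, as described in the abstract.

    1. subtract the block minimum (kept as side information),
    2. make every value even by subtracting one from odd values
       (this step is where the method becomes near-lossless),
    3. halve the values, shrinking the symbol range before Huffman coding.
    """
    minimum = int(block.min())
    shifted = block.astype(int) - minimum
    evened = shifted - (shifted % 2)      # odd values lose their last bit
    return evened // 2, minimum

def undo_block(coded: np.ndarray, minimum: int) -> np.ndarray:
    """Approximate inverse: values are recovered to within one gray level."""
    return coded * 2 + minimum

if __name__ == "__main__":
    block = np.array([[52, 55], [61, 59]])
    coded, minimum = preprocess_block(block)
    print(coded, minimum)                 # smaller, more repetitive symbols
    print(undo_block(coded, minimum))     # reconstruction within +/- 1
```

The shrunken, more repetitive symbol values are then handed to a standard Huffman coder, which is where the compression-ratio gain reported above comes from.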
... The entropy encoding scheme used in JPEG-2000 is Huffman encoding [19], RLE [20], or arithmetic encoding [21]. ...
Preprint
Full-text available
In the realm of image processing and computer vision (CV), machine learning (ML) architectures are widely applied. Convolutional neural networks (CNNs) solve a wide range of image processing issues and can also address the image compression problem. Compression of images is necessary due to bandwidth and memory constraints. Helpful, redundant, and irrelevant information are three different forms of information found in images. This paper surveys recent techniques, mostly for lossy image compression, using ML architectures including different auto-encoders (AEs) such as convolutional auto-encoders (CAEs), variational auto-encoders (VAEs), and AEs with hyper-prior models, as well as recurrent neural networks (RNNs), CNNs, generative adversarial networks (GANs), principal component analysis (PCA), and fuzzy means clustering. We divide all of the algorithms into several groups based on architecture. We cover still image compression in this survey. Various discoveries are emphasized and possible future directions for researchers are outlined. Open research problems such as out of memory (OOM), striped region distortion (SRD), aliasing, and the compatibility of frameworks with the central processing unit (CPU) and graphics processing unit (GPU) simultaneously are explained. The majority of the publications surveyed in the compression domain are from the previous five years and use a variety of approaches.
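As a minimal illustration of the AE family surveyed above, the sketch below defines a small convolutional auto-encoder in Keras whose strided convolutions produce a low-resolution latent map acting as the compressed representation. The layer sizes, the 32x32 input, and the MSE-only training objective are illustrative assumptions and do not correspond to any specific surveyed model; in particular, there is no entropy model or rate term.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_cae(input_shape=(32, 32, 1), latent_channels=8):
    """Encoder downsamples to a small latent map (the 'compressed' code);
    the decoder upsamples it back to the input resolution."""
    inputs = tf.keras.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inputs)
    x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
    latent = layers.Conv2D(latent_channels, 3, padding="same", name="latent")(x)
    x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(latent)
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
    outputs = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)
    return Model(inputs, outputs, name="cae")

if __name__ == "__main__":
    cae = build_cae()
    cae.compile(optimizer="adam", loss="mse")   # rate is fixed by the latent size;
                                                # only distortion is trained here
    cae.summary()
```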