Figure 5. An example of LZ77 encoding. (Available via CC BY license.)
Source publication
Article
Full-text available
High-throughput experimentation refers to carrying out a large number of tests and obtaining various characterizations in a single experiment using a highly integrated sample or facility; it is widely adopted in biology, medicine, and materials science. Consequently, storing and processing the data bring new challenges because of the large amount of real-time data, especially h...

Context in source publication

Context 1
... the core idea is to exploit the structure of the data in order to compress it. An example of LZ77 encoding is shown in Figure 5. The main steps of the LZ77 algorithm are as follows: ...
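The steps themselves are truncated in the excerpt above. As a rough, illustrative sketch only (not the article's implementation, and with arbitrary window and look-ahead sizes), an LZ77-style encoder that slides a search window over the input and emits (offset, length, next character) triples could look like this:

# Minimal LZ77-style encoder sketch (illustrative; not taken from the article).
# Emits (offset, length, next_char) triples; offset = length = 0 means no match.

def lz77_encode(data: str, window_size: int = 32, lookahead_size: int = 8):
    triples = []
    pos = 0
    while pos < len(data):
        window_start = max(0, pos - window_size)
        best_offset, best_length = 0, 0
        # Search the window for the longest match with the look-ahead buffer.
        for start in range(window_start, pos):
            length = 0
            while (length < lookahead_size
                   and pos + length < len(data) - 1
                   and data[start + length] == data[pos + length]):
                length += 1
            if length > best_length:
                best_offset, best_length = pos - start, length
        # Output the match plus the first character that follows it.
        triples.append((best_offset, best_length, data[pos + best_length]))
        pos += best_length + 1
    return triples

print(lz77_encode("abracadabra abracadabra"))

Each triple says "go back offset characters, copy length characters, then append the next character"; a literal with no match is encoded as (0, 0, character).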

Similar publications

Preprint
Full-text available
Ontologies are formal representations of concepts and complex relationships among them. They have been widely used to capture comprehensive domain knowledge in areas such as biology and medicine, where large and complex ontologies can contain hundreds of thousands of concepts. Especially due to the large size of ontologies, visualisation is useful...
Article
Full-text available
Abstract. The results of a bioacoustic study carried out on larvae of Phileurus didymus (L.), which emit a sound by compressing and releasing air (forced air) when disturbed, are presented. The average duration and frequency of the acoustic patterns obtained are reported, along with illustrations. The biological functions of these sounds...
Article
Full-text available
Spontaneous self-organization (clustering) in magnetically oriented bacteria arises from attractive pairwise hydrodynamics, which are directly determined through experiment and corroborated by a simple analytical model. Lossless compression algorithms are used to identify the onset of many-body self-organization as a function of experimental tuning...

Citations

... The deviating character indicates that a new character with an unrecognized pattern has been found [5]. This algorithm is particularly advantageous when identifiable patterns occur frequently in the processed image, allowing it to represent long streams of patterns with only one tuple. ...
Article
This paper aims to provide a comprehensive analysis of the pros and cons of various lossless image compression algorithms for computer scientists, including RLE, Huffman coding, and LZ77. The pros and cons of different compression methods will be examined by various metrics such as space efficiency, space complexity, and time complexity. Each method will be tested upon various image file types, including BMP, TIFF, PPM, JPG, and PNG. The results indicated that Huffman encoding was particularly effective for PPM images, outperforming RLE and LZ77 with notably higher compression ratios. RLE had slightly higher compression ratios in compressing BMP files. TIFF images exhibit lower compressibility compared to BMP and PPM, but with Huffman encoding still demonstrating superior results. However, when lossless compression algorithms are applied to JPG and PNG images, they yield negative outcomes, indicating that JPG and PNG files have limited compressibility due to prior compression.
... This algorithm starts by searching the window for the longest match with the start of the look-ahead buffer and outputs the pointer of that match [8]. As seen in the algorithm, the match pointer comes out as a triple of elements: dis (the offset), ln (the length of the match), and char (the next character after the matched sequence) [9,10]. If there is no match, the algorithm outputs a null pointer (offset and length both 0) [11], together with the first character from the input (see the decoding sketch after this citation entry). ...
... Using the feature of high equivalence in color and shape, Ref. [16] aims to design a compression algorithm that results in high throughput. Key features of the target image are extracted and used for compression. ...
Article
Unlike conventional networks, the nodes in a Wireless Sensor Network have constrained energy, memory, and processing capabilities. These nodes, deployed in a constrained environment, monitor changes in the surrounding environment and transfer them to the cluster heads. Each node has its own memory, battery, and transceivers. Efficient utilization of these resources can extend the network lifetime. To securely transfer data in the form of images, an efficient and cost-effective image compression algorithm is required. Hence, in this paper, a detailed review of image compression algorithms has been carried out. The selected algorithms are compared in terms of performance metrics such as compression ratio, compression time, speed, and type of data. The results showed that the algorithm proposed by Borici and Arber performs best in terms of compression ratio, providing a better compression ratio than the other algorithms.
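The triple-based encoding quoted in the citation context above, including the null pointer (offset and length of 0) for unmatched characters, can be illustrated with a small decoding sketch. This is an assumed, illustrative implementation rather than code from any of the cited papers: each (offset, length, next_char) triple is resolved by copying length characters starting offset positions back in the already decoded output and then appending the next character.

# Decoder sketch for (offset, length, next_char) triples (illustrative only).

def lz77_decode(triples):
    out = []
    for offset, length, next_char in triples:
        start = len(out) - offset
        # Copy character by character so overlapping matches are handled correctly.
        for i in range(length):
            out.append(out[start + i])
        # A null pointer (0, 0, char) simply appends the literal character.
        out.append(next_char)
    return "".join(out)

Copying one character at a time (rather than slicing) matters because a match may overlap the region still being copied, for example in runs of a repeated character.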
Preprint
Full-text available
Constant population growth means that food production must grow at the same rate. Livestock provides one-third of the human protein base as meat and milk. To improve cattle health and welfare, pastoral farming employs Precision Livestock Farming (PLF). Implementing this technique brings the challenge of minimizing energy consumption, since farmers do not have enough energy or devices to transmit the large volumes of information received from their farm monitors. Therefore, in this project, we design an algorithm to compress and decompress images, reducing energy consumption with as little information loss as possible. Initially, related work was read and analyzed to learn about techniques used in the past and to stay up to date with current work. We implemented the Seam Carving and LZW algorithms. Compressing all of the images (around 1000) took 5 hours 10 minutes. We obtained a compression ratio of 1.82:1 with an average time of 13.75 s per file, and a decompression ratio of 1.64:1 with an average time of 7.5 s per file. Memory consumption was between 146 MB and 504 MB, and time consumption ranged from 30.5 s for 90 MB to 12192 s for 24410 MB across all files.
Article
Composition-graded materials can be designed to rapidly establish structure-property relationships with high-throughput methods. In this study, a stainless steel 316L (SS316L) - 431 (SS431) graded material with the SS316L content ranging from 0 to 100 wt.% was fabricated by directed energy deposition additive manufacturing. The composition, phase constitution, microstructure, and corrosion behavior of the graded material were characterized by laser-induced breakdown spectroscopy (LIBS), micro-beam X-ray diffraction (XRD), scanning electron microscopy (SEM), and high-throughput local electrochemical techniques, respectively. The results show that the relative amount of the γ-Fe phase increases with increasing SS316L content, leading to a noticeable decline in microhardness from 578 to 205 HV. Accordingly, the dominant microstructure varies from equiaxed dendrites to a mixture of dendritic and cellular structures. As the SS316L content increases, the reduced carbides at grain boundaries and the increasing compactness of the passive film improve the general and pitting corrosion resistance of the material. When the SS316L content is higher than 50 wt.%, the Volta potential and pitting susceptibility are similar to those of the pure SS316L part, while the microhardness is higher. Such a high-throughput screening process allows compositions containing more than 50 wt.% SS316L to be reliably selected as potential components where high corrosion resistance and wear resistance are required.