Fig 1 - uploaded by Bradley Erickson
The JPEG Algorithm. The image is first separated into 8 × 8 pixel subimages. The DCT of each subimage is then computed. These coefficients are then quantized using a quantization table (for this illustration, each value is divided by 5). Finally, the quantized values are encoded from the upper left corner, with a 'marker value' sent when there are no more nonzero values.

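To make the figure's final encoding step concrete, here is a minimal Python sketch of the zigzag scan with an end-of-block marker. It is illustrative only: the block contents, the string "EOB", and the helper names are assumptions made for this example rather than details taken from the JPEG specification.

```python
import numpy as np

def zigzag_order(n=8):
    # (row, col) pairs in the zigzag order used to serialize an n x n block:
    # walk the anti-diagonals, alternating direction on each one
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[1] if (rc[0] + rc[1]) % 2 == 0 else rc[0]))

def encode_block(quantized, eob_marker="EOB"):
    # Serialize a quantized block in zigzag order and stop after the last
    # nonzero value, sending a single end-of-block marker in place of the
    # trailing run of zeros.
    flat = [int(quantized[r, c]) for r, c in zigzag_order(quantized.shape[0])]
    last = max((i for i, v in enumerate(flat) if v != 0), default=-1)
    return flat[:last + 1] + [eob_marker]

# toy quantized block: a strong DC term plus a few low-frequency coefficients
block = np.zeros((8, 8), dtype=int)
block[0, 0], block[0, 1], block[1, 0] = 47, -6, 3
print(encode_block(block))   # -> [47, -6, 3, 'EOB']
```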

Source publication
Article
Full-text available
The volume of data from medical imaging is growing at exponential rates, matching or exceeding the decline in the costs of digital data storage. While methods to reversibly compress image data do exist, current methods only achieve modest reductions in storage requirements. Irreversible compression can achieve substantially higher compression ratio...

Contexts in source publication

Context 1
... JPEG (Joint Photographic Experts Group) compression standard is a widely used compression method that includes both reversible and irreversible techniques, and has been described in detail by Wallace. 2 Although JPEG was not designed for medical imagery (ie, it was not defined for 12- or 16-bit intensity scales), it has been adapted for radiologic images as described by Gillespy and Rowberg. 3 Figure 1 shows how the algorithm operates. It begins by dividing the image into 8 pixel × 8 pixel blocks. ...
Context 2
... STORAGE and image transfer requirements of medical images have hampered attempts to implement picture archiving and communications systems (PACS) and teleradiology. Image compression recently has been explored as a means of reducing costs of managing large image data sets. Lossless compression methods use redundancy within an image to more efficiently transmit image information while allowing perfect reconstruction, but these methods achieve only 2:1 to 4:1 reduction for medical images. 1 Irreversible or "lossy" techniques can reduce images by arbitrarily large ratios, but do not perfectly reproduce the original image. However, the reproduction may be good enough that there is no perceptible image degradation nor compromised diagnostic value. This report reviews the application of image compression techniques to medical imagery, focusing on the irreversible methods, including the JPEG2000 standard. Following that is a review of measures for evaluating compression algorithm performance and some of the recent results for wavelet compression. Most irreversible image compression techniques involve 3 steps: transformation, quantization, and encoding. Transformation is a lossless step in which the image is transformed from grayscale values in the spatial domain to coefficients in some other domain. One familiar transform is the Fourier transform used in reconstructing magnetic resonance images (MRI). Other transforms such as the discrete cosine transform (DCT) and discrete wavelet transform (DWT) are more commonly used for image compression. No loss of information occurs in the transformation step. Quantization is the step in which data integrity is lost. It attempts to minimize information loss by preferentially preserving the most important coefficients, whereas less important coefficients are roughly approximated, often as zero. Quantization may be as simple as converting floating point values to integer values. Finally, these quantized coefficients are encoded. This also is a lossless step in which the quantized coefficients are compactly represented for efficient storage or transmission of the image. The JPEG (Joint Photographic Experts Group) compression standard is a widely used compression method that includes both reversible and irreversible techniques, and has been described in detail by Wallace. 2 Although JPEG was not designed for medical imagery (ie, it was not defined for 12- or 16-bit intensity scales), it has been adapted for radiologic images as described by Gillespy and Rowberg. 3 Figure 1 shows how the algorithm operates. It begins by dividing the image into 8 pixel × 8 pixel blocks. The DCT of each image block is computed, resulting in an 8 × 8 block of spectral coefficients. Most of the information is concentrated in relatively few coefficients in the upper left corner of this DCT image. Quantization is performed next. In this step, the coefficients are approximated to values that are easy to represent in a small amount of space. There is an 8 × 8 table (called the quantization table), which contains the values by which corresponding coefficients are to be divided. By using different values, spectral frequencies that are more important to the visual system can be preserved preferentially over less-important frequencies. The resulting values are then rounded off to the nearest integer. JPEG encodes the quantized coefficients by reordering them in a zigzag pattern. This places the largest values first, with long strings of zeros at the end, which can be efficiently represented. 
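As a rough illustration of the transform-and-quantize steps just described, the sketch below computes the 2-D DCT of each 8 × 8 block and divides the coefficients by a quantization table. It is a simplified stand-in for JPEG, not the standard itself: the flat table of 5s mirrors the figure rather than JPEG's perceptually weighted tables, the entropy-coding step is omitted, and the function names, the level shift, and the use of SciPy's dctn are choices made for this example.

```python
import numpy as np
from scipy.fft import dctn, idctn

def blockwise_dct_quantize(image, q_table):
    """Transform + quantize on 8x8 blocks; the encoding step is omitted here."""
    h, w = image.shape
    assert h % 8 == 0 and w % 8 == 0, "pad the image to a multiple of 8 first"
    coeffs = np.empty((h, w), dtype=np.int32)
    for r in range(0, h, 8):
        for c in range(0, w, 8):
            block = image[r:r+8, c:c+8].astype(float) - 128.0    # level shift
            dct_block = dctn(block, type=2, norm='ortho')         # 2-D DCT
            coeffs[r:r+8, c:c+8] = np.round(dct_block / q_table).astype(np.int32)
    return coeffs

def blockwise_dequantize(coeffs, q_table):
    """Approximate inverse: multiply back by the table and invert the DCT."""
    h, w = coeffs.shape
    recon = np.empty((h, w), dtype=float)
    for r in range(0, h, 8):
        for c in range(0, w, 8):
            block = coeffs[r:r+8, c:c+8] * q_table
            recon[r:r+8, c:c+8] = idctn(block, type=2, norm='ortho') + 128.0
    return np.clip(recon, 0, 255)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)   # stand-in for an 8-bit image
q_table = np.full((8, 8), 5.0)                             # flat table, as in the figure
quantized = blockwise_dct_quantize(img, q_table)
approx = blockwise_dequantize(quantized, q_table)
```

Larger table entries zero out more of the high-frequency coefficients, which is where both the compression gain and the information loss come from.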
Although the JPEG lossy algorithm is good for many types of images, it has some drawbacks when applied to radiographic images. It degrades ungracefully at high compression ratios, with prominent artifacts at block boundaries, and it cannot take advantage of patterns larger than the 8 × 8 pixel blocks. Wavelet-based compression schemes generally outperform JPEG in terms of image quality at a given compression ratio, and the improvement can be dramatic at high compression ratios. 4 The DWT of an image is computed 5 (Fig 2) using a pair of high- and low-pass filters with special mathematical properties. Many such "wavelet" filters exist, but many groups have adopted the 9-tap/7-tap bi-orthogonal filters of Antonini et al, 5 because they seem to work well in real-world application. 6 The 2 filters split the image into 2 components or subbands in each direction (each is half the original size). This produces 4 subband images, 1 containing the low-frequency information, 1 each for the high-frequency information in the X or Y direction, and 1 for high-frequency information in both X and Y. The process is repeated on the low-frequency component, breaking it up into "high-low" and "low-low" components. If this process is performed n times, an n-level discrete wavelet transform is created. A 5-level discrete wavelet transform of an MRI is shown in Fig 2. The DWT is effective for compression because it effectively concentrates the information into a few coefficients, with most other coefficients being zero or close enough to zero that they can be considered zero without degrading the image. Most wavelet compression algorithms compute a 4- or 5-level DWT, quantize the resulting coefficients, and efficiently encode the quantized coefficients. The quantization is performed by dividing each coefficient by a quantization parameter and rounding off to the nearest integer. Having a larger quantization parameter will result in more coefficients that are zero and, hence, increases the compression ratio. Finally, encoding converts the coefficients into values that can be stored or transmitted efficiently. It is the way that the nonzero coefficients are encoded that differentiates the advanced wavelet compression techniques. Figure 2 graphically shows the hierarchical structure of the DWT; advanced techniques capitalize on this tree-based organization of the coefficients. The most well known of these techniques is embedded zerotree coding, described by Shapiro, 7 and enhanced by Said and Pearlman. 8,9 The latter approach, termed set partitioning in hierarchical trees (SPIHT), was one of the early successful advanced wavelet techniques: it yielded significantly better results than conventional wavelet compression with similar computational complexity. 9 In addition to resulting in efficient compression, it also transmitted the compressed bitstream in which approximations of the most important coefficients (regardless of location) are transmitted first. The values of these coefficients are progressively refined, and the most important remaining information, that which yields the largest distortion reductions, is transmitted next. It can be shown that such a transmission scheme (with uniform weighting) is the optimal way to decrease the root-mean-square (RMS) error in the reconstructed image. 9 Because JPEG was specified for computers that existed over a decade ago, and because new technologies like wavelet had surpassed JPEG for many types of images, the JPEG group set out to update the standard, which is now known as JPEG2000. 
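The subband bookkeeping described above can be sketched in a few lines. To stay self-contained, the example below uses the simple Haar filter pair instead of the 9-tap/7-tap bi-orthogonal filters mentioned in the text, and a random array stands in for an image; the function names are assumptions made for illustration, and the recursion on the low-low band is the point.

```python
import numpy as np

def haar_split(x, axis):
    """One-level Haar analysis along an axis: sums (low-pass) and differences
    (high-pass) of neighbouring samples; assumes even length along that axis."""
    a = np.take(x, range(0, x.shape[axis], 2), axis=axis).astype(float)
    b = np.take(x, range(1, x.shape[axis], 2), axis=axis).astype(float)
    return (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)

def dwt2_level(image):
    """Split an image into LL plus three detail subbands, each half-size in both directions."""
    low, high = haar_split(image, axis=0)     # filter along rows
    ll, lh = haar_split(low, axis=1)          # then along columns of the low band
    hl, hh = haar_split(high, axis=1)         # and along columns of the high band
    return ll, (lh, hl, hh)

def dwt2(image, levels):
    """n-level DWT: keep splitting the low-low band, as described in the text."""
    details = []
    ll = image
    for _ in range(levels):
        ll, detail = dwt2_level(ll)
        details.append(detail)
    return ll, details

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(256, 256))
ll, details = dwt2(img, levels=5)
print(ll.shape, [d[0].shape for d in details])   # (8, 8) and detail bands from 128x128 down to 8x8
```

Running five levels on a 256 × 256 input leaves an 8 × 8 low-low band plus detail subbands shrinking from 128 × 128 down to 8 × 8, the same layered structure as the 5-level decomposition shown in Fig 2.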
For this paper, JPEG will refer to the older compression method, wavelet will refer to the family of specific wavelet methods, and JPEG2000 will refer to the developing standard. The JPEG2000 effort has been substantial. This group identified a number of shortcomings of the JPEG standard that JPEG2000 would address. Among these were: 1. Better performance at high compression ratios 2. A single codestream that would support irreversible and lossless compression ...

Similar publications

Article
Full-text available
We report on the test results of a recently fabricated data reader for our Write Once, Read Forever (WORF) high-density, very long-term archival data storage system. At the 2014 and 2015 IS&T Archiving conferences we described our technology and our progress establishing that multi-state data can be stored on monochromatic, dye-free, silver halide...

Citations

... Nevertheless, the utilisation of proprietary compression techniques substantially amplifies the expenses and exertion involved in transmitting data across diverse systems, hence compelling the implementation of digital communication standards [4]. It is important to acknowledge that prior assessments of medical image compression techniques have been documented in published literature [5][6][7][8][9][10][11][12][13][14]. ...
Chapter
Book series on Medical Science gives students and doctors from all over the world the opportunity to publish their research work across Preclinical Sciences, Internal Medicine, Surgery, and Public Health. This book series aims to inspire innovation and promote academic quality through outstanding publications of scientists and doctors. It also provides a premier interdisciplinary platform for researchers, practitioners, and educators to publish the most recent innovations, trends, and concerns, as well as practical challenges encountered and solutions adopted, in the fields of Medical Science. It also provides a remarkable opportunity for the academic, research, and medical communities to address new challenges, share solutions, and discuss future research directions.
... Several methods evaluate the clinical acceptance of the compression level [46]. The first is the numerical analysis of the pixel before and after compression [47]. This simple method is recommended for calculating the mean pixel error for the compressed image but has no correlation with radiologists' evaluations and therefore has no clinical significance. ...
Article
Full-text available
Advanced microscopic techniques such as high-throughput, high-content, multispectral, and 3D imaging could include many images per experiment requiring hundreds of gigabytes (GBs) of memory. Efficient lossy image-compression methods such as joint photographic experts group (JPEG) and JPEG 2000 are crucial to managing these large amounts of data. However, these methods can achieve good visual quality at high compression ratios but do not necessarily maintain the medical data and information integrity. This paper proposes a novel and improved medical image compression method based on color wavelet difference reduction. Specifically, the proposed method is an extension of the standard wavelet difference reduction (WDR) method using mean co-located pixel difference to select the optimum quantity of color images that present the highest similarity in the spatial and temporal domain. The images with large spatiotemporal coherence are encoded as one volume and evaluated regarding the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). The proposed method is evaluated in the challenging histopathological microscopy image analysis field using 31 slides of colorectal cancer. It is found that the perceptual quality of the medical image is remarkably high. The results indicate that the PSNR improvement over existing schemes may reach up to 22.65 dB compared to JPEG 2000. Also, it can reach up to 10.33 dB compared to a method utilizing discrete wavelet transform (DWT), leading us to implement a mobile and web platform that can be used for compressing and transmitting microscopic medical images in real time.
... CT IQA metrics have been researched considering their own characteristics. MSE, PSNR, and SSIM have been used for CT IQA as basic guidelines for the evaluation of algorithms; however, they do not correlate well with human perception and have little relationship with diagnostic utility [22]. Some classical methods for the estimation of CT IQA are the modulation transfer function and the noise power spectrum (NPS) [23,24]. ...
Article
Full-text available
Accurate image quality assessment (IQA) is crucial to optimize computed tomography (CT) image protocols while keeping the radiation dose as low as reasonably achievable. In the medical domain, IQA is based on how well an image provides a useful and efficient presentation necessary for physicians to make a diagnosis. Moreover, IQA results should be consistent with radiologists’ opinions on image quality, which is accepted as the gold standard for medical IQA. As such, the goals of medical IQA are greatly different from those of natural IQA. In addition, the lack of pristine reference images or radiologists’ opinions in a real-time clinical environment makes IQA challenging. Thus, no-reference IQA (NR-IQA) is more desirable in clinical settings than full-reference IQA (FR-IQA). Leveraging an innovative self-supervised training strategy for object detection models by detecting virtually inserted objects with geometrically simple forms, we propose a novel NR-IQA method, named deep detector IQA (D2IQA), that can automatically calculate the quantitative quality of CT images. Extensive experimental evaluations on clinical and anthropomorphic phantom CT images demonstrate that our D2IQA is capable of robustly computing perceptual image quality as it varies according to relative dose levels. Moreover, when considering the correlation between the evaluation results of IQA metrics and radiologists’ quality scores, our D2IQA is marginally superior to other NR-IQA metrics and even shows performance competitive with FR-IQA metrics.
... 2,3 With the application of image compression and decompression techniques, such expectations and challenges can be met to preserve all clinically relevant information. [4][5][6][7][8] These techniques can be divided into two types: lossless and lossy. Lossless compression methods are the group of algorithms that allow you to recover all data to its initial intact state after the decompression process, therefore these methods are known to result in a lower compression ratio. ...
... In 1992, the Joint Photographic Experts Group developed the JPEG lossy compression algorithm, which has been introduced as an ISO standard and is widely used [7,22]. Despite the relatively smaller size of the JPEG file format [23,24], it did not impair the diagnosis of IRR and/or ERR. Furthermore, one of the main advantages of using images with greater compression is easier digital storage and online transmission. ...
Article
Full-text available
Objectives To evaluate the influence of different image file formats of digital radiographic images on the diagnosis of external (ERR) and internal root resorption (IRR). Materials and methods Thirty-four human teeth were selected. For ERR, 20 teeth were used (10 control and 10 with simulated ERR), and for IRR, 14 teeth were used (before and after IRR simulation). Digital periapical radiographs were acquired using the Digora Toto system and exported in four different image file formats: TIFF, BMP, PNG, and JPEG, totaling 192 radiographs. Five examiners evaluated the images using the JPEGView software and scored the detection of ERR or IRR on a 5-point scale. Sensitivity, specificity, accuracy, and the area under the ROC curve were obtained for the diagnosis of ERR and IRR in the different image file formats. Two-way ANOVA compared the diagnostic values between the file formats and the Kappa test assessed intra- and inter-examiner agreement. The significance level was set at 5% (α = 0.05). Results The mean values of intra-examiner agreement were substantial (0.740) for ERR and almost perfect (0.836) for IRR and, inter-examiner was fair (0.263) and moderate (0.421), respectively. No statistically significant differences were found among the different file formats for the diagnostic values of ERR and IRR. Conclusion The file format of digital radiographs does not influence the diagnosis of ERR and IRR. Clinical relevance Digital radiographic images may be susceptible to computational factors; however, they can be stored in multiple file formats without affecting the diagnosis of dental root resorptions.
... Image compression is also an important step in teleradiology, as the original image file sizes are too large for transmission. Compressing these images therefore makes transmission easier, at the expense of some image quality (Erickson, 2002). ...
Article
Full-text available
Teleradiology is the practice of transmitting radiological reports generated via X-rays, CT scan, and MRI from one part of the world, or one location, to another in order to obtain consultation and interpretation from expert radiologists. In veterinary medicine, it first became commercially available in the early 1990s. It was initially poorly developed owing to limited internet speeds and a lack of essential software, but the practice of teleradiology in veterinary medicine has since become widespread thanks to proper broadband connections and the availability of image compression, PACS, and DICOM software. This review aims to summarize the goals, applications, transmission, methods of image acquisition and digitalization, types of teleradiology, its legal issues, and the market in veterinary medicine.
... [22][23][24] The lossy compression method consists of 3 steps-transformation, quantization, and encoding-and eliminates less important information, thereby reducing transmission and storage requirements. 24,25 The theoretical major loss in image information occurs during the quantization step. 25,26 However, the present study revealed no significant influence on the diagnosis of proximal caries lesions between the file formats assessed. ...
... 24,25 The theoretical major loss in image information occurs during the quantization step. 25,26 However, the present study revealed no significant influence on the diagnosis of proximal caries lesions between the file formats assessed. Thus, considering the benefits of storing and sharing files of smaller sizes, the JPEG file format can be used without concern. ...
Article
Objectives This in-vitro study aimed to evaluate the influence of the radiographic image file format and the transmission application (app) on the diagnosis of proximal caries lesions. Study Design Twenty bitewing radiographs of 40 posterior human teeth placed in phantoms were acquired using the Digora Toto digital sensor. All images were exported as TIFF, BMP, PNG, and JPEG and transmitted online via WhatsApp and Messenger. Five examiners evaluated the radiographs with no online transmission and as transmitted through the 2 apps for the presence of proximal caries lesions by using a 5-point scale. The reference standard for caries lesions was established using micro-computed tomography. Two-way ANOVA compared values of sensitivity, specificity, accuracy, and area under the receiver operating characteristic (ROC) curve (Az) (α=0.05). The kappa test was used to assess intra- and interexaminer agreements. Results Sensitivity, specificity, accuracy, and Az values showed no significant differences in the diagnosis of proximal caries lesions between the different image file formats (p ≥ 0.773) and transmission apps (p ≥ 0.608). Intraexaminer agreement was substantial (κ = 0.742) and interexaminer agreement was moderate (κ = 0.475). Conclusion The digital file format and transmission app did not influence the radiographic diagnosis of proximal caries lesions.
... Also in the medical imaging field, to cope with the tremendously increasing volume of digital images, selected studies have investigated the effects of lossy compression on the quality of medical pictures and their diagnostic potential (Erickson, 2002;Seeram, 2006;Flint, 2012). Although compression ratios of the raw (Bae & Whiting, 2001) or final images between 1:5 and 1:15 seem adequate to guarantee correct medical diagnosis (Koff et al., 2008;ESR, 2011), the results vary between studies and strongly depend on the imaging modality and used validation metrics. ...
Article
Full-text available
Modern detectors used at synchrotron tomographic microscopy beamlines typically have sensors with more than 4–5 mega-pixels and are capable of acquiring 100–1000 frames per second at full frame. As a consequence, a data rate of a few TB per day can easily be exceeded, reaching peaks of a few tens of TB per day for time-resolved tomographic experiments. This data needs to be post-processed, analysed, stored and possibly transferred, imposing a significant burden onto the IT infrastructure. Compression of tomographic data, as routinely done for diffraction experiments, is therefore highly desirable. This study considers a set of representative datasets and investigates the effect of lossy compression of the original X-ray projections onto the final tomographic reconstructions. It demonstrates that a compression factor of at least three to four times does not generally impact the reconstruction quality. Potentially, compression with this factor could therefore be used in a transparent way to the user community, for instance, prior to data archiving. Higher factors (six to eight times) can be achieved for tomographic volumes with a high signal-to-noise ratio as it is the case for phase-retrieved datasets. Although a relationship between the dataset signal-to-noise ratio and a safe compression factor exists, this is not simple and, even considering additional dataset characteristics such as image entropy and high-frequency content variation, the automatic optimization of the compression factor for each single dataset, beyond the conservative factor of three to four, is not straightforward.
... However, the issue of image quality limits the development of lossy image compression techniques. [1][2][3] Several quantitative and qualitative methods are used to ascertain the performance of image quality for lossy image compression. Distortion assessment approaches, such as mean-squared error (MSE) and peak signal-to-noise ratio (PSNR), are frequently used to evaluate the quality of reconstructed images. ...
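For reference, here is a minimal sketch of the two error-based metrics named in this snippet, assuming 8-bit images (peak value 255); the function names are placeholders chosen for this example.

```python
import numpy as np

def mse(original, reconstructed):
    """Mean squared error between two images of the same shape."""
    diff = original.astype(float) - reconstructed.astype(float)
    return float(np.mean(diff ** 2))

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means the reconstruction is closer."""
    m = mse(original, reconstructed)
    if m == 0:
        return float('inf')   # identical images
    return 10.0 * np.log10(peak ** 2 / m)
```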
Article
Full-text available
Image quality can be measured visually. In the human visual system, a compressed image can be judged by the human eye. Image quality may not be perceived to decline in a region with low compression. However, image quality clearly declines in a region with high compression. As image compression increases, image quality gradually transitions from visually lossless to lossy. In this study, we aim to explain this phenomenon. A few images from different datasets were selected and compressed using JJ2000 and Apollo, which are well‐known image compression algorithms. Then, error‐based and correlation‐based metrics were applied to these images. The correlation‐based metrics agree with human‐vision evaluations in experiments, but the error‐based metrics do not. Inspired by the positive result of the correlation‐based metrics, a new metric named the simple correlation factor (SCF) was proposed to explain the aforementioned phenomenon. The results of the SCF show good consistency with human‐vision results for several datasets. In addition, the computation efficiency of the SCF is better than that of the existing correlation‐based metrics.
... The lossy (irreversible) compression offers the advantage of greater compression at the cost of loss of some information whereas, LS (reversible) compression offers the moderate gain without any loss of information (Erickson, 2002). In medical imaging, there are a lot of controversies in the usage of lossy compression techniques (Koff and Shulman, 2006). ...
Chapter
Full-text available
This paper presents a hybrid feature selection (HFS)-based feature fusion system that selects the best features among multiple feature sets to classify liver ultrasound images into four classes: normal, chronic, cirrhosis, and hepatocellular carcinomas evolved over cirrhosis. After extracting features by gray-level co-occurrence matrices, gray-level difference matrix, and ranklet transform, the system utilizes HFS to select features. Here, the HFS method is proposed by combining filter (ReliefF) and wrapper [sequential forward selection (SFS)] methods. Firstly, the ReliefF method ranks the features and preselection is done by discarding low-ranked features. Secondly, the SFS method finds the optimal feature set. The advantage of the proposed method is faster feature selection, since the filter method rapidly reduces the effective number of features under consideration. Thereafter, to take advantage of complementary information from different feature sets, feature fusion schemes are implemented: serial feature combination, serial feature fusion, and hierarchical feature fusion. Experiments are conducted to evaluate (1) the effectiveness of the extracted features and the proposed HFS method, (2) the effectiveness of the feature fusion schemes, and (3) performance based on the number of selected features, computational time, and accuracy of ReliefF, SFS, sequential backward selection, and the proposed method. Finally, the HFS-based hierarchical fusion set obtained an accuracy of 95.2% with k-nearest neighbor.