STATISTICAL ANALYSIS OF IMAGE QUALITY MEASURES
İsmail Avcıbaş, Bülent Sankur
Department of Electrical and Electronic Engineering, Boğaziçi University, İstanbul, Turkey
avcibas@busim.ee.boun.edu.tr sankur@boun.edu.tr
ABSTRACT
In this paper we conduct a statistical analysis of the sensitivity
and consistency behavior of objective image quality measures.
We categorize the quality measures and compare them for still
image compression applications. The measures have been
categorized into pixel difference-based, correlation-based,
edge-based, spectral-based, context-based and HVS-based
(Human Visual System-based) measures. The mutual
relationships between the measures are visualized by plotting
their Kohonen maps. Their consistency and sensitivity to
coding as well as additive noise and blur are investigated via
ANOVA analysis of their scores. It has been found that
measures based on HVS, on phase spectrum and on
multiresolution mean square error are most discriminative to
coding artifacts.
1 INTRODUCTION
Image quality measures are figures of merit used for the
evaluation of imaging systems or of coding/processing
techniques. Our main goal in this study is to investigate
the statistical discriminative power of several quality
measures to distortion due to compression, additive
noise and blurring. We determine the commonalities
between these measures with a view to ultimately extract
and combine a subset of measures which will satisfy
most of the image quality desiderata [1,2,3].
We consider in this paper six categories of distortion
measures, namely: a) pixel difference-based, b)
correlation-based, c) edge-based, d) spectral-based, e)
context-based and f) HVS-based (Human Visual
System-based) measures. We compute numerical scores
for 30 such measures in total. In our comparisons of the
image quality measures for compression, we used two
well-known compression algorithms: the DCT-based
JPEG [6] and the wavelet-based Set Partitioning in
Hierarchical Trees (SPIHT) algorithm [7].
2 IMAGE QUALITY MEASURES
The formulae of the image quality measures are given in
Table A. We denote the multispectral components of an
image at pixel position (i, j) and in band k as C_k(i, j),
where k = 1, ..., K and i, j = 1, ..., N. The boldface
symbols C(i, j) and Ĉ(i, j) denote the multispectral
pixel vectors of, respectively, the original and the
distorted image. C itself denotes a generic K-band image.
2.1 Measures Based on Pixel Difference
In the pixel difference-based measures D1-D8 in Table
A, the first four correspond, respectively, to the Mean
Square Error, the error in the L*a*b* space, the
Minkowsky measure and the Maximum Difference. The
noise-prone nature of the maximum difference metric can
be mitigated by ranking the pixel vector differences in
descending order and taking the r.m.s. of the largest r of
them (D5). D6 is the Czenakowski distance, D7 is the
difference over a neighborhood, and D8 is the
multispectral multiresolution distance measure.
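As a concrete illustration, the first and sixth of these measures can be sketched for a K-band image stored as a (K, N, N) array. This is a plain NumPy sketch, not the authors' code; the function names are ours:

```python
import numpy as np

def mse(orig, dist):
    """D1: mean square error between K-band images of shape (K, N, N)."""
    diff = orig.astype(float) - dist.astype(float)
    return np.mean(diff ** 2)

def czenakowski(orig, dist, eps=1e-12):
    """D6: Czenakowski distance averaged over pixel positions.
    Assumes non-negative band values; eps guards all-zero pixels."""
    o = orig.astype(float)
    d = dist.astype(float)
    num = 2.0 * np.minimum(o, d).sum(axis=0)   # 2 * sum_k min(C_k, C^_k)
    den = (o + d).sum(axis=0) + eps            # sum_k (C_k + C^_k)
    return np.mean(1.0 - num / den)
```

Both measures vanish for identical images and grow with the amount of distortion.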
2.2 Correlation Based Measures
Three correlation measures often referred to in the
literature [9] are: Structural Content C1, Normalized
Cross Correlation C2, and Image Fidelity C3. A variant
of the correlation-based measures can be obtained by
considering the absolute mean and variance statistics,
C4 and C5, of the angles between the pixel vectors of the
original and coded images.
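For instance, C2 and the mean-angle statistic C4 might be sketched as follows, again assuming (K, N, N) arrays (a sketch, not the authors' implementation):

```python
import numpy as np

def normalized_cross_correlation(orig, dist):
    """C2: per-band sum(C * C^) / sum(C^2), averaged over the K bands."""
    o = orig.astype(float)
    d = dist.astype(float)
    return np.mean((o * d).sum(axis=(1, 2)) / (o ** 2).sum(axis=(1, 2)))

def mean_angle(orig, dist, eps=1e-12):
    """C4: mean angle between original and distorted pixel vectors."""
    o = orig.astype(float)
    d = dist.astype(float)
    dot = (o * d).sum(axis=0)                               # <C, C^> per pixel
    norms = np.linalg.norm(o, axis=0) * np.linalg.norm(d, axis=0) + eps
    cos_theta = np.clip(dot / norms, -1.0, 1.0)
    return np.mean(np.arccos(cos_theta))
```

For identical images C2 equals 1 and the mean angle is zero; both drift away from these values as distortion increases.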
2.3 Edge Quality Measures
In the perception of scene contents, edges play a
dominant role. We first used the edge distance measure
introduced by Pratt (E1). The second measure is the edge
stability measure (E2), defined as the consistency
Q(i, j) of edge evidence across different scales in both
the original and coded images. The third measure in this
category is based on surface properties, i.e., the mean
and Gaussian curvatures (E3).
2.4 Spectral Distance Measures
The distortion occurring in the phase and magnitude
spectra, φ(u, v) = arctan Γ(u, v) and M(u, v) = |Γ(u, v)|,
is measured by the spectral magnitude S1, spectral phase
S2 and combined S3 measures (Γ_k(u, v) and Γ̂_k(u, v)
denote the k'th band spectra of the original and distorted
images). Alternatively, the spectral distortion can be
calculated over transforms taken over blocks of the image
bands; a global quality metric is then obtained as a
statistic of the block-based distortions. The measures
S4, S5, S6 have been obtained as the median of the block
distortions (the magnitude, phase and combined penalty
figures J_M^l, J_φ^l, J^l). Block sizes of 8 to 32 have
been found adequate.
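For a single band, the magnitude and phase distortions can be sketched with the FFT as follows. This is a simplified sketch: phase differences are not reduced modulo 2π, and the normalization is ours:

```python
import numpy as np

def spectral_magnitude_distortion(orig, dist):
    """S1-style measure: mean squared difference of DFT magnitude spectra."""
    m_o = np.abs(np.fft.fft2(orig.astype(float)))
    m_d = np.abs(np.fft.fft2(dist.astype(float)))
    return np.mean((m_o - m_d) ** 2)

def spectral_phase_distortion(orig, dist):
    """S2-style measure: mean squared difference of DFT phase spectra."""
    p_o = np.angle(np.fft.fft2(orig.astype(float)))
    p_d = np.angle(np.fft.fft2(dist.astype(float)))
    return np.mean((p_o - p_d) ** 2)
```

A circular shift of the image leaves the magnitude spectrum untouched but changes the phase spectrum, which illustrates why the two measures respond to different kinds of distortion; the blockwise variants S4-S6 apply the same idea over 8-32 pixel tiles and take the median.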
2.5 Context Measures
Most of the compression algorithms and computer vision
tasks are based on the neighborhood information of the
pixels. In this sense any loss of contextual information
could be a good measure of image quality. Such
statistical information lies in the context probability,
that is, the p.m.f. (p) of pixels in a neighborhood.
Changes in the context probabilities can be used to track
the loss in quality.
The high-dimensional (at least 9-12) p.m.f. is estimated
by judicious use of kernel estimation and cluster
analysis for multispectral images [8]. Once the p.m.f.'s
are obtained, the rate-distortion-based measure R(p)
(X1) and various f-divergence-based measures X2, X3, X4
can be computed for image quality measurement.
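As an illustration, here is a discrete version of the Matusita distance X4 applied to toy context p.m.f.'s. The p.m.f. here is only a joint histogram of horizontal pixel pairs, a low-dimensional stand-in for the 9-12 dimensional neighborhood p.m.f. of [8]; both function names are ours:

```python
import numpy as np

def matusita_distance(p, q, r=2):
    """X4 for discrete p.m.f.'s: ( sum |p^(1/r) - q^(1/r)|^r )^(1/r)."""
    p = np.asarray(p, float).ravel()
    q = np.asarray(q, float).ravel()
    return np.sum(np.abs(p ** (1.0 / r) - q ** (1.0 / r)) ** r) ** (1.0 / r)

def pair_context_pmf(img, bins=16):
    """Toy context p.m.f.: joint histogram of horizontal neighbor pairs
    of an 8-bit image, normalized to sum to one."""
    a = (img[:, :-1].ravel().astype(int) * bins) // 256
    b = (img[:, 1:].ravel().astype(int) * bins) // 256
    h, _, _ = np.histogram2d(a, b, bins=bins, range=[[0, bins], [0, bins]])
    return h / h.sum()
```

The distance is zero when the context p.m.f.'s of the original and distorted images coincide and grows as contextual information is lost.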
2.6 Human Visual System Based (HVS) Measures
The incorporation of even a simplified HVS model into
objective measures reportedly [1, 2, 10] leads to a better
correlation with the subjective ratings. Let the human
visual system be modeled as a band-pass filter [9] with a
transfer function in polar coordinates

    H(ρ) = 0.05 exp(ρ^0.554),                      ρ < 7
    H(ρ) = exp( -9 [ |log10 ρ - log10 9| ]^2.3 ),  ρ ≥ 7

where ρ = (u² + v²)^{1/2}, u and v being the spatial
frequencies. Both the original and coded images are
preprocessed via this filter to simulate the HVS effect.
The operation of multiplying the DCT of the image by the
spectral mask above and inverse-DCT-transforming the
result is denoted by the U{·} operator in H1-H3.
Some possible measures for the multispectral images are
given as H1, H2 and H3. The multiscale model H4 is too
detailed to be explicated here [10], but it includes
channels which account for perceptual phenomena such as
color, contrast, color-contrast and orientation
selectivity. From these channels features are extracted,
and an aggregate measure of similarity is then formed as
a weighted linear combination of the feature differences.
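The filtering step above can be sketched as follows, taking the DCT index directly as the spatial frequency; the scaling of ρ to cycles/degree depends on viewing conditions and is an assumption in this sketch:

```python
import numpy as np
from scipy.fft import dctn, idctn

def hvs_mask(n):
    """Sample the band-pass transfer function H(rho) of Section 2.6
    on an n x n grid, taking the DCT indices (u, v) as frequencies."""
    u, v = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    rho = np.sqrt(u.astype(float) ** 2 + v.astype(float) ** 2)
    low = 0.05 * np.exp(rho ** 0.554)               # branch for rho < 7
    safe_rho = np.maximum(rho, 1e-9)                # avoid log10(0) at DC
    high = np.exp(-9.0 * np.abs(np.log10(safe_rho) - np.log10(9.0)) ** 2.3)
    return np.where(rho < 7.0, low, high)

def U(img):
    """The U{.} operator: DCT, multiply by the spectral mask, inverse DCT."""
    n = img.shape[0]
    return idctn(dctn(img.astype(float), norm="ortho") * hvs_mask(n),
                 norm="ortho")
```

H3 is then simply the r.m.s. difference between U(original) and U(coded); note the band-pass shape, which attenuates the DC term and peaks near ρ = 9.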
3 RESULTS AND STATISTICAL ANALYSIS
3.1 Data and Methods
In our comparison of image quality measures we used
ten multispectral satellite images compressed at five
different bit rates with the DCT-based JPEG, and
wavelet-based SPIHT compressors. The selected bit
rates were 0.35, 0.65, 0.85, 1.30 and 1.95 bits/pixel,
experimentally determined to reflect five categories of
quality.
3.2 Discriminative Power of The Measures
Since the image quality scores were overall
approximately normally distributed, Analysis of Variance
(ANOVA) could be used to analyze the results. Using both
box plots and ANOVA F-tests, we have compared the groups
of compressed images at different bit rates. Box plots of
scores with a sharp slope and little overlap on the one
hand, and high F scores in the ANOVA tests on the other,
are both indicative of a good quality measure. The ANOVA
results of the image quality measures (out of the 30
tested) in each category, for the pooled data obtained
from the JPEG and SPIHT compression algorithms, are
given in Table 1.
This analysis aims to identify quality measures that are
sensitive in a consistent way to image quality variations
due to compression (F scores with respect to bit rate,
BR-F) and to the type of coder employed (F scores with
respect to compressor type, CT-F). The main findings from
the ANOVA analysis and the box plots are as follows:
1) In each of the six categories the following measures
were found to be the most sensitive: the Multiresolution
Distance measure (D8) in the pixel difference group; the
Image Fidelity measure (C3) in the image similarity
group; the Edge Stability measure (E2) in the edge
distortion group; the Weighted Spectral Distance measure
(S3) in the spectral distortion group; the Matusita
distance (X4) in the context group; and the L2 error with
HVS filtering (H3) in the human visual system group.
Table 1. Two-way ANOVA results for different bit
rates (BR-F) and types of coder (CT-F).

Measure  BR-F   CT-F  | Measure  BR-F   CT-F
D1       18.0   1.22  | E3         2.9   0.07
D2       20.1   1.37  | S1        21.5   0.09
D3       18.9   0.33  | S2       180.8   6.41
D4       20.9   0.05  | S3       212.9   5.75
D5       19.3   0.19  | S4        22.3   0.27
D6       27.4   0.49  | S5       101.2   1.44
D7       10.2   0.24  | S6       101.2   1.40
D8      125.5  47.81  | X1         9.4   0.94
C1       10.2   1.82  | X2         7.8   0.25
C2       19.5   0.61  | X3         7.6   0.26
C3       25.3   1.79  | X4        10.9   0.47
C4       19.1   0.51  | H1        34.5  11.88
C5        8.7   0.01  | H2        33.6  23.17
E1        4.1   0.07  | H3       497.9  187.3
E2        9.0   0.23  | H4       331.0  228.9
2) In particular, if one ranks all 30 measures with
respect to their F statistics, the three most reliable
image quality measures appear to be the Weighted Spectral
Distance, the HVS-filtered L2 measure and the
Multiresolution Distance measure.
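The discriminative-power test can be reproduced in miniature with a one-way ANOVA on hypothetical scores. The numbers below are invented for illustration, and `scipy.stats.f_oneway` stands in for the two-way analysis actually used in the paper:

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical scores of one quality measure for ten images at three of
# the five bit rates; a discriminative measure separates the groups well,
# which shows up as a large between-group F statistic.
rng = np.random.default_rng(1)
scores_035 = 20.0 + rng.normal(0.0, 1.0, 10)   # 0.35 bpp (low quality)
scores_085 = 28.0 + rng.normal(0.0, 1.0, 10)   # 0.85 bpp
scores_195 = 35.0 + rng.normal(0.0, 1.0, 10)   # 1.95 bpp (high quality)

F, p = f_oneway(scores_035, scores_085, scores_195)
```

In the paper the F statistic is computed both with respect to bit rate (BR-F) and with respect to coder type (CT-F) in a two-way design; the one-way case above conveys the same intuition.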
3.3 Relationship Between Quality Measures
We have investigated the correlation between quality
measures to explore how similarly they respond to
compression artifacts, and how they are positioned in a
self-organizing map. Self-Organizing Maps (SOM) can
be useful for the visual assessment of similarities and
correlation present in these measures.
The SOM of the quality measures is obtained by
processing vector measurements over different images.
More specifically, for each measure we calculate a
10-dimensional vector (since ten images were considered).
Furthermore, there are five such vectors, one for each
bit rate considered. The main conclusions from the
observation of the SOMs and their correlations are:
1) The clustering tendency of the pixel difference-based
and correlation measures (D3, D6, D7, D4, D5 and C1, C2,
C4) in the lower right corner of Figure 1 is not
surprising, since they are similar to each other: the
smaller the pixel differences are, the higher the
correlation between the uncompressed and the compressed
images should be.
2) Similarly, the spectral magnitude measures are
correlated with the pixel difference- or correlation-based
measures (S1, S4, D3, D1, C2) in the lower right of the
map, as is to be expected from Parseval's energy
preservation theorem.
3) The spectral phase-based measures (S2, S3, S5) are
located in the upper right corner of the map and are
drastically different from the rest.
4) The multiresolution distance measure, D8, is well-
correlated with HVS based measures (H1, H4, H2), since
the idea behind this measure is to mimic image
comparison by eye more closely, by assigning larger
weight to low resolution components and less to the
detailed high frequency components.
Figure 1. SOM map of image quality measures for the
pooled data obtained from JPEG and SPIHT
compression algorithms.
5) The second category of measures highly correlated
with the HVS-based measures is the context probability-
based measures (H1, H2, H4, X2, X3, X4). The reason is
that both the interband correlation and the contextual
information are taken into account in the computation of
these measures.
6) The proximity between the Pratt measure (E1) and the
maximum difference measures (D4, D5) is meaningful,
since the maximum distortions in reconstructed images
are expected to be near the edges. The constrained
maximum distance or sorted maximum distance
measures can be used in coder designs to preserve the
two dimensional features, such as edges, in reconstructed
images.
7) The spectral phase measures are observed to stand
apart from almost any other measure. It is also known
that phase information in images is more important
under certain circumstances. We have observed that the
spectral phase measures possess high sensitivity and
discriminative power to coding artifacts. In fact, some
of the highest F scores in Table 1 were found with the
spectral measures. Thus the spectral phase measure
deserves more attention in the design of compression
algorithms.
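The mapping used in this section can be reproduced with a minimal from-scratch Kohonen SOM. This is a simplified sketch, not the authors' implementation; the grid size, decay schedules and iteration count are arbitrary choices here:

```python
import numpy as np

def train_som(data, grid=(4, 4), iters=800, lr0=0.5, sigma0=1.5, seed=0):
    """Train a tiny Kohonen map. data: (n_samples, dim) feature vectors,
    e.g. one score vector per quality measure. Returns the (gx, gy, dim)
    codebook of unit weights."""
    rng = np.random.default_rng(seed)
    gx, gy = grid
    w = rng.normal(size=(gx, gy, data.shape[1]))
    ix, iy = np.meshgrid(np.arange(gx), np.arange(gy), indexing="ij")
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # find the best-matching unit for this sample
        d = np.linalg.norm(w - x, axis=2)
        bx, by = np.unravel_index(np.argmin(d), d.shape)
        # exponentially decaying learning rate and neighborhood width
        lr = lr0 * np.exp(-t / iters)
        sigma = sigma0 * np.exp(-t / iters)
        h = np.exp(-((ix - bx) ** 2 + (iy - by) ** 2) / (2.0 * sigma ** 2))
        w += lr * h[:, :, None] * (x - w)
    return w

def winner(w, x):
    """Map one feature vector to its grid cell on the trained map."""
    d = np.linalg.norm(w - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)
```

Measures whose score vectors respond similarly to compression land in nearby cells, which is how the clusters discussed above arise on the map of Figure 1.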
4 CONCLUSIONS
In this work we have presented collectively the major
image quality measures in their multispectral version and
classified them into six categories. The Kohonen map of
the measures, computed with their performance figures at
various compression ratios as the feature vectors, has
been useful in identifying measures that behave
similarly and, conversely, in identifying the ones that
are sensitive to different distortion artifacts in
compressed images.
Furthermore, the statistical investigation of the 30
different measures using a two-way ANOVA has revealed
that the HVS-based measures (H3, H4) are the most
sensitive to coding artifacts while being less dependent
on the image variety. Close competitors to the HVS-based
measures are the multiresolution distance measure (D8)
and the spectral phase-based measures (S2, S3).
In conclusion, the multiresolution distance measure
and/or the spectral phase measure deserve more attention
in the design of compression algorithms.
5 REFERENCES
[1] A. B. Watson, Ed., Digital Images and Human Vision,
Cambridge, MA: MIT Press, 1993.
[2] M. P. Eckert, A. P. Bradley, “Perceptual quality metrics
applied to still image compression”, Signal Processing,
(70), 177 – 200, (1998).
[3] I. Avcibas, B. Sankur, K. Sayood, “Statistical Evaluation
of Quality Measures In Image Compression”, J. of
Electronic Imaging, (in review).
[4] C. Rencher, Methods of Multivariate Analysis, New York,
Wiley, 1995.
[5] T. Kohonen, Self-Organizing Maps. Springer-Verlag,
1995.
[6] G. K. Wallace, “The JPEG Still Picture Compression
Standard”, IEEE Trans. Cons. El., 38(1), 18-34, 1992.
[7] A. Said, W. Pearlman, A New Fast and Efficient Image
Codec Based on Set Partitioning in Hierarchical Trees,
IEEE Trans. Circuits and Systems for Video Technology,
6(3), 243-250, 1996.
[8] K. Popat, R. Picard, “Cluster Based Probability Model
and It’s Application to Image and Texture Processing”,
IEEE Trans. Image Process., 6(2), 268-284, 1997.
[9] M. Eskicioglu, P. S. Fisher, “Image Quality Measures and
Their Performance”, IEEE Trans. Commun., 43(12),
2959-2965, 1995.
[10] T. Frese, C. A. Bouman and J. P. Allebach, “Methodology
for Designing Image Similarity Metrics Based on Human
Visual System Models”, Proceedings of SPIE/IS&T
Conference on Human Vision and Electronic Imaging II,
Vol. 3016, San Jose, CA, 472-483, 1997.
Appendix A
Table A: Expressions for the distortion measures.
Pixel Difference Based Measures

D1 (Mean Square Error):
  D1 = (1 / (K N^2)) Σ_{i,j=0}^{N-1} || C(i,j) - Ĉ(i,j) ||^2

D2 (Error in the L*a*b* space):
  D2 = (1 / N^2) Σ_{i,j=0}^{N-1} [ ΔL*(i,j)^2 + Δa*(i,j)^2 + Δb*(i,j)^2 ]

D3 (Minkowsky measure):
  D3 = (1/K) Σ_{k=1}^{K} [ (1/N^2) Σ_{i,j=0}^{N-1} | C_k(i,j) - Ĉ_k(i,j) |^γ ]^{1/γ}

D4 (Maximum difference):
  D4 = max_{i,j} (1/K) Σ_{k=1}^{K} | C_k(i,j) - Ĉ_k(i,j) |

D5 (Sorted maximum difference):
  D5 = [ (1/r) Σ_{m=1}^{r} Δ_(m)^2 ]^{1/2},
  Δ_(m) the m-th largest value of || C(i,j) - Ĉ(i,j) ||

D6 (Czenakowski distance):
  D6 = (1/N^2) Σ_{i,j=0}^{N-1} [ 1 - 2 Σ_{k=1}^{K} min(C_k(i,j), Ĉ_k(i,j)) / Σ_{k=1}^{K} (C_k(i,j) + Ĉ_k(i,j)) ]

D7 (Difference over a neighborhood):
  D7 = (1 / (2 N_w^2)) Σ_{i,j} [ d_w(C(i,j), Ĉ(i,j))^2 + d_w(Ĉ(i,j), C(i,j))^2 ],
  d_w a pixel-vector distance computed over a w x w neighborhood

D8 (Multiresolution distance):
  D8 = (1/K) Σ_{k=1}^{K} Σ_{r=1}^{R} d_r^k,
  d_r = (1 / 2^{2r}) Σ_{i,j=1}^{2^r} | g_ij - ĝ_ij |,
  g_ij, ĝ_ij the average gray levels of block (i,j) at resolution r

Correlation Based Measures

C1 (Structural content):
  C1 = (1/K) Σ_{k=1}^{K} [ Σ_{i,j} C_k(i,j)^2 / Σ_{i,j} Ĉ_k(i,j)^2 ]

C2 (Normalized cross correlation):
  C2 = (1/K) Σ_{k=1}^{K} [ Σ_{i,j} C_k(i,j) Ĉ_k(i,j) / Σ_{i,j} C_k(i,j)^2 ]

C3 (Image fidelity):
  C3 = 1 - (1/K) Σ_{k=1}^{K} [ Σ_{i,j} ( C_k(i,j) - Ĉ_k(i,j) )^2 / Σ_{i,j} C_k(i,j)^2 ]

C4 (Mean angle):
  C4 = μ_θ = (1/N^2) Σ_{i,j} Θ_ij,
  Θ_ij = cos^{-1} [ ⟨C(i,j), Ĉ(i,j)⟩ / ( ||C(i,j)|| ||Ĉ(i,j)|| ) ]

C5 (Angle spread):
  C5 = [ (1/N^2) Σ_{i,j} ( Θ_ij - μ_θ )^2 ]^{1/2}

Edge Based Measures

E1 (Pratt edge distance):
  E1 = (1 / max(n_d, n_t)) Σ_{i=1}^{n_d} 1 / (1 + a d_i^2),
  n_d, n_t the numbers of detected and ideal edge pixels, d_i the distance
  of the i-th detected edge pixel to the nearest ideal edge pixel, a a
  scaling constant

E2 (Edge stability):
  E2 = (1/N^2) Σ_{i,j=0}^{N-1} ( Q(i,j) - Q̂(i,j) )^2

E3 (Surface curvature):
  E3 = [ (1 / (K N^2)) Σ_{k=1}^{K} Σ_{i,j=0}^{N-1} ( S_k(i,j) - Ŝ_k(i,j) )^2 ]^{1/2}

Spectrum Based Measures

S1 (Spectral magnitude distortion):
  S1 = (1/N^2) Σ_{u,v=0}^{N-1} | M(u,v) - M̂(u,v) |^2

S2 (Spectral phase distortion):
  S2 = (1/N^2) Σ_{u,v=0}^{N-1} | φ(u,v) - φ̂(u,v) |^2

S3 (Weighted spectral distortion):
  S3 = (1/N^2) [ λ Σ_{u,v} | φ(u,v) - φ̂(u,v) |^2 + (1-λ) Σ_{u,v} | M(u,v) - M̂(u,v) |^2 ]

S4, S5, S6 (Block spectral measures):
  S4 = median_l J_M^l,  S5 = median_l J_φ^l,  S6 = median_l J^l

Context Based Measures

X1 (Rate-distortion based): the difference of the rate-distortion
  functions R(p) and R(p̂) of the context p.m.f.'s of the original and
  distorted images

X2:
  X2 = (1/2) ∫ | p - p̂ | dλ

X3:
  X3 = (1/2) ∫ ( p - p̂ )^2 dλ

X4 (Matusita distance):
  X4 = [ ∫ | p^{1/r} - p̂^{1/r} |^r dλ ]^{1/r}

HVS Based Measures

H1 (Normalized absolute error, HVS filtered):
  H1 = (1/K) Σ_{k=1}^{K} [ Σ_{i,j} | U{C_k(i,j)} - U{Ĉ_k(i,j)} | / Σ_{i,j} | U{C_k(i,j)} | ]

H2 (Normalized mean square error, HVS filtered):
  H2 = (1/K) Σ_{k=1}^{K} [ Σ_{i,j} ( U{C_k(i,j)} - U{Ĉ_k(i,j)} )^2 / Σ_{i,j} U{C_k(i,j)}^2 ]

H3 (L2 error, HVS filtered):
  H3 = [ (1 / (K N^2)) Σ_{k=1}^{K} Σ_{i,j=0}^{N-1} ( U{C_k(i,j)} - U{Ĉ_k(i,j)} )^2 ]^{1/2}

H4 (Multiscale model):
  H4 = Σ_i ω_i d_i,
  a weighted linear combination of the channel feature differences d_i [10]