Fig 2
An image from the "TNO UN Camp" database: (a) IR image; (b) V image; (c)–(f) fused images obtained by the LP, RP, DWT, and SIDWT methods; (g) fusion metric performance for each image fusion method.

Source publication
Conference Paper
Full-text available
In this paper, we present a novel objective measure for image fusion based on the codispersion quality index, following the structure of Piella’s metric. The measure quantifies the maximum local similarity between two images over many directions using the maximum codispersion quality index. This feature is not commonly assessed by other measures of si...

Contexts in source publication

Context 1
... Experiment: 32 sets of infrared (IR) and visual (V) images from the "TNO UN Camp" database are used as source images (see Fig. 2). The evaluation results of the metrics for this image set are shown in Fig. 2(g). In all schemes, the metrics assign the highest values to the LP and SIDWT methods and the lowest to RP. The Kendall τ rank correlation coefficient reveals that CQM has reasonable agreement with QW (τ = 0.706), QC (τ = 0.771), and QY (τ = 0.770), ...
Context 2
... Experiment: 32 sets of infrared (IR) and visual (V) images from the "TNO UN Camp" database are used as source images (see Fig. 2). The evaluation results of the metrics for this image set are shown in Fig. 2(g). In all schemes, the metrics assign the highest values to the LP and SIDWT methods and the lowest to RP. The Kendall τ rank correlation coefficient reveals that CQM has reasonable agreement with QW (τ = 0.706), QC (τ = 0.771), and QY (τ = 0.770). These outcomes are consistent with those obtained by Liu et al. ...
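The agreement reported above is the Kendall τ rank correlation between the scores two metrics assign across the fusion results. A minimal sketch in Python, assuming the per-method scores are available as plain lists (the values below are placeholders, not the paper's data):

    from scipy.stats import kendalltau

    # Hypothetical metric scores for the LP, RP, DWT and SIDWT fusion results.
    cqm_scores = [0.91, 0.62, 0.78, 0.90]  # placeholder CQM values
    qw_scores = [0.88, 0.60, 0.75, 0.89]   # placeholder Piella QW values

    # Kendall tau measures how consistently the two metrics rank the methods.
    tau, p_value = kendalltau(cqm_scores, qw_scores)
    print(f"Kendall tau between CQM and QW: {tau:.3f} (p = {p_value:.3f})")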

Similar publications

Conference Paper
Full-text available
Nowadays, rapid changes in technology have a significant influence on learners' educational lives. As the technological devices of information and communication have developed to deliver valuable knowledge quickly, regardless of place and time, novel media presentation formats have emerged. Infographics are an example of this format, which use graphic vis...

Citations

... It measures how similar the fused image is to the source images used to produce it. Quantitative analysis can be carried out in two ways, with and without a reference image [11, 21, 35–51]. ...
Article
Full-text available
Multi-focus image fusion (MIF) uses fusion rules to combine two or more images of the same scene, taken with various focus settings, into a fully focused image. An all-in-focus image refers to a fully focused image that is more informative and useful for visual perception. A high-quality fused image is essential for maintaining the shift-invariance and directional selectivity characteristics of the image. Traditional wavelet-based fusion methods create ringing distortions in the fused image due to a lack of directional selectivity and shift-invariance. In this paper, a classical MIF system based on quarter-shift dual-tree complex wavelet transform (qshiftN DTCWT) and modified principal component analysis (MPCA) in the Laplacian pyramid (LP) domain is proposed to extract the focused image from multiple source images. In the proposed fusion approach, the LP first decomposes the multi-focus source images into low-frequency (LF) and high-frequency (HF) components. Then, qshiftN DTCWT is used to fuse the low- and high-frequency components to produce a fused image. Finally, to improve the effectiveness of the qshiftN DTCWT and LP-based method, the MPCA algorithm is utilized to generate an all-in-focus image. Due to its directionality and shift-invariance, this transform can provide high-quality information in a fused image. Experimental results demonstrate that the proposed method outperforms many state-of-the-art techniques in terms of visual and quantitative evaluations.
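For illustration only, the sketch below implements plain Laplacian-pyramid fusion with a max-absolute-value selection rule; the qshiftN DTCWT subband fusion and the MPCA refinement described in the abstract are omitted, and the decomposition depth is an assumption:

    import cv2
    import numpy as np

    def laplacian_pyramid(img, levels):
        # Gaussian pyramid, then band-pass differences between successive levels.
        gp = [img.astype(np.float64)]
        for _ in range(levels):
            gp.append(cv2.pyrDown(gp[-1]))
        lp = [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=(gp[i].shape[1], gp[i].shape[0]))
              for i in range(levels)]
        lp.append(gp[-1])  # low-frequency residual
        return lp

    def fuse_lp(img_a, img_b, levels=3):
        lp_a = laplacian_pyramid(img_a, levels)
        lp_b = laplacian_pyramid(img_b, levels)
        # Max-absolute rule: keep the coefficient with the larger magnitude.
        fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(lp_a, lp_b)]
        out = fused[-1]
        for band in reversed(fused[:-1]):  # collapse the pyramid
            out = cv2.pyrUp(out, dstsize=(band.shape[1], band.shape[0])) + band
        return out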
... Ojeda et al. also used the CQmax coefficient as an intermediate step to develop a recovery algorithm that reconstructs an original image from two distorted images. These results suggest the CQmax index outperformed the SSIM [12, 13] and CQ indices, although the image fusion results were not compared against traditional statistical parameters. ...
... We call the modified algorithms Algorithm 1 and Algorithm 2. We applied them to the reconstruction and segmentation of images using the BMM estimator of the parameters in model (1), with different window sizes. To compare the images generated by the algorithms and thereby evaluate the performance of the BMM-2D estimator, we calculated three indices used in the literature: the SSIM index (Wang and Bovik 2002), the CQ index (Ojeda et al. 2012), and the CQmax index (Pistonesi et al. 2015; Ojeda et al. 2018). The SSIM index gives a global assessment of the similarity between two images as a function of luminance, contrast, and linear correlation between them. ...
Article
Full-text available
Robust methods have been a successful approach for dealing with contamination and noise in the context of spatial statistics and, in particular, in image processing. In this paper, we introduce a new robust method for spatial autoregressive models. Our method, called BMM-2D, relies on representing a two-dimensional autoregressive process with an auxiliary model to attenuate the effect of contamination (outliers). We compare the performance of our method with existing robust estimators and the least squares estimator via a comprehensive Monte Carlo simulation study, which considers different levels of replacement contamination and window sizes. The results show that the new estimator is superior to the other estimators, both in accuracy and precision. An application to image filtering highlights the findings and illustrates how the estimator works in practical applications.
... They are classified into four groups according to their characteristics: information-theory-based metrics, image-feature-based metrics, human-perception-inspired fusion metrics, and image-structural-similarity-based metrics [4]. In the context of measures based on image structural similarity, Piella's metric [7], Cvejic's metric [1], Yang's metric [13], and the Codispersion Fusion Quality metric [8] have been developed. These fusion performance measures are based on the Universal Image Quality index (Q) [11]. ...
... Following the structure of Piella's metric (11), Pistonesi et al. [8] introduced an objective measure for image fusion. This fusion measure, labeled CQM, is based on a modification of the CQ index called the CQmax index, ...
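As a rough orientation for the structure referred to here, the simple (unweighted) variant of Piella's measure computes, within each local window, the Universal Image Quality index Q between each source image and the fused image, and mixes the two values by relative local variance. A minimal sketch, assuming non-overlapping 8 x 8 windows rather than a sliding scheme:

    import numpy as np

    def uiqi(x, y, eps=1e-12):
        # Universal Image Quality index Q for a pair of local windows.
        mx, my = x.mean(), y.mean()
        cxy = ((x - mx) * (y - my)).mean()
        return 4 * cxy * mx * my / ((x.var() + y.var()) * (mx**2 + my**2) + eps)

    def piella_qw(a, b, f, win=8):
        # Weight each source's local Q against the fused image by its saliency,
        # taken here to be the local variance.
        scores = []
        for i in range(0, a.shape[0] - win + 1, win):
            for j in range(0, a.shape[1] - win + 1, win):
                wa, wb, wf = (im[i:i + win, j:j + win].astype(float) for im in (a, b, f))
                sa, sb = wa.var(), wb.var()
                lam = sa / (sa + sb) if sa + sb > 0 else 0.5
                scores.append(lam * uiqi(wa, wf) + (1 - lam) * uiqi(wb, wf))
        return float(np.mean(scores))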
... In the first example we performed a 3-level decomposition, and in the second and third a 4-level decomposition was used. To ensure a fair comparison, the algorithms for the fusion quality metrics used the same settings that appear in [7], [1], [13], and [8]. For Piella's, Cvejic's, and the Codispersion Fusion Quality metrics we used the same window size, 8 × 8 pixels. ...
... Later, we inspected and compared the performance of these estimators in Algorithms 1 and 2 on contaminated images. To compare the images generated by the algorithms and, therefore, the performance of the different estimators, we calculated three indices used in the literature: the SSIM index [43], the CQ index [28], and the CQmax index [30]. Next, we present two numerical experiments using the image "Lenna", taken from the USC-SIPI image database (http://sipi.usc.edu/database/). ...
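A minimal sketch of one such comparison, assuming scikit-image's SSIM implementation and synthetic stand-ins for the original and restored images:

    import numpy as np
    from skimage.metrics import structural_similarity

    rng = np.random.default_rng(0)
    original = rng.random((64, 64))                              # stand-in original
    restored = original + 0.05 * rng.standard_normal((64, 64))  # noisy restoration

    score = structural_similarity(
        original, restored, data_range=restored.max() - restored.min())
    print(f"SSIM between original and restored image: {score:.3f}")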
Preprint
Robust methods have been a successful approach for dealing with contamination and noise in image processing. In this paper, we introduce a new robust method for two-dimensional autoregressive models. Our method, called BMM-2D, relies on representing a two-dimensional autoregressive process with an auxiliary model to attenuate the effect of contamination (outliers). We compare the performance of our method with existing robust estimators and the least squares estimator via a comprehensive Monte Carlo simulation study, which considers different levels of replacement contamination and window sizes. The results show that the new estimator is superior to the other estimators, both in accuracy and precision. An application to image filtering highlights the findings and illustrates how the estimator works in practical applications.
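For orientation, the least-squares baseline against which BMM-2D is compared can be sketched for a first-order two-dimensional autoregressive model; the causal neighborhood and model order below are assumptions, and the robust auxiliary-model step itself is not reproduced:

    import numpy as np

    def fit_ar2d_ls(x):
        # Least-squares fit of X[i,j] = phi1*X[i-1,j] + phi2*X[i,j-1] + eps[i,j].
        y = x[1:, 1:].ravel()        # responses
        design = np.column_stack([
            x[:-1, 1:].ravel(),      # upper neighbor X[i-1,j]
            x[1:, :-1].ravel(),      # left neighbor  X[i,j-1]
        ])
        phi, *_ = np.linalg.lstsq(design, y, rcond=None)
        return phi  # estimated (phi1, phi2)

Replacement outliers in the input pull these least-squares estimates arbitrarily far, which is the failure mode a robust estimator such as BMM-2D is designed to attenuate.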
Article
Full-text available
Image fusion of satellite sensors can generate a high-resolution multi-spectral image from a high-spatial-resolution panchromatic image and a low-spatial-resolution multi-spectral image, for feature extraction and target recognition tasks such as mapping enclosure seines and floating rafts. However, there is currently no clear and definite image fusion method for extracting the distribution of different aquaculture areas from high-resolution satellite images. This study uses three types of high-resolution remote sensing images, GF-1 (Gaofen-1), GF-2 (Gaofen-2), and WV-2 (WorldView-2), covering the raft and enclosure-seine aquacultures in Xiangshan Bay, China, to evaluate panchromatic and multispectral image fusion techniques and determine which is best. The study applied the PCA (principal component analysis), GS (Gram–Schmidt), and NNDiffuse (nearest neighbor diffusion) algorithms to the panchromatic and multispectral image fusion of GF-1, GF-2, and WV-2. Two quantitative methods are used to evaluate the fusion effect. The first uses seven statistical parameters: gray mean value, standard deviation, information entropy, average gradient, correlation coefficient, deviation index, and spectral distortion. The second is the CQmax index. Comparing the evaluation results from these seven common statistical indicators with those from the CQmax index shows that the CQmax index can be applied to the evaluation of image fusion effects in different aquaculture areas. For the floating-raft culture area the conclusions agree: NNDiffuse was optimal for the GF-1 and GF-2 data, and PCA was optimal for the WV-2 data. For the enclosure-seine culture area the quantitative evaluations do not agree, showing that no single method is best for all areas; careful evaluation and selection of the most applicable image fusion method are therefore required according to the study area and sensor images.
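Three of the seven statistical parameters named above (information entropy, average gradient, and correlation coefficient) are straightforward to sketch; 8-bit grayscale input is assumed, and the remaining indicators follow the same pattern:

    import numpy as np

    def entropy(img):
        # Shannon entropy of the gray-level histogram (img assumed uint8).
        hist = np.bincount(img.ravel(), minlength=256) / img.size
        p = hist[hist > 0]
        return float(-(p * np.log2(p)).sum())

    def average_gradient(img):
        # Mean magnitude of the local intensity gradient (image sharpness).
        gx, gy = np.gradient(img.astype(float))
        return float(np.mean(np.sqrt((gx**2 + gy**2) / 2)))

    def correlation(img_a, img_b):
        # Pearson correlation coefficient between two images.
        return float(np.corrcoef(img_a.ravel(), img_b.ravel())[0, 1])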
Article
In the last decade, image quality indices have received considerable attention as a way to quantify the dissimilarity between two images. The codispersion coefficient, commonly used in spatial statistics to address the association between two processes, has also been used for this purpose. Here we introduce an image quality index (CQmax) based on codispersion. This new coefficient is a directional evaluation of spatial association and consists of computing the maximum codispersion over a finite set of spatial lags on the plane, which also yields the direction associated with the maximum codispersion. From the CQmax index, a pseudo-metric is defined that can be used as a cost functional for related optimization problems. We carry out Monte Carlo simulations to explore the performance of the proposed index and its capability to detect directional contamination. Additionally, we introduce a novel algorithm to restore directionally contaminated images and present an application with real data in the context of image fusion.
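Following the verbal description in this abstract, a minimal sketch of the codispersion coefficient and a CQmax-style search over a small set of planar lags; the lag set below is an assumption, and the paper's exact computation may differ:

    import numpy as np

    def shifted_diff(z, di, dj):
        # Differences Z(s + h) - Z(s) over the region where both terms exist.
        n, m = z.shape
        i0, i1 = max(di, 0), n + min(di, 0)
        j0, j1 = max(dj, 0), m + min(dj, 0)
        return z[i0:i1, j0:j1] - z[i0 - di:i1 - di, j0 - dj:j1 - dj]

    def cq(x, y, h):
        # Codispersion coefficient between images x and y at spatial lag h.
        dx, dy = shifted_diff(x, *h), shifted_diff(y, *h)
        return float((dx * dy).sum() / np.sqrt((dx**2).sum() * (dy**2).sum()))

    def cq_max(x, y, lags=((1, 0), (0, 1), (1, 1), (1, -1))):
        # Maximum codispersion over the lag set, plus the direction attaining it.
        values = {h: cq(x, y, h) for h in lags}
        best = max(values, key=values.get)
        return values[best], best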