Fig 5 - uploaded by Jemi Jeba
Fusing process based on fuzzy index[6]

Source publication
Article
Full-text available
This paper deals with various multi-focus image fusion techniques in image processing. It is based on the perspective that a blurred image looks peculiar because of the degradation of high-frequency information, which is generally caused by the camera optics. The reason is that cameras suffer from a limited depth of field, and this causes the image to be...

Similar publications

Preprint
Full-text available
Document images captured by digital cameras may often be warped and distorted due to different camera angles or document surfaces. A robust technique is needed to correct this kind of distortion. Research on document dewarping suffers from the limited availability of public benchmark datasets. In recent times, deep learning based approach...

Citations

... Blur depends on the distance from the focal plane. By combining multiple partially focused images, an all-in-focus image can be obtained from a multi-focus set [7,33]. Clearly displaying objects with different focal lengths together has been an active topic in many applications, such as computer vision, digital cameras, digital photography, remote sensing, target recognition, aviation, and medical diagnosis [4,21,29,45]. ...
Article
Full-text available
A new fusion method, Multi-Focus Image Fusion based on the Discrete Wavelet Transform with a Deep Convolutional Neural Network (MFIF-DWT-CNN), is presented to reduce spatial artifacts and blurring effects in edge details and to increase the robustness of multi-focus image fusion. The main purpose of the MFIF-DWT-CNN approach is to create a new merged image by collecting the required features from the source images. With this approach, information focused in the individual images is combined into a single, clearer image. Within the MFIF-DWT-CNN pipeline, the DWT is applied to the image pairs, and the resulting images are then fed to the CNN architecture. To evaluate the proposed MFIF-DWT-CNN method, QMI, QG, QY, and QCB evaluations were carried out on a public data set. The experimental results show that the proposed method outperforms the other methods on the relevant metrics, demonstrating its effectiveness.
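As a rough illustration of the DWT half of such a pipeline (the CNN stage is omitted), the sketch below fuses two images with a single-level Haar transform, averaging the approximation subband and keeping the larger-magnitude coefficients in the detail subbands. The Haar implementation and fusion rules are generic illustrations, not the MFIF-DWT-CNN method itself.

```python
import numpy as np

def haar_dwt2(x):
    # Single-level 2D orthonormal Haar transform (even-sized input assumed).
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    ll = (a + b + c + d) / 2   # approximation
    lh = (a - b + c - d) / 2   # horizontal detail
    hl = (a + b - c - d) / 2   # vertical detail
    hh = (a - b - c + d) / 2   # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    # Exact inverse of haar_dwt2.
    h, w = ll.shape
    x = np.empty((2 * h, 2 * w))
    x[0::2, 0::2] = (ll + lh + hl + hh) / 2
    x[0::2, 1::2] = (ll - lh + hl - hh) / 2
    x[1::2, 0::2] = (ll + lh - hl - hh) / 2
    x[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return x

def dwt_fuse(img1, img2):
    # Average the approximations; keep the larger-magnitude detail coefficients.
    c1, c2 = haar_dwt2(img1), haar_dwt2(img2)
    fused = [(c1[0] + c2[0]) / 2]
    for d1, d2 in zip(c1[1:], c2[1:]):
        fused.append(np.where(np.abs(d1) >= np.abs(d2), d1, d2))
    return haar_idwt2(*fused)
```

A learned method such as MFIF-DWT-CNN would replace the hand-coded max-magnitude rule with a network that decides which coefficients to keep.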
... Pixel-level image fusion refers to generating a fused image whose pixel values are based on the pixel values of the source images. Feature-based fusion requires the extraction or segmentation of various features of the source images [5]. The main task of the algorithm is to determine whether a pixel belongs to an object by comparing it with the average intensity of the pixels classified as a region, where the size of the acceptance interval depends on an intensity factor that takes into account the variability of the environment [8]. ...
Article
Full-text available
In remote sensing applications, the increase in spatial and spectral resolutions has made time complexity a vital concern for processing and delivering solutions in a timely fashion. Remote sensors that acquire high-resolution image data often miss geometrical alignments and spectral stabilization because of their dynamic acquisition routines, which leads to loss of information and degraded quality within a single region of interest. This motivated the need for data fusion to obtain quality products in the remote sensing domain. Leaving noise in the source data compromises the overall quality of the solution. In the proposed method, to reduce the negative impact of noise on the image, a confined filter function is used that holds back the noise while reconstructing the image. In some cases, carrying out image fusion with the least significant spectral band values also leads to degraded output; in such conditions, the proposed method chooses highly informative bands by paying careful attention to the spectral values in each band. The anticipated outcomes of these methods are a significant reduction in computational complexity and insensitivity to noise. The fusion process is verified to validate image sets for smart combinations of bands, and the image pixels are treated as components when organizing the fused spectral quality. The proposed HSI-based MIRAFS method is compared with state-of-the-art fusion models, and the results show a prominent improvement in the quality of the fused images when high spectral values are chosen. In MIRAFS, the resultant fused image is also checked to confirm that spectral degradation is minimized at the amplified resolution, conserving as much spectral information as possible.
... Pixel-averaging image fusion is the simplest technique; it is often associated with a reduction in the contrast of the fused image. To overcome the side effects of the pixel-averaging method, many fusion techniques have been developed based on multiresolution [6][7], multi-scale [8], and statistical signal processing. Some researchers have also explored fusing images in a transformed domain, such as methods based on the Discrete Cosine Transform (DCT), which aim to enhance the contrast of the resulting fused image [9][10]. ...
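The contrast loss mentioned in this excerpt is easy to see in a minimal pixel-averaging sketch (a generic illustration, not any particular cited method):

```python
import numpy as np

def average_fuse(img1, img2):
    # Simplest pixel-level fusion: per-pixel mean of the two sources.
    # Where the sources disagree, details partially cancel, which is
    # exactly the contrast reduction the averaging method is known for.
    return (img1.astype(float) + img2.astype(float)) / 2.0
```

The standard deviation of the average never exceeds the larger of the two source deviations, which is why multiresolution and transform-domain rules that *select* rather than average coefficients were developed.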
... where h_p is a normalizing factor, and the filter kernel ϕ_{p,q}(y_p, y_q) is formed by combining the domain and range kernels as ϕ_{p,q}(y_p, y_q) = w_{p−q} r(y_p − y_q) (6), where r(y_p − y_q) is the range kernel and w_{p−q} represents the domain kernel. ...
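Assuming Gaussian choices for both kernels (a common choice, though not stated in the excerpt), equation (6) can be sketched for a 1-D signal as follows; `radius`, `sigma_s`, and `sigma_r` are illustrative parameters.

```python
import numpy as np

def bilateral_1d(y, radius=2, sigma_s=1.5, sigma_r=0.2):
    # phi_{p,q}(y_p, y_q) = w_{p-q} * r(y_p - y_q), with a Gaussian
    # domain kernel w and a Gaussian range kernel r; h_p normalizes.
    y = np.asarray(y, dtype=float)
    out = np.empty_like(y)
    for p in range(len(y)):
        q = np.arange(max(0, p - radius), min(len(y), p + radius + 1))
        w = np.exp(-((p - q) ** 2) / (2 * sigma_s ** 2))        # domain kernel
        r = np.exp(-((y[p] - y[q]) ** 2) / (2 * sigma_r ** 2))  # range kernel
        phi = w * r
        h_p = phi.sum()            # normalizing factor: sum of kernel weights
        out[p] = (phi * y[q]).sum() / h_p
    return out
```

The range kernel suppresses contributions from pixels whose values differ greatly from y_p, which is what makes the filter edge-preserving.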
Article
Full-text available
Image fusion aims at increasing the information content of the composite image. However, many existing transform-domain image fusion methods fail to properly enhance the finer details of the input images, and the perceptual quality of the fused image often suffers as a result of the fusion process. This paper describes and evaluates a novel image pre-processing technique based on the application of a block Toeplitz matrix as part of a Discrete Cosine Transform (DCT)-based fusion method. The proposed DCT implementation seeks to enhance the finer details of all input images prior to fusion. A post-fusion stage aimed at adjusting image contrast and improving smoothness is then added to improve the quality of the fused image. The proposed method is applied to a set of medical images, and its results are evaluated using objective performance measures such as entropy, standard deviation, and root-mean-squared error. These indicators are then compared to values obtained from other existing transform-based fusion methods, showing a significant performance improvement across all experiments.
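For context, a minimal block-DCT fusion can be sketched as below. This is a generic illustration that selects, per 8×8 block, the source block with the higher AC-coefficient energy (a crude sharpness proxy); it is not the Toeplitz pre-processing method described in the abstract.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix.
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0] /= np.sqrt(2.0)
    return c

def dct_block_fuse(img1, img2, bs=8):
    # For each bs x bs block, keep the source block whose DCT AC energy
    # (all coefficients except the DC term) is larger.
    C = dct_matrix(bs)
    out = np.empty_like(img1, dtype=float)
    for i in range(0, img1.shape[0], bs):
        for j in range(0, img1.shape[1], bs):
            b1 = img1[i:i+bs, j:j+bs].astype(float)
            b2 = img2[i:i+bs, j:j+bs].astype(float)
            e1 = (C @ b1 @ C.T) ** 2
            e2 = (C @ b2 @ C.T) ** 2
            out[i:i+bs, j:j+bs] = (
                b1 if e1.sum() - e1[0, 0] >= e2.sum() - e2[0, 0] else b2)
    return out
```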
... Recently, a number of fusing methods have been proposed by Yu, Li and others [2,17,19,20,34,37,38]. Anish and Jebaseeli [2] have presented a survey of multi-focus image fusion approaches. In addition, Zhao et al. [38], Kong et al. [17], and Zhang et al. [37] have presented image fusion algorithms for different scenarios. ...
Article
Full-text available
Modern developments in imaging technology have enabled easy access to an innovative type of sensor-based network: camera or Visual Sensor Networks (VSN). Nevertheless, more sensor data sources bring the problem of information overload. To solve this problem, research has been carried out on techniques to counteract the data overload caused by sensors without losing useful data. The aim of fusion in each application is to combine images from several sensors, which decreases the amount of input image data while producing an image with more accurate information. This paper proposes a noisy feature removal scheme for multi-focus image fusion that combines the decision information of optimized individual features. The proposed scheme is developed in two main steps. In the first step, diverse types of features are extracted from each block of the input blurred images. The useful information in these individual features indicates which image block is more focused among the corresponding blocks of the source images. Noisy features are then removed using a binary Genetic Grey Wolf Optimizer (GGWO) algorithm. In the second step, an ensemble decision based on the individual features is employed to fuse the blurred images. The experiments are evaluated on different multi-focus images and reveal that the proposed GGWO-based method produces better visual quality than other methods.
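A toy version of the block-wise, multi-feature decision step looks like this. The features (variance and gradient energy) and the simple majority vote are illustrative stand-ins; the GGWO feature-selection stage of the cited method is omitted.

```python
import numpy as np

def grad_energy(b):
    # Energy of the gradient: a simple block-focus feature.
    gy, gx = np.gradient(b.astype(float))
    return float(np.sum(gx ** 2 + gy ** 2))

def block_vote_fuse(img1, img2, bs=8):
    # For each block, every focus feature casts a vote for the sharper
    # source; the block winning the vote goes into the fused image.
    features = [np.var, grad_energy]
    out = np.empty_like(img1, dtype=float)
    for i in range(0, img1.shape[0], bs):
        for j in range(0, img1.shape[1], bs):
            b1 = img1[i:i+bs, j:j+bs].astype(float)
            b2 = img2[i:i+bs, j:j+bs].astype(float)
            votes = sum(1 if f(b1) >= f(b2) else -1 for f in features)
            out[i:i+bs, j:j+bs] = b1 if votes >= 0 else b2
    return out
```

In the cited scheme, an optimizer would first discard features whose votes are unreliable (noisy) before the ensemble decision is taken.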
... Generally, the resultant fused image requires less memory space for storage and less energy consumption during transmission. Further, it contains the required information with better image quality for further processing [3]. In particular, an image fusion algorithm [4] extracts information from the source images such that the fused image provides better information for human or machine perception than any of the input images. ...
... But it achieves better effects only when the larger objects are in focus [3,4]. References [5], [6], and [13] present a self-adapting selection algorithm based on the focus area of the fish swarm algorithm, breaking through the limitation of traditional focused-region selection algorithms [5,6,13]. ...
Article
Full-text available
The selection window used in traditional auto-focus window selection algorithms concentrates mainly on the center of the image, so randomly distributed cells are often out of focus. To address this problem, after analyzing the performance of selection algorithms with different focusing windows, a modified auto-focus window algorithm based on the traditional fish swarm algorithm is proposed: the fish swarm window selection algorithm. Comparative analysis of the images in the focus windows obtained by the traditional and the improved fish swarm algorithms shows that the focus window of the modified algorithm can contain more cells and target bodies. Specifically, with the fish swarm window selection algorithm, the amount of high-frequency image content in the selection window greatly increases, the optimal solution converges to 0.999, and the estimated sharpness of the obtained microscopic cell images also improves, with high precision and high focusing accuracy for the improved algorithm.
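The high-frequency content of a candidate focus window is typically scored with a gradient-based sharpness measure. The Tenengrad-style sketch below is a generic illustration (not the fish swarm algorithm itself): it scores a window by its summed squared gradient magnitude.

```python
import numpy as np

def window_sharpness(img, top, left, h, w):
    # Tenengrad-style score: sum of squared gradient magnitudes
    # inside the candidate focus window.
    win = img[top:top + h, left:left + w].astype(float)
    gy, gx = np.gradient(win)
    return float(np.sum(gx ** 2 + gy ** 2))
```

A window-selection scheme (fish swarm or otherwise) would then search window positions to maximize this score, pulling the focus window toward textured regions such as cells.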
... Image fusion means the joining of unclear input images of a particular scene to form a sharp final image of the same scene. Fusion methods are divided, according to how the images are fused, into spatial-level and transform-level fusion [1] [4] [7]. At the spatial level, fusion is performed by combining the pixels of two or more images to create a final sharp image. ...
... The usual SFF methods compute the optimal focus and its depth by applying a focus measure (FM) operator to every area in each frame within a sequence of images and seeking the optimally focused part along the sequence. The fully focused image can then be constructed by merging the optimally focused parts [4,5]. ...
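The select-the-best-focus idea can be sketched with a generic focus measure (a squared discrete Laplacian here, standing in for the FM operator) applied across a focal stack; both the measure and the per-pixel selection rule are illustrative simplifications.

```python
import numpy as np

def focus_measure(img):
    # Squared discrete Laplacian as a simple per-pixel focus measure
    # (np.roll wraps at the borders; fine for an illustration).
    f = img.astype(float)
    lap = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
           np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f)
    return lap ** 2

def all_in_focus(stack):
    # stack: (n, h, w) focal stack. Pick, per pixel, the frame where the
    # focus measure peaks; return the composite and the depth-index map.
    fm = np.stack([focus_measure(img) for img in stack])
    depth = np.argmax(fm, axis=0)                 # (h, w) best-frame index
    composite = np.take_along_axis(stack, depth[None, :, :], axis=0)[0]
    return composite, depth
```

In SFF proper, the `depth` map is the quantity of interest (shape recovery); the all-in-focus composite is the multi-focus fusion by-product.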
Article
Full-text available
To generate an all-in-focus image, Shape-From-Focus (SFF) is used. The key to SFF is finding the optimal focus depth at each pixel or area of an image within a sequence of images. In this paper, two new focus measure operators are suggested for use in SFF. The suggested operators are based on modifications of the state-of-the-art tool for time-frequency analysis, the Stockwell Transform (ST). The first operator depends on the Discrete Orthogonal Stockwell Transform (DOST), which represents a pared-down version of the ST, while the other depends on the Pixelwise DOST (P-DOST), which provides a local spatial-frequency description. Both operators improve computational complexity and memory demands compared with an operator based on the ST. A comparison between the suggested operators and ST-based operators shows that the suggested operators' performance is analogous to that of the ST.
... A comparative study of the different operators is presented in [Pertuz et al., 2013a,b]. Multi-focus image fusion methods, which aim to reconstruct the fully focused image of a scene, rely on similar blur measurement operators [Anish and Jebaseeli, 2012; Li and Yang, 2008]. Two categories can be distinguished: operators applied in the spatial domain (directly to the image) and those applied in the frequency domain. ...
Thesis
Most vision-based metrology devices are equipped with stereo optical systems or so-called active external measurement systems. The three-dimensional reconstruction methods (Structure-from-Motion, Shape-from-Shading) applicable to monocular vision generally suffer from scale ambiguity. This ambiguity is inherent to the image acquisition process, which implies the loss of the scene's depth information: the relationship between the size of objects and the distance of the viewpoint is equivocal. This study addresses the estimation of the absolute scale of a scene by monofocal passive vision. It aims to provide a purely vision-based solution to the scale ambiguity for a monocular optical system whose internal parameters are fixed. It is intended in particular for the measurement of lesions in colonoscopy. This endoscopic procedure (from the Greek endon: within, and skopia: observation) allows exploration and intervention within the colon using a flexible device (colonoscope) that generally carries a monofocal optical system. In this context, the size of neoplasia (abnormal growths of tissue) is an essential diagnostic criterion. It is, however, difficult to evaluate, and visual estimation errors can lead to the definition of inappropriate surveillance intervals. The need to design a system for estimating the size of colonic lesions is the main motivation of this study. In the first part of this manuscript, we give a synoptic state of the art of the various vision-based measurement systems in order to position our study in this context. We then present the monofocal camera model as well as its associated image formation model, which is the essential basis of the work carried out in this thesis. The second part of the manuscript presents the major contribution of our study.
We first give a detailed state of the art of 3D reconstruction methods based on the analysis of optical blur information (DfD (Depth-from-Defocus) and DfF (Depth-from-Focus)). These are passive approaches that, under certain camera control constraints, make it possible to resolve the scale ambiguity. They directly inspired the measurement system based on extracting the sharpness breakpoint presented in the following chapter. We consider a video corresponding to an approach movement of the optical system toward a region of interest whose dimensions we wish to estimate. Our measurement system extracts the sharp/blurred transition point within this video. We demonstrate that, in the case of a monofocal optical system, this unique point corresponds to a reference depth that can be calibrated. Our system is composed of two modules. The BET (Blur Estimating Tracking) module performs the joint tracking and estimation of the focus information of a region of interest within a video. The BMF (Blur Model Fitting) module robustly extracts the sharpness breakpoint by fitting an optical blur model. An evaluation of our system applied to the estimation of colonic lesion size demonstrates its feasibility. The last chapter of this manuscript is devoted to a prospective extension of our approach with a generative method. We present, in the form of a preliminary theoretical study, an NRSfM (Non-Rigid Structure-from-Motion) method enabling the reconstruction at scale of deformable surfaces. It allows the joint estimation of dense depth maps as well as the image of the flattened, fully focused surface. (...)
... Many multifocus fusion algorithms have been proposed in recent years. Basically, these fusion algorithms can be categorized into two groups: spatial domain fusion and transform domain fusion [3]. For spatial domain fusion, a new image is fused by directly selecting different regions from source images. ...
Article
Full-text available
For multi-focus image fusion in the spatial domain, sharper blocks from different source images are selected to fuse a new image. The block size significantly affects the fusion results, and a fixed block size is not applicable across various multi-focus images. In this paper, a novel multi-focus image fusion algorithm using biogeography-based optimization is proposed to obtain the optimal block size. The sharper blocks of each source image are first selected by the sum-modified Laplacian and a morphological filter to construct an initial fused image. The proposed algorithm then uses the migration and mutation operations of biogeography-based optimization to search for the optimal block size according to a fitness function based on spatial frequency. A chaotic search is adopted during the iterations to improve optimization precision. The final fused image is constructed using the optimal block size. Experimental results demonstrate that the proposed algorithm achieves good quantitative and visual evaluations.
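The spatial-frequency fitness used here has a standard definition: row and column frequency (RMS of horizontal and vertical first differences) combined in quadrature. A minimal sketch of that fitness function (a common variant; normalization details differ slightly between papers):

```python
import numpy as np

def spatial_frequency(img):
    # SF = sqrt(RF^2 + CF^2), where RF/CF are the RMS of horizontal
    # and vertical first differences -- a common block-sharpness fitness.
    f = np.asarray(img, dtype=float)
    rf = np.sqrt(np.mean(np.diff(f, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(f, axis=0) ** 2))  # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))
```

In the cited algorithm, biogeography-based optimization searches over block sizes, with each candidate size scored by the spatial frequency of the fused image it yields.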