Fig 8 - uploaded by Vinod Kumar
SPECT-MRI fusion results of cases 2 to 5: a MRI images, b SPECT images. Fused images by c m1 [30], d m2 [26], e m3 [16], f m4 [25], g m5 [22], and h proposed method m6 


Source publication
Article
Full-text available
Purpose Multimodality medical image fusion supports better visualization of complementary information given by different medical imaging modalities. This helps the radiologist in the precise diagnosis of disease and in treatment planning. The main purpose of this research is to design a unified framework for fusion of different anatomical imaging modal...

Contexts in source publication

Context 1
... during the fusion process affects the spectral content. However, the fused image by the proposed method m6, presented in Fig. 7h, retained the functional information. Anatomical details in the fused images by m4 and m5 are smoothed. The proposed method m6 presented the anatomical details with good contrast. Fused images of cases 2 to 5 are presented in Fig. 8. In the fused images by methods m1 to m5, spectral distortion is evident in the non-functional area of the SPECT images, whereas in the fused image by the proposed method m6, shown in Fig. 8h, pathological tissues and other tissues are presented ...
Context 2
... Anatomical details in the fused images by m4 and m5 are smoothed. The proposed method m6 presented the anatomical details with good contrast. Fused images of cases 2 to 5 are presented in Fig. 8. In the fused images by methods m1 to m5, spectral distortion is evident in the non-functional area of the SPECT images, whereas in the fused image by the proposed method m6, shown in Fig. 8h, pathological tissues and other tissues are presented ...

Similar publications

Article
Full-text available
Fusion of a functional image with an anatomical image provides additional diagnostic information. It is widely used in diagnosis, treatment planning, and follow-up in oncology. A functional image is a low-resolution pseudo-color image representing the uptake of a radioactive tracer that gives important metabolic information. Whereas, an anatomical i...
Article
Full-text available
Image fusion is the process of combining relevant information from two or more images into a single image. The resulting image contains more information as compared to individual images. In this system, we are proposing a new image fusion method by using a technique called framelet transform. This method consists of two phases; First is the frame s...
Article
Full-text available
High-dynamic range imaging technology is an effective method to improve the limitations of a camera’s dynamic range. However, most current high-dynamic imaging technologies are based on image fusion of multiple frames with different exposure levels. Such methods are prone to various phenomena, for example motion artifacts, detail loss and edge effe...

Citations

... Medical imaging modalities such as computed tomography (CT) and magnetic resonance (MR) imaging techniques provide a visual picture of internal human organs. CT and MR images help radiologists in clinical investigation, diagnosis and treatment procedures of various diseases or injuries [2,7,14]. CT images give rich bony details but lack cerebrospinal fluid (CSF) and parenchyma information [26,33]. ...
Article
Full-text available
A novel hybrid fusion scheme is proposed by employing the non-subsampled shearlet transform (NSST) and stationary wavelet transform (SWT). In the preliminary stage, multimodal input images are decomposed comprehensively by NSST. The regional energy as an activity parameter is used to fuse the high-frequency sub-band coefficients of NSST. The approximation sub-band of NSST is fused using SWT. The maximum entropy of the squared coefficients and regional energy as the activity parameters are employed to fuse the LF and HF sub-bands of SWT, respectively. This step is explicitly performed for preserving contrast, edges, texture and brightness information within the image. The final output is obtained by employing the inverse NSST. The research is verified both qualitatively and quantitatively from the fused images. It is seen that the suggested methodology obtained enriched results for the textural details such as divergence, boundaries, consistency and brightness of any of the brain intracranial masses (tumor, stroke, haemorrhage or even fungal infection). It is further observed that the morphology of the associated intracranial mass and the minimum path distance during invasive surgery along with the bone are prerequisites for the radiologist and are better depicted by the present method (as per the radiologist's perception). The fusion performance parameters of the suggested technique are compared with seven existing methods. The quantitative evaluation of the proposed algorithm is done using mutual information, edge information index, entropy, standard deviation and mean. The parametric values obtained on any of the publicly available datasets or real-time datasets gave assuring outcomes.
... There are situations when a single diagnostic test might not be sufficient, necessitating additional multimodal medical imaging. Even though a variety of different solutions are available, this method of processing medical images is still widely utilized [6][7][8]. Combining two or more medical image components makes a greater number of newly created medical images available for use. ...
Article
Full-text available
Imaging data fusion is becoming a bottleneck in clinical applications and translational research in medical imaging. This study aims to incorporate a novel multimodality medical image fusion technique into the shearlet domain. The proposed method uses the non-subsampled shearlet transform (NSST) to extract both low- and high-frequency image components. A novel approach is proposed for fusing low-frequency components using a modified sum-modified Laplacian (MSML)-based clustered dictionary learning technique. In the NSST domain, directed contrast can be used to fuse high-frequency coefficients. Using the inverse NSST method, a multimodal medical image is obtained. Compared to state-of-the-art fusion techniques, the proposed method provides superior edge preservation. According to performance metrics, the proposed method is shown to be approximately 10% better than existing methods in terms of standard deviation, mutual information, etc. Additionally, the proposed method produces excellent visual results regarding edge preservation, texture preservation, and more information.
... Image fusion is a common strategy for accomplishing this aim since the combined output preserves MRI image location information and the molecular activity information from PET. So, this research focuses primarily on the challenge of merging MRI-PET multimodal medical images [6]. ...
... The multi-scaling transformation (MST) approach is a popular modeling paradigm for medical image fusion [6]. The MST methods include three stages: decomposition, selection of fusion rule, and reconstruction. ...
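The three MST stages named above (decomposition, selection of fusion rule, reconstruction) can be sketched in a few lines. The following is a minimal, hypothetical illustration, not the cited NSST-based method: it uses 2x2 block averaging as a stand-in transform, an averaging rule for the approximation layer, and a max-absolute rule for the detail layer.

```python
import numpy as np

def decompose(img):
    """One-level split into an approximation (low) and a detail residual,
    using 2x2 block averaging as a stand-in for a real multi-scale transform."""
    h, w = img.shape
    low = img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    low_up = np.kron(low, np.ones((2, 2)))  # upsample back to full size
    return low, img - low_up

def fuse(img_a, img_b):
    """Stage 1: decompose; stage 2: fusion rules (average the approximations,
    max-absolute the details); stage 3: reconstruct."""
    low_a, det_a = decompose(img_a)
    low_b, det_b = decompose(img_b)
    low_f = (low_a + low_b) / 2
    det_f = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
    return np.kron(low_f, np.ones((2, 2))) + det_f

a = np.full((4, 4), 0.0); a[:, 2:] = 8.0   # toy "modality A"
b = np.full((4, 4), 8.0); b[:, 2:] = 0.0   # toy "modality B"
fused = fuse(a, b)
print(fused)  # every pixel is 4.0: equal-weight base, zero detail here
```

Real MST fusion methods differ only in the transform (NSST, wavelet, contourlet) and the per-sub-band rules; the three-stage skeleton is the same.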
Article
Full-text available
The precise diagnosis of Alzheimer's disease is critical for patient treatment, especially at the early stage, because awareness of the severity and progression risks lets patients take preventative actions before irreversible brain damage occurs. It is possible to gain a holistic view of Alzheimer's disease staging by combining multiple data modalities, known as image fusion. In this paper, the study proposes the early detection of Alzheimer's disease using different modalities of Alzheimer's disease brain images. First, the preprocessing was performed on the data. Then, the data augmentation techniques are used to handle overfitting. Also, the skull is removed to lead to good classification. In the second phase, two fusion stages are used: pixel level (early fusion) and feature level (late fusion). We fused magnetic resonance imaging and positron emission tomography images using early fusion (Laplacian Re-Decomposition) and late fusion (Canonical Correlation Analysis). The proposed system used magnetic resonance imaging and positron emission tomography to take advantage of each. The magnetic resonance imaging system's primary benefit is providing images with excellent spatial resolution and structural information for specific organs. Positron emission tomography images can provide functional information and the metabolisms of particular tissues. This characteristic helps clinicians detect diseases and tumor progression at an early stage. Third, the feature extraction of fused images is extracted using a convolutional neural network. In the case of late fusion, the features are extracted first and then fused. Finally, the proposed system performs XGB to classify Alzheimer's disease. The system's performance was evaluated using accuracy, specificity, and sensitivity. All medical data were retrieved in the 2D format of 256 × 256 pixels. The classifiers were optimized to achieve the final results: for the decision tree, the maximum depth of a tree was 2.
The best number of trees for the random forest was 60; for the support vector machine, the maximum depth was 4, and the kernel gamma was 0.01. The system achieved an accuracy of 98.06%, specificity of 94.32%, and sensitivity of 97.02% in the case of early fusion. Also, if the system achieved late fusion, accuracy was 99.22%, specificity was 96.54%, and sensitivity was 99.54%.
... Medical image processing techniques like multi-modality image fusion are increasing in popularity. Combining two or more medical image characteristics (including computed tomography, ultrasound, MRI, positron emission tomography, etc.) offers new, richer-content medical images [7,35]. Many studies focusing on image fusion, such as [8,13,26,33], exist. ...
Article
Full-text available
The Internet of Medical Things (IoMT) has added a new layer for development and smart infrastructure growth in the medical field. Besides, the medical data on IoMT systems are constantly expanding due to the rising peripherals in the health system. This paper introduces a new fusion technique in the shearlet domain to improve existing methods, which may provide medical image fusion in the IoMT system. First, the low- and high-frequency NSST coefficients of both input images are obtained. Over the low-frequency component, a new multi local extrema (MLE) based decomposition is performed to get more detailed features (coarse and detail layers). Over these MLE features, a saliency-based weighted average is performed using a co-occurrence filter to get the enhanced low-frequency NSST coefficients. These enhanced low-frequency NSST coefficients of both input images are fused using the proposed weighted function. For the high-frequency NSST coefficients, local type-2 fuzzy entropy-based fusion is performed. Finally, inverse NSST is performed to get the final fused image. The experimental results are evaluated and compared with existing methods both visually and by performance metrics. After critical analysis, it was found that the proposed method gives better outcomes compared to similar and recent existing schemes.
... MR-T1 images after a gadolinium-diethylenetriamine pentaacetic acid (Gd-DTPA) enhanced scan become the corresponding MR-GAD images, and the MR-PD images can distinguish gray matter and white matter more obviously. In contrast, positron emission tomography (PET) and single-photon emission computed tomography (SPECT) describe functional information such as blood flow and significant metabolic changes [10] despite their poor spatial resolution. So it is necessary to fuse two different images of the same body part to create new images that contain more informative and complementary information, which improves diagnostic accuracy and saves internal storage space. ...
Article
Full-text available
Most existing deep learning‐based multi‐modal medical image fusion (MMIF) methods utilize single‐branch feature extraction strategies to achieve good fusion performance. However, for MMIF tasks, it is thought that this structure cuts off the internal connections between source images, resulting in information redundancy and degradation of fusion performance. To this end, this paper proposes a novel unsupervised network, termed CEFusion. Different from existing architecture, a cross‐encoder is designed by exploiting the complementary properties between the original image to refine source features through feature interaction and reuse. Furthermore, to force the network to learn complementary information between source images and generate the fused image with high contrast and rich textures, a hybrid loss is proposed consisting of weighted fidelity and gradient losses. Specifically, the weighted fidelity loss can not only force the fusion results to approximate the source images but also effectively preserve the luminance information of the source image through weight estimation, while the gradient loss preserves the texture information of the source image. Experimental results demonstrate the superiority of the method over the state‐of‐the‐art in terms of subjective visual effect and quantitative metrics in various datasets.
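The hybrid loss described above (weighted fidelity plus gradient) can be sketched numerically. This is a hedged illustration, not CEFusion's actual implementation: the function names, the weight values, the forward-difference gradient operator, and the max-gradient target are all assumptions standing in for the paper's learned components.

```python
import numpy as np

def grad(img):
    """Forward-difference image gradients (a simple stand-in for Sobel)."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return gx, gy

def hybrid_loss(fused, src_a, src_b, w_a=0.5, w_b=0.5, lam=1.0):
    """Weighted fidelity (intensity) term plus a gradient (texture) term."""
    fidelity = w_a * np.mean((fused - src_a) ** 2) + w_b * np.mean((fused - src_b) ** 2)
    fgx, fgy = grad(fused)
    agx, agy = grad(src_a)
    bgx, bgy = grad(src_b)
    # target texture: the element-wise stronger of the two source gradients
    tx = np.where(np.abs(agx) >= np.abs(bgx), agx, bgx)
    ty = np.where(np.abs(agy) >= np.abs(bgy), agy, bgy)
    gradient = np.mean((fgx - tx) ** 2) + np.mean((fgy - ty) ** 2)
    return fidelity + lam * gradient

x = np.ones((4, 4))
print(hybrid_loss(x, x, x))  # 0.0 when the fusion matches both sources
```

The fidelity term pulls the fusion toward the sources' luminance while the gradient term pulls it toward the sharper texture, which is the trade-off such hybrid losses are built to balance.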
... Methods based on spatial domains, such as Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA), directly process pixels on spatial coordinates to obtain fused images. These methods can be easily implemented, but useful information is often lost [6]. ...
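As an illustration of the spatial-domain approach, a common PCA fusion rule weights each source image by the leading eigenvector of their 2x2 covariance matrix. This is a generic sketch of that idea, not code from the cited work.

```python
import numpy as np

def pca_fuse(img_a, img_b):
    """Weight each source by the leading eigenvector of the 2x2 covariance
    of the flattened images, then normalise the weights to sum to one."""
    data = np.stack([img_a.ravel(), img_b.ravel()])
    vals, vecs = np.linalg.eigh(np.cov(data))  # eigh: ascending eigenvalues
    pc = np.abs(vecs[:, -1])                   # leading principal component
    w = pc / pc.sum()
    return w[0] * img_a + w[1] * img_b

rng = np.random.default_rng(0)
a, b = rng.random((8, 8)), rng.random((8, 8))
f = pca_fuse(a, b)
print(f.shape)  # (8, 8); each pixel is a convex combination of a and b
```

Because the output is a pixel-wise convex combination, fine detail carried by only one modality is attenuated rather than preserved, which is exactly the information loss the context snippet points out.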
Article
Full-text available
Multimodal medical image fusion technology can assist doctors in diagnosing diseases accurately and efficiently. However, multi-scale decomposition based image fusion methods exhibit low contrast and energy loss, and sparse representation based fusion methods have weak expression ability caused by the single dictionary and spatial inconsistency. To solve these problems, this paper proposes a novel multimodal medical image fusion method based on the nonsubsampled shearlet transform (NSST) and convolutional sparse representation (CSR). First, the registered source images are decomposed into multi-scale and multi-direction sub-images, and then these sub-images are trained respectively to obtain different sub-dictionaries by the alternating direction product method. Second, different scale sub-images are encoded by convolutional sparse representation to get the sparse coefficients of the low frequency and the high frequency, respectively. Third, the coefficients of the low frequency are fused by the regional energy and the average \(l_1\) norm, while the coefficients of the high frequency are fused by the improved spatial frequency and the average \(l_1\) norm. Finally, the fused image is reconstructed by inverse NSST. Experimental results on series of multimodal brain images including CT, MR-T2, PET and SPECT demonstrate that the proposed method has state-of-the-art performance compared with other current popular medical fusion methods in both objective and subjective assessment.
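The regional-energy rule mentioned for the low-frequency coefficients can be illustrated as follows. This is a simplified sketch (3x3 edge-padded neighbourhood, coefficient-wise selection), not the authors' exact implementation, which combines regional energy with an average \(l_1\)-norm term.

```python
import numpy as np

def regional_energy(coeffs, r=1):
    """Sum of squared coefficients over a (2r+1)x(2r+1) neighbourhood
    (edge-padded), i.e. the local 'regional energy' of a sub-band."""
    sq = np.pad(coeffs ** 2, r, mode="edge")
    h, w = coeffs.shape
    out = np.zeros((h, w))
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += sq[dy:dy + h, dx:dx + w]
    return out

def fuse_by_energy(c_a, c_b):
    """Per coefficient, keep the source whose neighbourhood has more energy."""
    return np.where(regional_energy(c_a) >= regional_energy(c_b), c_a, c_b)

c_a = np.zeros((4, 4)); c_a[0, 0] = 3.0  # activity only top-left
c_b = np.zeros((4, 4)); c_b[3, 3] = 5.0  # activity only bottom-right
f = fuse_by_energy(c_a, c_b)
print(f[0, 0], f[3, 3])  # 3.0 5.0: each region keeps its active source
```

Selecting by neighbourhood energy rather than by a single coefficient makes the rule robust to isolated noisy coefficients, which is why regional-energy variants recur across the NSST fusion papers cited here.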
... One example of these extensions is the NSST, which improves the preservation of multi-dimensional signal features [40]. Based on this, Padma Ganasala and Vinod Kumar developed a framework for multi-modal medical image fusion in 2014 [25]. This framework uses the NSST to obtain the high and low frequency components of an image, to combine them separately, and then reconstruct the fused image using the inverse NSST. ...
Article
Full-text available
Epilepsy is a common neurological disease characterized by spontaneous recurrent seizures. Resection of the epileptogenic tissue may be needed in approximately 25% of all cases due to ineffective treatment with anti-epileptic drugs. The surgical intervention depends on the correct detection of epileptogenic zones. The detection relies on invasive diagnostic techniques such as Stereotactic Electroencephalography (SEEG), which uses multi-modal fusion to aid in localizing electrodes, using pre-surgical magnetic resonance and intra-surgical computer tomography as the input images. Moreover, it is essential to know how to measure the performance of fusion methods in the presence of external objects, such as electrodes. In this paper, a literature review is presented, applying the methodology proposed by Kitchenham to determine the main techniques of multi-modal brain image fusion, the most relevant performance metrics, and the main fusion tools. The search was conducted using the databases and search engines of Scopus, IEEE, PubMed, Springer, and Google Scholar, resulting in 15 primary source articles. The literature review found that rigid registration was the most used technique when electrode localization in SEEG is required, being the proposed method in nine of the found articles. However, there is a lack of standard validation metrics, which makes performance measurement difficult when external objects are present, caused primarily by the absence of a gold-standard dataset for comparison.
... Magnetic resonance imaging (MRI) provides soft-tissue information, as shown in Fig. 1. Functional systems include positron emission tomography (PET) and single-photon emission computed tomography (SPECT). PET characterizes tumor function and metabolism, and SPECT reflects tissue and organ blood flow [3]. Considering the limitations of mono-modal images, image fusion aims to merge the typical and complementary information in multi-modal images into a single output for better human visual perception and automatic detection [4,5]. ...
Article
Existing image fusion methods always use the same representations for different modal medical images. Otherwise, they solve the fusion problem by subjectively defining characteristics to be preserved. However, it leads to the distortion of unique information and restricts the fusion performance. To address the limitations, this paper proposes an unsupervised enhanced medical image fusion network. We perform both surface-level and deep-level constraints for enhanced information preservation. The surface-level constraint is based on the saliency and abundance measurement to preserve the subjectively defined and intuitive characteristics. In the deep-level constraint, the unique information is objectively defined based on the unique channels of a pre-trained encoder. Moreover, in our method, the chrominance information of fusion results is also enhanced. It is because we use the high-quality details in structural images (e.g., MRI) to alleviate the mosaic in functional images (e.g., PET, SPECT). Both qualitative and quantitative experiments demonstrate the superiority of our method over the state-of-the-art fusion methods.
... The main flaw in this method is that the rest of the unutilized components are completely lost or ignored. Fusion rules are formulated on the basis of the sum of variations in the square and two different characteristics of the Ganasala method for combining the low-frequency and high-frequency sub-bands, respectively [5]. However, it suffers from the computational complexity of the fusion rule. ...
... NSST is a modified form of the Shearlet, Contourlet, and Wavelet transforms in numerous dimensions and directions [5]. Various techniques have been used as fusion rules to obtain optimal fusion output. ...
Article
Full-text available
Of late, medical image fusion has emerged as an inspiring approach for merging different modalities of medical images. The fused image helps medicos to diagnose various critical diseases quickly and precisely. This paper proposes two fusion algorithms named Multimodal Adaptive Medical Image Fusion (MAMIF) and Multimodal without Denoised Medical Image Fusion (MDMIF); both methods use the Non-Subsampled Shearlet Transform (NSST) and a B-spline registration model. However, as MAMIF uses a denoising method, it provides better visually enhanced images. The presented MAMIF algorithm fuses the images without losing any vital information for the given set of real-time and public datasets. The entire fusion framework uses features extracted from NSST-decomposed images by using Human Visual System (HVS) based Low Frequency (LF) sub-band fusion and Log-Gabor energy-based High Frequency (HF) sub-band fusion. The proposed framework is agnostic of source image size (pairs should be of the same size). The experiments were carried out leveraging 14 sets of image datasets that include grayscale and color images. The performance of the proposed MAMIF is evaluated based on the dataset collected from HCG hospital, Bangalore, and further validated by radiologists from the same hospital. Comparing the simulated results, the proposed adaptive model MAMIF produced superior visually fused images compared to other approaches such as MDMIF and MMDWT.
... Eight sets of multimodality medical images from the whole-brain atlas medical image database (http://www.med.harvard.edu/aanlib/) are demonstrated in Fig. 3. To establish the performance of the method, a comparison is performed with previous image fusion approaches such as NSST (M1) [32], GMSF + PCNN (M2) [19], EWT + LEM (M3) [14], MFDF + NSST (M4) [20] and SSOWCO (M5) [33] against the proposed approach (M6). ...
Article
Full-text available
The fast-developing image fusion technique has become a necessary one in every field, and analyzing the efficiency of various fusion technologies analytically and objectively is an essentially required process. Further, image fusion has become an inseparable technique in the medical field, since the role of medical images in diagnosing and identifying diseases is a crucial task for radiologists and doctors at an early stage. Different modalities used in clinical applications offer unique information, unlike any other in any form. To diagnose diseases with high accuracy, clinicians require data from more than one modality. Multimodal image fusion has received wide popularity in the medical field since it enhances the accuracy of clinical diagnosis by fusing the complementary information present in more than one image. Obtaining optimal values along with a reduction in cost and time in multimodal medical image fusion is critical. Here, in this paper, a new multi-modality algorithm for medical image fusion based on the Adolescent Identity Search Algorithm (AISA) for the Non-Subsampled Shearlet Transform is proposed to obtain image optimization and to reduce computational cost and time. The NSST is a multi-directional and multi-dimensional example of a multiscale wavelet transform. The input source image is decomposed into NSST sub-bands at the initial stage. The boundary measure is modulated by the AISA, which fuses the sub-bands in the NSST, thereby reducing the complexity and increasing the computational speed.
The proposed method is tested on different real-time disease datasets such as glioma, mild Alzheimer's, and encephalopathy with hypertension that include similar pairs of images, and analyzed with different evaluation measures such as entropy, standard deviation, structural similarity index measure, mutual information, average gradient, the Xydeas and Petrovic metric, peak signal-to-noise ratio, and processing time. The experimental findings and discussions indicate that the proposed algorithm outperforms other approaches and offers high-quality fused images for an accurate diagnosis.