Figure 1: Multi-modal retinal images.


Source publication
Conference Paper
Multi-modal image registration is the primary step in fusing complementary information contained in different imaging modalities for diagnostic purposes. We focus on two specific retinal imaging modalities, namely Color Fundus Image (CFI) and Fluorescein Fundus Angiogram (FFA). In this paper we investigate a projection-based method using Radon transfo...

Context in source publication

Context 1
... is the fundamental step in fusing complementary information contained in different modalities. In this paper we focus on two specific retinal optical imaging modalities, namely Color Fundus Image (CFI) and Fluorescein Fundus Angiography (FFA), as shown in Figure 1. CFI captures optical range information and hence reveals the overall condition of the fundus (retina), while FFA captures blood flow information and hence reveals only structures involved in blood flow, such as vessels. ...
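The source abstract is truncated above, but the projection idea lends itself to a short illustration. The Python sketch below estimates the rotation between two binary vessel maps by circularly correlating an angular signature derived from their Radon sinograms; the function names and the variance-based signature are illustrative assumptions, not necessarily the paper's formulation.

import numpy as np
from skimage.transform import radon

def angular_signature(vessel_map, angles):
    # Variance of each Radon projection; rotating the image by theta
    # circularly shifts this 1-D signal along the angle axis by theta.
    sinogram = radon(vessel_map.astype(float), theta=angles, circle=False)
    return sinogram.var(axis=0)  # one value per projection angle

def estimate_rotation(fixed, moving):
    angles = np.arange(180.0)  # 1-degree sampling (an assumed resolution)
    sig_f = angular_signature(fixed, angles)
    sig_m = angular_signature(moving, angles)
    # Circular cross-correlation over angle; the argmax is the rotation.
    corr = [np.dot(sig_f, np.roll(sig_m, k)) for k in range(len(angles))]
    return float(angles[int(np.argmax(corr))])  # degrees, modulo 180

Translation along any single direction can be recovered analogously by 1-D correlation of the corresponding projections, which is what makes projection-based methods attractive: a 2-D search decomposes into cheap 1-D searches.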

Citations

... The approach provides accurate results, but the edge maps are not consistent across modalities. [3] and [2] present feature descriptors that were inspired by SIFT but adapted for multimodal registration. In [3], Chen et al. introduce the PIIFD descriptor. ...
... Ghassabi et al. use these descriptors on UR-SIFT features to improve the results in [10]. In [2], Bathina et al. use a Hessian filter to extract curvature, extract features from the junctions of the curvature map, and base their new descriptor on the Radon transform. A problem all of these methods share is that retinal images contain many repeated local patterns, which tends to disrupt the feature matching step. ...
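As a rough sketch of the curvature-map idea attributed to Bathina et al., the snippet below uses scikit-image's Frangi filter (a Hessian-based vesselness measure) as a stand-in for the paper's Hessian filter, and takes junction candidates where the skeletonized vessel map has three or more neighbours; every name and threshold here is illustrative.

import numpy as np
from scipy.ndimage import convolve
from skimage.filters import frangi, threshold_otsu
from skimage.morphology import skeletonize

def junction_points(gray):
    vesselness = frangi(gray)  # Hessian-based tubularity response
    skeleton = skeletonize(vesselness > threshold_otsu(vesselness))
    # A skeleton pixel with 3 or more neighbours is a junction candidate;
    # the 3x3 sum includes the centre pixel itself, hence the >= 4.
    neighbours = convolve(skeleton.astype(int), np.ones((3, 3)), mode='constant')
    return np.argwhere(skeleton & (neighbours >= 4))  # (row, col) coordinates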
Conference Paper
We propose a framework to perform multimodal registration of multiple images. In retinal imaging, this alignment enables the physician to correlate the features across modalities, which can help formulate a diagnosis. The images appear very different and there are few reliable modality-invariant features. We base our registration on the salient line structures extracted with a tensor-voting approach and aligned to minimize the Chamfer distance. For every pair of images, we match the line junctions and extremities to get a candidate transformation that is further refined with an Iterative Closest Point approach. We use a global chained registration framework to recover from failed registration and we account for non-planarities with a Thin-Plate Splines deformation. Our approach can handle large variations across modalities and is evaluated on real-world retinal images with 5 modalities per eye. We achieve an average error of 52 μm on our dataset.
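The Chamfer-distance alignment mentioned in this abstract is easy to illustrate with a distance transform. The brute-force rotation search below is our simplification; the authors instead start from matched line junctions and extremities and refine the candidate transformation with ICP.

import numpy as np
from scipy.ndimage import distance_transform_edt, rotate

def chamfer_cost(fixed_lines, moving_lines):
    # Mean distance from each moving line pixel to the nearest fixed
    # line pixel; both arguments are boolean line masks.
    dist_to_fixed = distance_transform_edt(~fixed_lines)
    return dist_to_fixed[moving_lines].mean()

def best_rotation(fixed_lines, moving_lines, angles=range(-15, 16)):
    # Exhaustive search over small rotations (an assumed search range).
    costs = {a: chamfer_cost(fixed_lines,
                             rotate(moving_lines.astype(np.uint8), a,
                                    reshape=False, order=0) > 0)
             for a in angles}
    return min(costs, key=costs.get)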
... In the feature matching process, dissimilar PIIFD descriptors may be produced for corresponding extracted feature points due to the appearance of pathology in only one of the modalities. Hence, the presence of lesions in the neighborhood of feature points in one of the modalities will adversely affect the results of the PIIFD descriptor, which is based on gradient orientation information [49]. It is also possible to obtain similar descriptors for non-corresponding extracted points in retinal images due to repetitive patterns or the low content of poor-quality images. ...
Article
Existing algorithms based on the scale invariant feature transform (SIFT) and Harris corners, such as edge-driven dual-bootstrap iterative closest point and Harris-partial intensity invariant feature descriptor (PIIFD) respectively, have been shown to be robust in registering multimodal retinal images. However, they fail to register color retinal images with other modalities in the presence of large content or scale changes. Moreover, these approaches need preprocessing operations such as image resizing to do well. This restricts the application of image registration for further analysis such as change detection and image fusion. Motivated by the need for efficient registration of multimodal retinal image pairs, this paper introduces a novel integrated approach which exploits features of the uniform robust scale invariant feature transform (UR-SIFT) and PIIFD. The approach is robust against the low content contrast of color images and large content, appearance, and scale changes between color and other retinal image modalities such as fluorescein angiography. Because the standard SIFT detector is inefficient for multimodal images, the UR-SIFT algorithm extracts highly stable and distinctive features across the full distribution of location and scale in the images, so the feature points are sufficient in number and repeatable. Moreover, the PIIFD descriptor is invariant to contrast reversal, which makes it suitable for robust multimodal image registration. After UR-SIFT feature extraction and PIIFD descriptor generation, an initial cross-matching process is performed, followed by a mismatch elimination algorithm. Our dataset consists of 120 pairs of multimodal retinal images. Experimental results show that UR-SIFT-PIIFD outperforms Harris-PIIFD and similar algorithms in terms of efficiency and positional accuracy.
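Neither UR-SIFT nor PIIFD ships with OpenCV, but the pipeline described here (feature extraction, descriptor generation, initial cross-matching, mismatch elimination) can be sketched with plain SIFT standing in for both; treat this as an outline of the stages, not the authors' implementation.

import cv2
import numpy as np

def register_pair(img_a, img_b):
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    # Cross-checked brute-force matching plays the role of the paper's
    # initial cross-matching step (mutual nearest neighbours only).
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC homography fitting plays the role of mismatch elimination:
    # matches inconsistent with one global transform are discarded.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H, int(inlier_mask.sum())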
Chapter
Content-based image retrieval (CBIR) is an essential part of computer vision research, especially in medical expert systems. A discriminative image descriptor with as few tunable parameters as possible is desirable in CBIR systems. In this paper, we introduce a new, simple descriptor based on the histogram of local Radon projections. We also propose a very fast convolution-based local Radon estimator to overcome the slowness of computing Radon projections. We performed our experiments on pathology images (KimiaPath24) and lung CT patches to test the proposed solution for medical image processing, and achieved superior results compared with other histogram-based descriptors such as LBP and HOG, as well as some pre-trained CNNs.
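A naive version of a histogram-of-local-Radon-projections descriptor might look like the sketch below. The patch size, angle count, and bin count are assumptions, and the plain skimage radon call is used only for clarity; the chapter's contribution is precisely a much faster convolution-based estimator of these local projections.

import numpy as np
from skimage.transform import radon
from skimage.util import view_as_windows

def local_radon_histogram(image, patch=16, n_angles=8, bins=16):
    angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    windows = view_as_windows(image.astype(float), (patch, patch), step=patch)
    projections = []
    for row in windows:
        for w in row:
            # Radon projections of one local patch at n_angles orientations.
            projections.append(radon(w, theta=angles, circle=False).ravel())
    # Pool all local projection values into one normalized histogram.
    hist, _ = np.histogram(np.concatenate(projections), bins=bins)
    return hist / max(hist.sum(), 1)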
Conference Paper
The effect of rain on the measurements acquired by this newly launched satellite has yet to be assessed. This project employs coincident and collocated rain measurements from NEXRAD and SSM/I to examine changes in the L-band radiometric brightness temperatures and in the sea surface radar cross section, for both polarizations. Also included is consideration of the local wind speed and direction, the sea surface temperature, and the nominal sea surface salinity.
Conference Paper
This work proposes using the craquelure pattern of a painting as a fingerprint to verify its authenticity against prior records. Craquelure patterns are extracted and matched from photographs in a manner robust to illumination, scale, rotation, and perspective distortion. A new crack extraction technique is introduced which uses multi-scale, multi-orientation morphological processing and shape analysis in each orientation sub-band. Feature extraction, based on a Radon-transform local descriptor at the crack junctions, and matching are described. Matching accuracy was 98.69% on our database of 151 genuine, unique craquelure images with simulated multiple copies of each pattern.
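The multi-orientation morphological step can be approximated with a black-hat transform over rotated line kernels, fused by a pixelwise maximum; the kernel length and orientation count below are our assumptions, and the paper's multi-scale loop, shape analysis, and Radon descriptor are omitted.

import cv2
import numpy as np

def crack_map(gray, length=15, n_orientations=8):
    responses = []
    for angle in np.linspace(0.0, 180.0, n_orientations, endpoint=False):
        # A 1-pixel-thick line kernel rotated to the target orientation;
        # black-hat with it responds to thin dark structures such as cracks.
        kernel = np.zeros((length, length), np.uint8)
        kernel[length // 2, :] = 1
        M = cv2.getRotationMatrix2D((length / 2.0, length / 2.0), angle, 1.0)
        kernel = cv2.warpAffine(kernel, M, (length, length),
                                flags=cv2.INTER_NEAREST)
        responses.append(cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel))
    return np.max(responses, axis=0)  # strongest response over orientations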
Conference Paper
Often in forensic scenarios, the need arises to map a partial or poor-quality fingerprint to the identity of an individual. General image matching methods from computer vision perform poorly, as they are sensitive to local distortions such as broken ridge patterns and incomplete information. To address this issue, we propose using a weak descriptor to capture local structures at a higher level of abstraction. The goal is to mine a large set of initial correspondences through weak description and then rely on a robust estimator to prune false matches. By coupling the weak local descriptor with a robust estimator, we minimize the effect of broken ridge patterns and also obtain a dense set of matches for a given pair. We evaluate the performance of the proposed method against SIFT as per the Fingerprint Verification Competition guidelines, and report superior rotation, scale, noise, and overlap handling capabilities.
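The weak-descriptor-plus-robust-estimator coupling can be sketched with scikit-image, using ORB as a deliberately generic stand-in for the paper's weak descriptor and RANSAC over an affine model as the robust estimator; all parameters are illustrative.

import numpy as np
from skimage.feature import ORB, match_descriptors
from skimage.measure import ransac
from skimage.transform import AffineTransform

def dense_verified_matches(img_a, img_b):
    orb = ORB(n_keypoints=2000)
    orb.detect_and_extract(img_a)
    kp_a, des_a = orb.keypoints, orb.descriptors
    orb.detect_and_extract(img_b)
    kp_b, des_b = orb.keypoints, orb.descriptors
    # A permissive matcher mines many candidate correspondences ...
    matches = match_descriptors(des_a, des_b, cross_check=True)
    src, dst = kp_a[matches[:, 0]], kp_b[matches[:, 1]]
    # ... and the robust estimator keeps only the geometrically consistent
    # ones, suppressing false matches caused by broken ridge patterns.
    model, inliers = ransac((src, dst), AffineTransform,
                            min_samples=3, residual_threshold=2)
    return src[inliers], dst[inliers]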