
Content Based Image Retrieval - Science topic

Explore the latest questions and answers in Content Based Image Retrieval, and find Content Based Image Retrieval experts.
Questions related to Content Based Image Retrieval
  • asked a question related to Content Based Image Retrieval
Question
6 answers
Dear researchers, I am working on content-based video retrieval and need popular datasets for evaluating my method. I have downloaded the UCF101 and HMDB datasets, but they contain very short clips collected specifically for action recognition.
Relevant answer
  • asked a question related to Content Based Image Retrieval
Question
6 answers
Hello,
I am building a CBIR system with the Corel database (100 classes of 100 pictures). I have implemented some classical (non deep learning) descriptors: SIFT, SURF, HOG, color histogram, HSV histogram, LBP histogram, ORB, Hu moments of the image, and GLCM descriptors (contrast, homogeneity, ...). I also implemented FlannMatcher and BFMatcher for SIFT, SURF and ORB; the `compareHist` function of OpenCV with its four distances for all the histograms; and the NORM_L1, NORM_L2 and NORM_INF norms for the vectors (Hu & GLCM).
However, I get really poor results and a bad R/P curve. More precisely, it seems to depend heavily on the query. For example, for the bear, the best results (top 50) are reached with GLCM, where I get this:
[screenshot: results with GLCM]
On the other hand, when the query is a playing card (which is rather simple), that gives rather good results, at least with some algorithms such as Sift.
[screenshot: results with SIFT]
I was wondering whether it is normal to get such poor and variable results. Actually, I just used OpenCV functions, so I don't see where I could have gone wrong...
Could it be relevant to compute a weighted sum of some descriptors, i.e. normalizing the distances, weighting them, and sorting by the global sum?
Is there another simple way to improve the results?
Thank you in advance for your help
Relevant answer
Answer
Pierre De Handschutter FLANN or BF matchers with keypoint descriptors like SIFT, SURF or ORB work reasonably only when the query image, or part of it, appears in the matching images (possibly at a different scale, orientation or illumination, and with other variations). That is not the case in the Corel dataset, hence those methods will not work as expected. If you really need to use them, try Bag-of-Features models of SIFT, SURF and ORB with a classifier such as an SVM. You may refer to the following code.
Color histograms (in any color space, such as RGB, HSV or Lab) have their limits: they obviously hold no information about the shapes of the objects appearing in the image. So the poor results of color-histogram-based CBIR are no surprise.
LBP is better at discriminating objects in a particular domain (such as faces or vehicles) and is vulnerable in general image retrieval applications, where images often have highly cluttered backgrounds.
GLCM alone has the same problem: it only characterizes texture details, which may not be the discriminating feature for a pair of images. Hu moments, on the other hand, should not be used for natural image classification; they perform reasonably only on a single object with a well-defined shape.
The best non deep learning approach I have found so far is the Fused Color Gabor Pyramidal Histogram of Oriented Gradients (FC-GPHOG) descriptor, classified with nearest neighbor on the Enhanced Fisher Model (EFM). You can find my implementation of FC-GPHOG from:
The code relates to the work described in the paper:
Sinha, Atreyee, Sugata Banerji, and Chengjun Liu. "New color GPHOG descriptors for object and scene image classification." Machine vision and applications 25, no. 2 (2014): 361-375.
The EFM is simply LDA applied to a truncated PCA projection of the FC-GPHOG descriptor.
  • asked a question related to Content Based Image Retrieval
Question
3 answers
Hello,
I am looking at the different machine learning and deep learning techniques currently used in CBIR.
So far, I have mainly found papers studying which features are most useful in a CNN architecture (plus post-processing, i.e. binarization, PCA, ...) on the one hand, and on the other hand works that use machine learning for relevance feedback (i.e. SVM and active learning, with bootstrapping) or on classical descriptors (i.e. SURF).
However, I did not find any paper comparing "traditional" machine learning techniques with CNNs for this purpose...
Does anyone know of something interesting? I've just seen a recent paper (Content-Based Image Retrieval Using Convolutional Neural Networks, Ouhda et al.), but I would also like a comparison between the two approaches mentioned above on a common dataset.
Thanks a lot for your help
Relevant answer
Answer
Hello Ali,
Thank you for your answer. I've already read this paper, actually, and it was indeed an interesting comparison between several techniques, including CNNs, but I was also looking for a comparison between DL and traditional ML such as SVM.
However, you're right that it is a good starting point. I also found this one: A Comparison of Active Classification Methods for Content-Based Image Retrieval, by Gosselin, which compares several ML techniques (though no DL); it seems interesting, but I haven't read it in detail yet.
Regards
  • asked a question related to Content Based Image Retrieval
Question
3 answers
I am looking for the best way to implement recommendations without user preferences, using content-based recommendation as the content-similarity measurement. Say we have all the articles a user can read; for each article we need to provide a list of the top 5 most similar articles. Please help with any simple guide, implementation example, or case study; any help is appreciated.
Relevant answer
Answer
  • You can use the contents of an article to extract topics/keywords or frequent words.
  • Use a TF-IDF vectorizer to create a model (remember to remove stopwords).
  • Then use cosine similarity to get similar articles.
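The three steps above can be sketched in plain Python; in practice one would use scikit-learn's TfidfVectorizer, but this minimal version (function names and the tiny stopword list are illustrative) shows the mechanics:

```python
import math
from collections import Counter

STOPWORDS = frozenset({"the", "a", "an", "of", "and", "to"})  # toy list

def tfidf_vectors(docs):
    """Build sparse TF-IDF vectors (dicts) for a list of documents."""
    tokenized = [[w for w in d.lower().split() if w not in STOPWORDS]
                 for d in docs]
    df = Counter()                      # document frequency per word
    for toks in tokenized:
        df.update(set(toks))
    n = len(docs)
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({w: (c / max(len(toks), 1)) * math.log(n / df[w])
                        for w, c in tf.items()})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    dot = sum(u[w] * v[w] for w in u if w in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def top_similar(docs, query_idx, k=5):
    """Indices of the k articles most similar to docs[query_idx]."""
    vecs = tfidf_vectors(docs)
    scores = [(cosine(vecs[query_idx], v), i)
              for i, v in enumerate(vecs) if i != query_idx]
    return [i for _, i in sorted(scores, reverse=True)[:k]]
```

With real data, swap in a proper tokenizer and stopword list; the ranking logic stays the same.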
  • asked a question related to Content Based Image Retrieval
Question
8 answers
Hi,
I would like suggestions about open problems in the field of content-based image retrieval for my thesis work.
Relevant answer
Answer
This is an important question in computer vision.
Cutting edge CBIR research is being done in the following areas:
Colour images:
Chawki, Youness(MRC-UMLIST-C); El Asnaoui, Khalid(MRC-UMLIST-C); Ouanan, Mohammed(MRC-UMLIST-C); Aksasse, Brahim(MRC-UMLIST-C)
Content frequency and shape features based on CBIR: application to colour images. Int. J. Dyn. Syst. Differ. Equ. 8 (2018), no. 1-2, 123–135
According to the authors, the average precision reaches 74.49%, which is surprisingly low.
Distance:
Herwindiati, Dyah E.; Isa, Sani M.; Sagara, Rahmat
The new notion distance of content based image retrieval (CBIR). (English, Indonesian summary) J. Indones. Math. Soc. 16 (2010), no. 1, 51–67.
This paper introduces a notion of distance in the CBIR process derived from a measure of multivariate dispersion called vector variance, via the minimum vector variance (MVV) estimator.
  • asked a question related to Content Based Image Retrieval
Question
6 answers
I am doing research in content-based image classification and retrieval using a visual bag of features. My doubt is: how do I denoise the images in CBIR?
Relevant answer
Answer
Thanks for all your replies. Please suggest a real-world problem and which images I should use.
  • asked a question related to Content Based Image Retrieval
Question
4 answers
I am working on a CBIR project, but I have to use a new distance metric, rather than one already used in CBIR, to find the images in the database similar to a query image.
Relevant answer
Answer
Hello, Vijaylakshmi Sajwan,
Congratulations on having successfully formulated a distance model in CBIR. Could you please share with us what kind of imagery it is for, and how you formulate the distance? Thank you.
  • asked a question related to Content Based Image Retrieval
Question
12 answers
Until 2016, it was possible to develop apps that could mine Instagram information for analysis; a master's student of mine built her thesis around information processed through a custom-built tool. Then Instagram closed access; apparently only commercial use is allowed, and through a difficult process.
I'd like to know if there have been any changes to that situation, or to Instagram policy, and whether any fellow researcher has been able to access Instagram data for academic research.
Relevant answer
Answer
Here is their policy: https://www.instagram.com/developer/ (I don't see any info saying you need a commercial account to access the API):
Before you start using the API Platform, we have a few guidelines that we'd like to tell you about. Please make sure to read the full Platform Policy. Here's what you'll read about:
  1. Instagram users own their media. It's your responsibility to make sure that you respect that right.
  2. You cannot use "insta", "gram" or "Instagram" in your company or product name.
  3. You cannot replicate the core user experience of the Instagram apps or web site. For example, do not build a media viewer.
  4. You cannot use the API Platform to crawl or store users' media without their express consent.
  5. Do not abuse the API Platform, automate requests, or encourage unauthentic behavior. This will get your access turned off.
  • asked a question related to Content Based Image Retrieval
Question
2 answers
Dear all, I want to compute the Averaged Normalized Modified Retrieval Rank (ANMRR), which is used for measuring the retrieval accuracy of a CBIR system. The ANMRR is defined in the two attached papers, but I did not understand their explanation exactly. Can anyone help me solve this?
Relevant answer
Answer
Check the alternative description in the appendix of the following paper:
Krishnamachari S., Yamada A., Abdel-Mottaleb M., Kasutani E. (2000) Multimedia Content Filtering, Browsing, and Matching Using MPEG-7 Compact Color Descriptors. In: Laurini R. (eds) Advances in Visual Information Systems. VISUAL 2000. Lecture Notes in Computer Science, vol 1929. Springer, Berlin, Heidelberg
(similar version available at)
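For what it's worth, here is a small sketch of the ANMRR computation as I read the MPEG-7 definition (the 1.25·K penalty for misses and the normalisation terms follow that definition; treat this as a reading aid, not a reference implementation):

```python
def nmrr(retrieved, ground_truth, gtm):
    """NMRR for one query.

    retrieved    -- ranked list of returned item ids
    ground_truth -- ids that are relevant for this query (size NG)
    gtm          -- max ground-truth size over all queries
    """
    ng = len(ground_truth)
    k = min(4 * ng, 2 * gtm)                     # cut-off rank K(q)
    total = 0.0
    for item in ground_truth:
        if item in retrieved[:k]:
            total += retrieved.index(item) + 1   # 1-based rank
        else:
            total += 1.25 * k                    # penalty for a miss
    avr = total / ng                             # average rank
    mrr = avr - 0.5 - ng / 2                     # modified retrieval rank
    return mrr / (1.25 * k - 0.5 - 0.5 * ng)     # normalise to [0, 1]

def anmrr(queries):
    """queries: list of (retrieved_list, ground_truth_list) pairs."""
    gtm = max(len(gt) for _, gt in queries)
    return sum(nmrr(r, gt, gtm) for r, gt in queries) / len(queries)
```

By construction, NMRR is 0 for a perfect ranking and 1 when no relevant item appears within the cut-off; ANMRR averages this over all queries.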
  • asked a question related to Content Based Image Retrieval
Question
3 answers
What is the chi-square attack, and how do I perform a chi-square attack on least-significant-bit (LSB) image steganography?
Relevant answer
Answer
The chi-square attack is a simple and well-known method to test the robustness of a steganographic system. It relies on a statistical analysis of the stego image after information has been embedded with the LSB method, comparing it with the statistics expected of the original image. If the resulting measure is near zero, there is likely no hidden information in the image; if it is near one, the image likely holds embedded information.
Best regards,
Zeyad Safaa Younus Saffawi
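As a rough illustration of the statistic involved: LSB embedding tends to equalise the counts of each "pair of values" (2k, 2k+1) in the histogram, and the attack measures how close the histogram is to that equalised state. A sketch over a single 256-bin histogram (a real attack converts the statistic to a p-value with the chi-square CDF, usually over sliding windows of the image; the function name is mine):

```python
def chi_square_statistic(histogram):
    """Chi-square statistic over LSB pairs-of-values (PoVs).

    histogram -- 256-bin greyscale histogram. For each pair (2k, 2k+1),
    the expected count under LSB embedding is the pair average, so a
    SMALL statistic suggests embedded data (equalised pairs) while a
    LARGE one is typical of an unmodified natural image.
    """
    stat = 0.0
    for k in range(128):
        observed = histogram[2 * k]
        expected = (histogram[2 * k] + histogram[2 * k + 1]) / 2
        if expected > 0:
            stat += (observed - expected) ** 2 / expected
    return stat
```

A fully equalised histogram gives a statistic of 0 (consistent with embedding), while a skewed one yields a large value.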
  • asked a question related to Content Based Image Retrieval
Question
3 answers
I want to do a project on content-based image retrieval and apply it to medical images, especially echocardiography. I'd be very grateful if you could suggest a technique or any papers on this matter.
Relevant answer
Answer
Hello.
Kindly check the attachment. Hope that will be of some help to you.
Regards
Rebika Rai
  • asked a question related to Content Based Image Retrieval
Question
8 answers
I am working on content-based video retrieval. I am not able to find a proper database for my experiments. We have benchmark databases in image retrieval, for example the Corel datasets, MIT VisTex, Brodatz, STex, etc. In the same way, I need a video database that holds several categories, where each category contains similar videos.
Relevant answer
Answer
Dear Manisha ma'am,
I got some of the video datasets mentioned above, but I am not happy with them. Have you found any other datasets?
  • asked a question related to Content Based Image Retrieval
Question
10 answers
Based on octet patterns for each image: the octet pattern size is 57 * 59 = 3363 (octet 56 + magnitude 1 = 57), and there are 59 uniform patterns.
Relevant answer
Answer
Principal component analysis (PCA) and independent component analysis (ICA) are among the most common methods for dimensionality reduction.
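As a sketch of how PCA reduces such a 3363-dimensional pattern vector, assuming NumPy is available (this is the textbook eigendecomposition construction; libraries like scikit-learn wrap the same idea):

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project rows of X (n_samples x n_features) onto the top
    principal components: the directions of largest variance."""
    Xc = X - X.mean(axis=0)                  # centre the data
    cov = np.cov(Xc, rowvar=False)           # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :n_components] # largest-variance directions
    return Xc @ top                          # reduced representation
```

For a 3363-dimensional descriptor, one would typically keep enough components to explain, say, 95% of the variance (ratio of retained to total eigenvalues).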
  • asked a question related to Content Based Image Retrieval
Question
6 answers
A signal sample value is multiplied by 64; after the multiplication, the sample value becomes 0 or 255.
E.g. 17 x 64 = 1088, but the resultant value of the above process is 255. I can't understand what actually happened; kindly help me with this.
Relevant answer
Answer
I suppose that you multiplied the values in MATLAB, and that your signal samples are of type 'uint8'. In order to multiply signal samples properly, you have to convert them to type 'double'. I checked your example in the MATLAB Command Window. The listing is below:
>> a=uint8(17)
a=
17
>> 64*a
ans =
255
>> 64*double(a)
ans=
1088
I hope that this solves your problem.
Best regards,
Robert Wielgat
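For context, the same multiplication behaves differently across environments: MATLAB's uint8 arithmetic saturates at 255, while C-style (and NumPy) unsigned arithmetic wraps modulo 256. A tiny Python illustration of the two overflow policies (helper names are mine):

```python
def mul_saturate(value, factor, lo=0, hi=255):
    """MATLAB-style uint8 arithmetic: results clip to [0, 255]."""
    return max(lo, min(hi, value * factor))

def mul_wrap(value, factor, bits=8):
    """C/NumPy-style unsigned arithmetic: results wrap modulo 2**bits."""
    return (value * factor) % (1 << bits)
```

So 17 * 64 = 1088 becomes 255 under saturation (what MATLAB shows) and 64 under wrapping; converting to double first preserves the true 1088.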
  • asked a question related to Content Based Image Retrieval
Question
1 answer
Dear researchers,
I would like to retrieve relevant images from a collection of images using a vocabulary tree with inverted-index scoring. First, I generate a vocabulary tree for all database images in MATLAB. Next, I would like to create the inverted index file; my question is, how do I do that?
Best Regards
Khadidja BELATTAR
Relevant answer
Answer
The inverted index should map each unique word in the vocabulary tree to the list of ids of the database images containing it, sorted in increasing order.
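A minimal sketch of that mapping in Python (the dict-of-sorted-lists shape is the essential part; the function name is mine):

```python
from collections import defaultdict

def build_inverted_index(image_words):
    """image_words: {image_id: iterable of visual-word ids for that image}.
    Returns {word_id: sorted list of image ids containing the word}."""
    index = defaultdict(set)
    for image_id, words in image_words.items():
        for w in words:
            index[w].add(image_id)            # set dedupes repeated words
    return {w: sorted(ids) for w, ids in index.items()}
```

At query time, the candidate images are those appearing in the posting lists of the query's visual words; keeping each list sorted makes merging and intersecting the lists cheap.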
  • asked a question related to Content Based Image Retrieval
Question
5 answers
I am using C# with ASP.NET for re-ranking images based on content retrieval.
The concept of my work is as follows:
1. Extract visual features for all images.
2. Find the similarity between a query image and the rest of the images.
3. Finally, re-rank the images based on their similarity to the query image.
Now, I have chosen FCTH & SURF for extracting the visual features of the images, but I need another method to extract visual features.
What method is best for this purpose?
Relevant answer
Answer
Mustafa,
As Khan Muhammad said, the current trend in classification and detection on images is driven by deep learning, precisely convolutional neural networks. If you have an abundance of training data, go ahead with models like AlexNet; if you do not have sufficient data, look into the pre-trained CaffeNet and retrain it with your data so that it adjusts to your problem domain.
You can train models with the Caffe/Theano frameworks and use their wrappers in your C# application for the inference (testing) phase.
Hope this helps.
kind regards,
Ankit Tripathi
  • asked a question related to Content Based Image Retrieval
Question
3 answers
Hello all,
I would like CBIR papers that worked on the Corel dataset.
  • asked a question related to Content Based Image Retrieval
Question
1 answer
I used the PCA method for extracting image features. I need a good algorithm for re-ranking images based on user interaction via one click.
Relevant answer
Answer
Hello dear,
Have a good day; I hope you are well. Please use this code and change the images.
  • asked a question related to Content Based Image Retrieval
Question
4 answers
Hello,
Honestly, I don't know whether there is a difference between edge-based matching and contour-based descriptors. If there is, what are the differences?
Thank you
Relevant answer
Answer
Take, for example, the contour of a circle. It can be made of dots. An edge detector would recognize each dot as a very short edge, but this is not what we want. To mitigate the problem, one could apply blur and then a threshold; now the edge detector would yield something useful.
A different approach, without an edge detector, would be a snake: a kind of springy lasso that tries to minimize mechanical energy by adapting to the contour with the help of its shape memory.
What would also work is a Hough transformation with an accumulator space of 3 dimensions for x, y, radius. That method is called the generalized Hough transformation, but for every type of contour a specialized accumulator and a way of filling it must be defined.
Regards,
Joachim
  • asked a question related to Content Based Image Retrieval
Question
1 answer
I am looking for a list of papers published on content-based image retrieval for medical stereograms/medical stereo images. If anyone knows of any related paper on this topic, kindly let me know.
Relevant answer
Answer
The h-index can be calculated automatically in Web of Science and Scopus or manually in other databases that provide citation information (e.g. SciFinder, PsychINFO, Google Scholar). The index is based on a list of publications ranked in descending order by the number of citations these publications received. The value of h is equal to the number of papers (N) in the list that have N or more citations.
Before you can calculate your h-index, you will need a complete publication list.
  • asked a question related to Content Based Image Retrieval
Question
1 answer
Hi,
I developed an interactive system for content-based skin lesion image retrieval, and I would like to evaluate its performance on a database of 1097 skin lesion images (melanoma and benign).
The problem is that I do not have ground truth, meaning, for a given query, the set of images that are similar to it. I must create it myself, but how do I do that? I have no idea.
Any suggestions are welcome.
Thank you a lot.
Relevant answer
Answer
You can use certain tools for that, or you can ask your colleagues and friends to draw the outline or border of the regions, then integrate all the outlines to form the ground truth. I do not remember the name of a tool for drawing outlines manually; you can google it.
  • asked a question related to Content Based Image Retrieval
Question
8 answers
What are the criteria that I should include in a dermatological image's quality score?
How do you classify them as good or bad quality?
Relevant answer
Answer
  • asked a question related to Content Based Image Retrieval
Question
4 answers
I am planning to work with lung and spine images in the field of content-based image retrieval (CBIR).
Can anyone tell me the research challenges in lung image retrieval?
Relevant answer
Answer
It depends what you are looking for. The specific goal of CBIR for the lung (x-ray or CT) has to come from the clinical side. For instance, the challenge in finding similar cases of pulmonary nodules is mainly the low discrimination between the suspicious mass and the surrounding regions, especially in x-ray. In such cases some pre-processing is required first to amplify the nodule boundary against its surroundings; the actual search can be executed afterwards.
  • asked a question related to Content Based Image Retrieval
Question
5 answers
I need MATLAB source code for CBIR using relevance feedback; any advice?
Relevant answer
Answer
Thank you a lot, Mr. France.
I would like to execute vl_sift, but I don't know how.
MATLAB shows the error message "Attempt to execute SCRIPT vl_sift as a function".
  • asked a question related to Content Based Image Retrieval
Question
6 answers
I am planning to do my research in content-based medical image retrieval. I have studied many feature extraction algorithms. Can anyone suggest particular image modalities and recent research algorithms to concentrate on?
Relevant answer
Answer
From the viewpoint of textural analysis, the class of medical images can be considered a grand class covering a wide spectrum of textural images: some images can be effectively characterized in terms of edge density, others by shading characteristics, others by color spectral behavior, others by their (uniform or non-uniform) geometrical behavior, and so on.
So I think it is better to identify the types of medical sub-classes you want to work on, then choose the proper type(s) of features on which to base the retrieval decisions.
Regards...
  • asked a question related to Content Based Image Retrieval
Question
4 answers
I have developed a content-based image retrieval algorithm using local features. I have computed precision and recall for my algorithm and for other algorithms (for comparison). I now need to do statistical testing on the results of the different algorithms to find the best one.
In terms of results I have precision and recall. Please suggest what statistical tests I can apply to pick the best of the algorithms.
Thanks.
Relevant answer
Answer
The ANMRR method from the MPEG-7 standard. You'll need the ground truth of the images.
Introduction to MPEG-7: Multimedia Content Description Interface, B. S. Manjunath et al., ISBN: 978-0-471-48678-7, pp. 183-185.
Best
  • asked a question related to Content Based Image Retrieval
Question
6 answers
Attached is an angiogram (digital subtraction angiography image). I want to describe the content of the image in order to retrieve similar images from an angiogram database.
Any suggestions about feature extraction algorithms? Any MATLAB resources on this?
Relevant answer
Answer
In order to design features for image retrieval, I think it's very important to first understand what makes two of these images similar. I have no knowledge of these types of images so I can't help you there, but there could be many possibilities, such as image histograms, shapes of the branches, number of branches, etc. It's always better to understand the basic idea of what makes these images similar and then design features to encode that information.
  • asked a question related to Content Based Image Retrieval
Question
2 answers
Could you please explain the difference between a simple distance (without learning) and a learned distance?
I would like to learn a distance for content-based image retrieval, but I don't know how to do that.
Is feature selection one of the techniques used for learning the distance?
Thank you.
Best Regards.
Khadidja BELATTAR
Relevant answer
Answer
A simple distance method for CBIR is a predefined metric used to measure the similarity between a query image and the images in a database, e.g. Euclidean, Manhattan, chi-square, Mahalanobis, etc.
Learned distance methods, on the other hand, attempt to find the metric that achieves the best query result; the distance is learned with specialized algorithms such as Large Margin Nearest Neighbor (LMNN) or Neighbourhood Components Analysis (NCA).
I found a new metric which is invariant to data scale, noise and outliers; it is well suited to CBIR systems, and a student of mine is getting excellent results using this distance. You may find the paper at:
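To make the distinction concrete: a learned metric is often parameterised as a Mahalanobis-style distance, where a matrix M is fitted to labelled data (by LMNN, NCA, etc.) instead of being fixed in advance. A minimal NumPy sketch (the matrices below are hand-picked for illustration, not learned):

```python
import numpy as np

def euclidean(u, v):
    """Simple, predefined distance: no learning involved."""
    return float(np.linalg.norm(u - v))

def mahalanobis(u, v, M):
    """Distance under a positive semi-definite matrix M:
    d(u, v) = sqrt((u - v)^T M (u - v)).
    With M = I this reduces to the Euclidean distance; metric-learning
    algorithms fit M so that same-class images end up close together."""
    d = u - v
    return float(np.sqrt(d @ M @ d))
```

In a learned metric, M stretches or shrinks feature axes (and can rotate them) so that the distance reflects semantic similarity rather than raw feature differences.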
  • asked a question related to Content Based Image Retrieval
Question
9 answers
I am working on content-based skin lesion image retrieval. These images suffer from black-frame artifacts; attached you will find an example image.
I remove the black frame by applying a geometric shape (circle) to the image, which reduces the number of pixels.
1. So, if I count the percentage of gray levels in the skin lesion, should I take the total number of pixels of the original image or of the cropped image?
2. Is there any effect on the other extracted features (texture features, for instance) when I crop the original skin lesion image?
3. What is the difference between relative and absolute image features?
4. Is there a better solution to remove the black frame without cropping the image?
Thank You
Khadidja BELATTAR
Best Regards.
Relevant answer
Answer
Mohamed,
Yes, that is the simple code to apply a threshold.
Did it help you?
  • asked a question related to Content Based Image Retrieval
Question
14 answers
Greetings everyone,
I want to build a content-based image retrieval (CBIR) system; I have segmented angiogram images (the attached image shows the original image and the segmented one).
I need advice or an example on how to build an indexed image database using image features, and then how to retrieve images from that database given a query image.
Any simple and clear example to start with?
Relevant answer
Answer
Dear Nisreen
You need to describe each image by a feature vector; those features should be invariant to scale, rotation and translation. You may perform the following steps:
1- Extract some features from the query image and save them in an array called QArray.
2- For each image in the database, extract the same features as in step 1, in the same order, and save them in an array called ImageArray_i.
3- For each ImageArray_i, calculate the similarity with QArray using a similarity (distance) function.
4- Define a threshold t.
5- For each i, if the distance from QArray to ImageArray_i < t, then retrieve image i; else do nothing.
====
What are the features?
There are hundreds of them. You may use:
1- The color histogram (probabilities of the RGB colors in the image).
2- The intensity histogram (probabilities of (R+G+B)/3 in the image).
3- Statistical features, such as mean, mode, median, stdev, moments, uniformity, entropy, etc.
4- Hundreds more; read about them.
====
What are the similarity measures?
1- Euclidean distance.
2- Manhattan distance.
3- Hassanat distance; this one was invented by me, and it improves on the previous ones because it is invariant to outliers. You may find the paper at: https://www.researchgate.net/publication/264995324_Dimensionality_Invariant_Similarity_Measure
4- Tens of other distances.
Contributions in such research depend highly on the features used.
For your first move, you may start with the color features.
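The five steps above can be sketched end to end in a few lines (pure Python; intensity histograms as the feature, Manhattan distance, and a hand-picked threshold; all names and values are illustrative):

```python
def intensity_histogram(pixels, bins=8):
    """Normalised intensity histogram of a flat list of 0-255 pixel values."""
    hist = [0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1
    return [h / len(pixels) for h in hist]

def manhattan(u, v):
    """Step 3: a simple similarity (distance) function."""
    return sum(abs(a - b) for a, b in zip(u, v))

def retrieve(query_pixels, database, threshold=0.5):
    """Steps 1-5: database maps image_id -> pixel list. Returns the
    ids whose histogram distance to the query is below the threshold,
    best match first."""
    q = intensity_histogram(query_pixels)        # step 1 (query features)
    hits = [(manhattan(q, intensity_histogram(px)), img_id)
            for img_id, px in database.items()]  # steps 2-3
    return [img_id for d, img_id in sorted(hits) if d < threshold]  # steps 4-5
```

In a real system the "pixel lists" would come from decoded images and the feature vectors would be precomputed and stored in the index, but the retrieval loop is the same.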
  • asked a question related to Content Based Image Retrieval
Question
5 answers
How can clustering of images be done after feature extraction (color, shape, texture)?
Relevant answer
Answer
Please see some of the papers listed on my page. Our research results indicate that you need a large number of clusters for images with complex colour and texture variations. This finding coincides with the so-called "bag of words" approach to CBIR.
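Once the color/shape/texture features are extracted, each image is just a feature vector and any vector clustering applies. A minimal Lloyd's-iteration (k-means) sketch with NumPy (a real project would typically use scikit-learn's KMeans; k, the seed and the iteration count here are arbitrary):

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Cluster the rows of X into k groups with plain Lloyd iterations."""
    rng = np.random.default_rng(seed)
    # initialise centroids with k distinct data points
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # assign each vector to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each centroid to the mean of its assigned vectors
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids
```

For bag-of-words CBIR, the centroids become the "visual words", and each image is then re-described by a histogram of which words its local descriptors fall into.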
  • asked a question related to Content Based Image Retrieval
Question
3 answers
Hi,
I would like to understand well the principle of manifold-ranking-based image retrieval, especially the use of positive and negative samples, knowing that the images of the database are classified into different classes.
As I understand it, the manifold-ranking-based image retrieval algorithm is as follows:
1. Assign a positive ranking score (1) to the query and zero to the remaining points of the image database.
2. Generate a weighted graph:
            -Compute K nearest neighbors for each image
            -Connect 2 images with edge if they are neighbors
            -Form the  affinity matrix to define the edge weights
            -Normalise the  affinity matrix
3. Spread the scores of all images to their neighbors via the weighted graph until the images with ranking score zero get a non-zero ranking score.
4. Rank the images according to their ranking scores (largest at the top).
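Steps 1-4 above correspond to the classic iteration f ← αSf + (1-α)y, where y is the query indicator and S the symmetrically normalised affinity matrix. A small NumPy sketch of that spreading step (α and the iteration count are illustrative; the paper's values may differ):

```python
import numpy as np

def manifold_rank(W, query_idx, alpha=0.5, iters=100):
    """Iterative manifold ranking on an affinity matrix W.

    Seeds the query with score 1 (step 1), symmetrically normalises W
    (step 2), spreads scores through the graph (step 3) and returns the
    final scores, which only need sorting for step 4.
    """
    n = len(W)
    y = np.zeros(n)
    y[query_idx] = 1.0                      # step 1: seed the query
    d = W.sum(axis=1)                       # node degrees
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(np.maximum(d, 1e-12)), 0.0)
    S = d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]   # D^-1/2 W D^-1/2
    f = y.copy()
    for _ in range(iters):                  # step 3: spread the scores
        f = alpha * (S @ f) + (1 - alpha) * y
    return f
```

On a simple chain graph the scores decay with graph distance from the query, which is the intuition behind step 4's ranking.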
Now I present the manifold-ranking-based image retrieval algorithm using relevance feedback (positive and negative images):
5. Active learning selects the positive and negative images.
6. Rerun the manifold-ranking-based image retrieval algorithm.
Q.1 Are the stated algorithms correct?
Q.2 When I execute different queries from the same class, I get the same retrieved images but with a different ranking score for each query. Could you explain why?
Q.3 How do I compute the effectiveness of the image retrieval?
Attached, the concerned paper.
Thank you a lot.
Best Regards.
Khadidja Belattar.
Relevant answer
Answer
This is a good question with many possible answers.
A good place to start looking for an answer is in
J.-M. Chang, Classification of the Grassmannians: Theory and Applications, Ph.D. thesis, Colorado State University, 2008:
See Section 3.1, starting on page 19: Matrix representation of points on the Grassmann manifold. 
I have to break off at this point but will return to this interesting question a bit later.
  • asked a question related to Content Based Image Retrieval
Question
7 answers
Which can later be used for SVM classification, with segmentation involved as well.
Relevant answer
Answer
(Unfortunately I could not attach the doc file containing the following text. Therefore I just pasted the text here.)
• In applications like face recognition, where a vector (or a matrix) is extracted from the whole sample image:
1. Extract histogram (first order or second order histogram) of the image:
- First order histogram (MATLAB):
  im = imread('tire.tif');
  nf = 20; featureVec = imhist(im, nf); % with any arbitrary value of nf
  % OR
  featureVec = imhist(im); % with default nf=256
- Second order histogram, here GLCM (MATLAB):
  im = imread('tire.tif');
  glcm = graycomatrix(im);
  temp = graycoprops(glcm);
  featureVec(1) = temp.Contrast;
  featureVec(2) = temp.Correlation;
  featureVec(3) = temp.Energy;
  featureVec(4) = temp.Homogeneity;
2. This feature vector can be used in any classification/segmentation framework.
• In applications like image (pixel) classification, where a vector (or a matrix) is extracted for each pixel of the sample image:
1. Extract histogram (first order or second order histogram) of a neighborhood of each pixel (e.g. a square window around the pixel):
- First order histogram (MATLAB):
  im = imread('tire.tif');
  [Nx, Ny] = size(im); % suppose the image is a single-band image
  w = 3; % neighborhood window --> (2w+1)-by-(2w+1)
  nf = 50;
  extIm = padarray(im, [w, w], 'symmetric');
  featureVec = zeros(Nx, Ny, nf); % memory allocation
  for x = 1+w:Nx+w
      for y = 1+w:Ny+w
         WIN = extIm(x-w:x+w, y-w:y+w);
         featureVec(x-w, y-w, :) = imhist(WIN, nf);
     end
  end
- Second order histogram, here GLCM (MATLAB):
   im = imread('tire.tif');
   [Nx, Ny] = size(im); % suppose the image is a single-band image
   w = 3; % neighborhood window --> (2w+1)-by-(2w+1)
   extIm = padarray(im, [w, w], 'symmetric');
   nf = 4; % graycoprops gives 4 features
   featureVec = zeros(Nx, Ny, nf);
   for x = 1+w:Nx+w
      for y = 1+w:Ny+w
         WIN = extIm(x-w:x+w, y-w:y+w);
         glcm = graycomatrix(WIN);
         temp = graycoprops(glcm);
         featureVec(x-w, y-w, 1) = temp.Contrast;
         featureVec(x-w, y-w, 2) = temp.Correlation;
         featureVec(x-w, y-w, 3) = temp.Energy;
         featureVec(x-w, y-w, 4) = temp.Homogeneity;
      end
   end
2. These feature vectors can be used in any classification/segmentation framework.
  • asked a question related to Content Based Image Retrieval
Question
6 answers
I am doing my work on MRI brain images for image retrieval...
Relevant answer
  • asked a question related to Content Based Image Retrieval
Question
3 answers
Relevance feedback (RF) in CBIR consists of iteratively selecting, from the results returned by the image retrieval process, the positive and negative images with respect to the user's query image, until the retrieved images improve.
Attached is CBIR-with-RF source code written in MATLAB. I would first like to understand how it does the relevance feedback and how it computes the relevance score, and then correct the errors so I can use the code in my context.
Thank you a lot for your help.
Relevant answer
Answer
Hi Abdul Hameed,
Thank you for your response.
I would like to know how the relevance feedback works in the attached code.
  • asked a question related to Content Based Image Retrieval
Question
6 answers
Other than precision vs recall, are there any other performance metrics?
Relevant answer
Answer
In my research field, Average Precision (for each query) and Mean Average Precision (for all queries) are the most widely used. They reflect both precision and recall information.
If we can regard the "retrieval" method as a solution to "recognition", we can also use the ROC curve and/or AUC as the measure. These also reflect both precision and recall information.
Other measures include discounted cumulative gain, mean reciprocal rank, etc., but I have rarely seen them actually used (at least in my research field).
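As a concrete sketch of the AP/mAP computation (note that conventions differ: this version averages precision over the relevant items actually retrieved, while some definitions divide by the total number of relevant items in the collection; the relevance lists are invented for illustration):

```python
def average_precision(relevant_flags):
    """AP for one query: mean of precision@k over the ranks k at which a
    relevant item was retrieved (relevant_flags is the ranked 0/1 list)."""
    hits, precisions = 0, []
    for k, rel in enumerate(relevant_flags, start=1):
        if rel:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(precisions) if precisions else 0.0

def mean_average_precision(queries):
    """mAP: AP averaged over all queries."""
    return sum(average_precision(q) for q in queries) / len(queries)

# Two example queries: 1 = relevant result at that rank, 0 = irrelevant
q1 = [1, 0, 1, 0]   # AP = (1/1 + 2/3) / 2 = 5/6
q2 = [0, 1]         # AP = (1/2) / 1 = 1/2
```

With these two ranked lists, mAP is simply (5/6 + 1/2) / 2 = 2/3.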
  • asked a question related to Content Based Image Retrieval
Question
8 answers
My aim is twofold:
- computing the similarity measure, through a neural network, from the relevant and non-relevant skin lesion image database;
- using this measure to retrieve a set of images.
So, how can I learn this measure, and how can I use it in retrieval?
Relevant answer
Answer
I am not sure if I understand you correctly, but I am afraid you are confusing some things.
A similarity measure is not the outcome of the classification. The result of your classifier is a class assignment for each instance / sample. A similarity measure can be used for this class assignment. If I understand you correctly, you have a typical 2-class classification scenario: you have a set of samples and you want to decide whether the samples belong to class A or class B.
In a nutshell:
- calculate the features for each of your samples
- this results in a n-dimensional feature vector as a representation of each sample in feature space
- find the features which are most discriminative and weight them higher
- find a rule (e.g. a similarity measure) to distinguish between class A and B
There are lots of approaches for similarity measures. A very simple example: if you have two feature vectors, you can compare them by calculating the Euclidean distance (or any other distance measure) between them.
There are lots of approaches for weighting features; a very simple one is to use weighting factors. For example, if you have two features 'a' and 'b' and 'b' is more discriminative than 'a', you could use a weight of 0.5 for feature 'a' and 2.0 for feature 'b' when making the decision. This would cause feature 'b' to have 4 times the impact on the classification decision compared to feature 'a'.
A small and very simple example:
feature vector 1: {3, 2}
feature vector 2: {1, 5}
squared Euclidean distance between 1 and 2 (the final square root is omitted for simplicity): (3-1)^2 + (2-5)^2 = 4 + 9 = 13
distance = 13
now we weight feature 'a' with 0.5 and 'b' with 2.0: 0.5((3-1)^2) + 2.0((2-5)^2) = 2+18 = 20
distance = 20
I hope that helps a bit ...
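The toy example above can be reproduced in a few lines of Python; note that it uses the squared Euclidean distance, i.e., without the final square root:

```python
def weighted_sq_dist(u, v, weights=None):
    """Squared Euclidean distance between feature vectors u and v,
    with an optional per-feature weight."""
    if weights is None:
        weights = [1.0] * len(u)
    return sum(w * (a - b) ** 2 for w, a, b in zip(weights, u, v))

f1, f2 = [3, 2], [1, 5]
d_plain = weighted_sq_dist(f1, f2)                  # (3-1)^2 + (2-5)^2 = 13
d_weighted = weighted_sq_dist(f1, f2, [0.5, 2.0])   # 0.5*4 + 2.0*9 = 20
```

In a retrieval setting, images would then be ranked by this distance between the query's feature vector and each database image's feature vector.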
  • asked a question related to Content Based Image Retrieval
Question
1 answer
I must use relevance feedback for skin lesion images to select the relevant and irrelevant patterns. Can I do that? And how?
Could you please correct me if I am wrong:
Non-discriminative features are the features common to 2 classes or more.
Discriminative features are the features not shared between 2 classes or more.
What is the difference between the most discriminative and the most dominant feature?
Thank you in advance.
Relevant answer
Answer
Hi,
I'm not sure about relevance feedback, but the "most discriminative" feature should be a feature that helps tell the difference between the target classes. The "most dominant" feature would be the one that occurs the most. The most dominant may not be the most discriminative: for instance, in document classification the most dominant features in a bag-of-words representation may be stop words like "the", "and", "it", but these are not discriminative, as most documents contain them; whereas the feature "tennis" may be highly discriminative in telling whether a document is about the game of tennis, yet occur less often than the stop words.
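The stop-word example can be made concrete with a tiny bag-of-words count in Python (the documents and labels below are invented for illustration):

```python
from collections import Counter

docs = [
    ("the game of tennis and the net", "tennis"),
    ("the tennis match and the serve", "tennis"),
    ("the stock market and the bank", "finance"),
    ("the bank raised the rate and it fell", "finance"),
]

# Most dominant feature: the word with the highest overall count
counts = Counter(w for text, _ in docs for w in text.split())
dominant = counts.most_common(1)[0][0]   # "the": frequent in every class

# A discriminative feature appears (almost) only in one class
tennis_words = set(w for text, lab in docs if lab == "tennis" for w in text.split())
finance_words = set(w for text, lab in docs if lab == "finance" for w in text.split())
discriminative = tennis_words - finance_words   # includes "tennis", not "the"
```

Real feature-selection methods formalize this contrast with scores such as TF-IDF, mutual information, or chi-squared, but the intuition is the same.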
  • asked a question related to Content Based Image Retrieval
Question
1 answer
Is there a method for hair removal? If not, can I do it manually?
Relevant answer
Answer
As far as I understand, you have images of the skin, say from the arms, not from the scalp.
Three methods come to mind for removing the hair: outlier detection (statistics), edge detection (the Canny algorithm), or clustering in RGB histograms: hairs should cause small sharp peaks. After recognition, you can replace the hairs by an interpolated hue from the nearby environment.
A more sophisticated approach would try to identify each hair as an object and could follow the trajectories of the hairs to do the removal.
Regards,
Joachim
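The outlier-detection route can be sketched in pure Python on a toy grayscale image: pixels much darker than their local median are flagged as outliers and replaced by that median, a crude stand-in for the interpolation step described above (the threshold, function names, and image are arbitrary):

```python
def local_median(img, x, y):
    """Median of the 3x3 neighborhood (borders are excluded by the caller)."""
    vals = sorted(img[j][i] for j in (y - 1, y, y + 1) for i in (x - 1, x, x + 1))
    return vals[len(vals) // 2]

def remove_dark_outliers(img, thresh=50):
    """Replace pixels much darker than their local median: a crude
    stand-in for removing dark hairs on bright skin."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            med = local_median(img, x, y)
            if med - img[y][x] > thresh:
                out[y][x] = med
    return out

# Bright "skin" (200) crossed by a dark one-pixel-wide "hair" (20)
img = [[200] * 5 for _ in range(5)]
for x in range(5):
    img[2][x] = 20
cleaned = remove_dark_outliers(img)   # interior hair pixels become 200
```

A practical implementation would use a larger structuring element and proper inpainting, but the principle of detect-then-interpolate is the same.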
  • asked a question related to Content Based Image Retrieval
Question
3 answers
I am working on pattern detection of dermatological images and I would like to know how to extract and match them.
Relevant answer
Answer
I think the previous two references provided by @Christos P Loizou and @Lucia Ballerini are new and excellent to start with.
  • asked a question related to Content Based Image Retrieval
Question
6 answers
I am doing work in CBIR
Relevant answer
Answer
The question is so general that it is difficult to provide a specific answer. For example, neural networks have been widely used for years, while they are still being enhanced (for example: convolutional NNs). Among the most recent image classification approaches, sparse representation has opened a demanding new research area. The following paper is a recent one:
G. Shenghua, I. W. Tsang, M. Yi, "Learning Category-Specific Dictionary and Shared Dictionary for Fine-Grained Image Categorization," IEEE Transactions on Image Processing, vol. 23, no. 2, pp. 623-634, Feb. 2014.
I think classification based on sparse representation is an easier choice for proposing innovations and enhancements, compared to NN and KSVM, which are somewhat saturated.
  • asked a question related to Content Based Image Retrieval
Question
12 answers
I am working in image and video retrieval. What are the best methods for finding key frames in a video? By "best" I mean efficient, with low computational cost and low complexity.
Note: a key frame is a frame that captures the rich content of the video and represents the whole video. There can be more than one key frame for a single video.
Relevant answer
Answer
There are a number of ways to extract key frames from a video. You can do so for example in MATLAB by:
1. Calculate the histogram difference between two consecutive frames. If the difference is greater than a threshold (selected by the user), then consider the frame a key frame. Do this for all frames in the video.
2. Another way is using image entropy. Calculate the entropy of each frame. Whenever the difference between two consecutive frames is more than a threshold, mark this frame as a shot boundary. Then take, for example, the frames having maximum entropy.
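Method 1 can be sketched language-independently; here frames are flat lists of gray values, and a frame is marked as a key frame when its histogram differs from the previous one by more than a threshold (the bin count, threshold, and frame data are illustrative):

```python
def histogram(frame, bins=4, maxval=256):
    """Gray-level histogram of a flat list of pixel values in [0, maxval)."""
    h = [0] * bins
    for v in frame:
        h[v * bins // maxval] += 1
    return h

def key_frames(frames, thresh):
    """Indices of frames whose histogram L1-distance to the previous
    frame exceeds thresh; frame 0 always starts a shot."""
    keys = [0]
    for i in range(1, len(frames)):
        d = sum(abs(a - b) for a, b in
                zip(histogram(frames[i - 1]), histogram(frames[i])))
        if d > thresh:
            keys.append(i)
    return keys

# Three near-identical dark frames, then a cut to a bright frame
frames = [[10, 12, 14, 16], [11, 12, 15, 16], [10, 13, 14, 17],
          [200, 210, 220, 230]]
```

With a threshold of 2, only frames 0 and 3 are selected, i.e., the cut is detected while small frame-to-frame noise is ignored.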
  • asked a question related to Content Based Image Retrieval
Question
2 answers
Data set of an image.
Relevant answer
Answer
Thank you so much. I will try to do it.
  • asked a question related to Content Based Image Retrieval
Question
3 answers
Can anybody give me some resources on circular harmonics? Thank you!
Relevant answer
Answer
Circular harmonics are a solution to Laplace's equation in polar coordinates. This is the planar counterpart of the spatial solution provided by spherical harmonics (http://en.wikipedia.org/wiki/Spherical_harmonics ). A nice introduction to circular solutions is given by Christopher Baird of U. Mass-Lowell in his EM lecture notes ( http://faculty.uml.edu/cbaird/95.657(2011)/Lecture4.pdf ).
  • asked a question related to Content Based Image Retrieval
Question
2 answers
If so, could you please send it to me: xl890727@outlook.com
Relevant answer
Answer
Dear Latch Shun, I am not sure how helpful the following links will be for you. With good wishes. Good luck!
  • asked a question related to Content Based Image Retrieval
Question
2 answers
I'm working on a visual semantic classification problem and now I need to set up a classification experiment with SVM (using LIBSVM). I need to classify my dataset into n classes and also detect outliers. Is there any way to do it? If so, do I need to train that novelty class with negative examples?
Relevant answer
Answer
Using LIBSVM, you only get the class prediction and accuracy, and you need to provide training examples for each class. For detecting outliers without negative examples, LIBSVM also offers a one-class SVM (option -s 2), which is trained on positive examples only.
  • asked a question related to Content Based Image Retrieval
Question
3 answers
How can I extract visual concepts from a specific image? Suppose an image contains mountains, water, and birds. What kind of extraction tool is available that I can use to extract these concepts? Kindly refer me to some research papers in which the author has extracted such concepts.
Relevant answer
Answer
Have a first start by looking at the work of the team of Prof. Stiefelhagen at KIT: http://cvhci.anthropomatik.kit.edu/research/multimedia-analysis